Compare commits

...

301 Commits

Author SHA1 Message Date
ccurme
c74dfff836 deepseek: temporarily bypass tests (#30423)
Deepseek infra is not stable enough to get through integration tests.

In the previous two attempts, two tests timed out; both pass locally.
2025-03-21 17:08:35 +00:00
ccurme
7147903724 deepseek: release 0.1.3 (#30422) 2025-03-21 16:39:50 +00:00
Andras L Ferenczi
b5f49df86a partner: ChatDeepSeek on openrouter not returning reasoning (#30240)
The DeepSeek model does not return reasoning when hosted on OpenRouter
(Issue [30067](https://github.com/langchain-ai/langchain/issues/30067)).

The following code did not return reasoning:

```python
import os

from langchain_deepseek import ChatDeepSeek

llm = ChatDeepSeek(
    model="deepseek/deepseek-r1:nitro",
    api_base="https://openrouter.ai/api/v1",
    api_key=os.getenv("OPENROUTER_API_KEY"),
)
messages = [
    {"role": "system", "content": "You are an assistant."},
    {"role": "user", "content": "9.11 and 9.8, which is greater? Explain the reasoning behind this decision."}
]
response = llm.invoke(messages, extra_body={"include_reasoning": True})
print(response.content)
print(f"REASONING: {response.additional_kwargs.get('reasoning_content', '')}")
print(response)
```

The fix is to extract reasoning from
`response.choices[0].message["model_extra"]` and from
`choices[0].delta["reasoning"]`, and to place it in the response's
`additional_kwargs`. The change is really just the addition of a couple
of one-line if statements.
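
Roughly, the added guards look like the following sketch (the helper name
and exact attribute access are illustrative, not the actual
langchain-deepseek source):

```python
from typing import Any


def _set_reasoning(message_obj: Any, additional_kwargs: dict) -> None:
    """Illustrative only: copy an OpenRouter-style reasoning field into
    additional_kwargs, mirroring the fields named in this PR."""
    model_extra = getattr(message_obj, "model_extra", None) or {}
    if "reasoning" in model_extra:
        additional_kwargs["reasoning_content"] = model_extra["reasoning"]
```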

---------

Co-authored-by: andrasfe <andrasf94@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-21 16:35:37 +00:00
Vadym Barda
4852ab8d0a core[patch]: more tests for trim_messages (#30421) 2025-03-21 16:19:52 +00:00
ccurme
e8e3b2bfae ollama: release 0.3.0 (#30420) 2025-03-21 15:50:08 +00:00
Jojo
8f300740ed docs: fix several typos in docs/docs/how_to/split_html.ipynb (#30407)
Fix several typos in docs/docs/how_to/split_html.ipynb
* `structered` should be `structured`
* `signifcant` should be `significant`
* `seperator` should be `separator`
2025-03-21 11:46:26 -04:00
Jojo
c77ee99980 docs: fix typo in chat_history.ipynb (#30406)
`peristence` should be `persistence`
2025-03-21 11:45:52 -04:00
Jojo
f657b19a24 docs: Fix typo in chat_history.ipynb (#30405)
`repsonse` should be `response`
2025-03-21 11:45:31 -04:00
Bob Merkus
5700646cc5 ollama: add reasoning model support (e.g. deepseek) (#29689)
# Description
This PR adds reasoning model support for `langchain-ollama` by
extracting reasoning token blocks, like those used in deepseek. It was
inspired by
[ollama-deep-researcher](https://github.com/langchain-ai/ollama-deep-researcher),
specifically the parsing of [thinking
blocks](6d1aaf2139/src/assistant/graph.py (L91)):
```python
  # TODO: This is a hack to remove the <think> tags w/ Deepseek models 
  # It appears very challenging to prompt them out of the responses 
  while "<think>" in running_summary and "</think>" in running_summary:
      start = running_summary.find("<think>")
      end = running_summary.find("</think>") + len("</think>")
      running_summary = running_summary[:start] + running_summary[end:]
```

This notes that it is very hard to prompt the reasoning block away, but
we actually want the model to reason in order to increase model
performance. This implementation extracts the thinking block, so the
client can still expect a proper message to be returned by `ChatOllama`
(and can use the reasoning content separately when desired).

This implementation takes the same approach as
[ChatDeepseek](5d581ba22c/libs/partners/deepseek/langchain_deepseek/chat_models.py (L215)),
which adds the reasoning content to
`chunk.additional_kwargs["reasoning_content"]`:
```python
  if hasattr(response.choices[0].message, "reasoning_content"):  # type: ignore
      rtn.generations[0].message.additional_kwargs["reasoning_content"] = (
          response.choices[0].message.reasoning_content  # type: ignore
      )
```

This should probably be handled upstream in ollama + ollama-python, but
this seems like a reasonably effective solution. This is a standalone
example of what is happening:

```python
from collections.abc import AsyncIterator
from typing import Any

from langchain_core.language_models import BaseChatModel
from langchain_core.messages import BaseMessage, BaseMessageChunk
from langchain_core.runnables import RunnableConfig


async def deepseek_message_astream(
    llm: BaseChatModel,
    messages: list[BaseMessage],
    config: RunnableConfig | None = None,
    *,
    model_target: str = "deepseek-r1",
    **kwargs: Any,
) -> AsyncIterator[BaseMessageChunk]:
    """Stream responses from Deepseek models, filtering out <think> tags.

    Args:
        llm: The language model to stream from
        messages: The messages to send to the model

    Yields:
        Filtered chunks from the model response
    """
    # check if the model is deepseek based
    if (llm.name and model_target not in llm.name) or (
        hasattr(llm, "model") and model_target not in llm.model
    ):
        async for chunk in llm.astream(messages, config=config, **kwargs):
            yield chunk
        return

    # Yield with a buffer, upon completing the <think></think> tags, move them to the reasoning content and start over
    buffer = ""
    async for chunk in llm.astream(messages, config=config, **kwargs):
        # start or append
        if not buffer:
            buffer = chunk.content
        else:
            buffer += chunk.content if hasattr(chunk, "content") else chunk

        # Process buffer to remove <think> tags
        if "<think>" in buffer or "</think>" in buffer:
            if hasattr(chunk, "tool_calls") and chunk.tool_calls:
                raise NotImplementedError("tool calls during reasoning should be removed?")
            if "<think>" in chunk.content or "</think>" in chunk.content:
                continue
            chunk.additional_kwargs["reasoning_content"] = chunk.content
            chunk.content = ""
        # upon block completion, reset the buffer
        if "<think>" in buffer and "</think>" in buffer:
            buffer = ""
        yield chunk

```

# Issue
Integrating reasoning models (e.g. deepseek-r1) into existing
LangChain-based workflows is hard due to the thinking blocks that are
included in the message contents. To avoid this, we match the `ChatOllama`
integration with `ChatDeepseek` and return the reasoning content inside
`message.additional_kwargs["reasoning_content"]` instead.
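
On the consumer side, a minimal usage sketch (assuming a locally pulled
deepseek-r1 model, and that the extraction applies to `invoke` as well as
streaming) looks like:

```python
from langchain_ollama import ChatOllama

llm = ChatOllama(model="deepseek-r1")  # assumes the model is pulled locally
msg = llm.invoke("Which is greater, 9.11 or 9.8?")
print(msg.content)  # final answer, with the <think> block stripped
print(msg.additional_kwargs.get("reasoning_content"))  # extracted reasoning
```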

# Dependencies
None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-21 15:44:54 +00:00
ccurme
d8145dda95 xai: release 0.2.2 (#30403) 2025-03-20 20:25:16 +00:00
ccurme
e194902994 mistral: release 0.2.9 (#30402) 2025-03-20 20:22:24 +00:00
ccurme
49466ec9ca groq: release 0.3.1 (#30401) 2025-03-20 20:19:49 +00:00
ccurme
db1e340387 fireworks: release 0.2.8 (#30400) 2025-03-20 16:15:51 -04:00
ccurme
238f7fb345 docs: add links in Writer provider page (#30399) 2025-03-20 16:13:48 -04:00
ccurme
785a8e7d45 tests: release 0.3.15 (#30397) 2025-03-20 15:38:40 -04:00
ccurme
5588ca4cfb core: release 0.3.47 (#30396) 2025-03-20 18:52:53 +00:00
ccurme
de3960d285 multiple: enforce standards on tool_choice (#30372)
- Test if models support forcing tool calls via `tool_choice`. If they
do, they should support
  - `"any"` to specify any tool
  - the tool name as a string to force calling a particular tool
- Add `tool_choice` to signature of `BaseChatModel.bind_tools` in core
- Deprecate `tool_choice_value` in standard tests in favor of a boolean
`has_tool_choice`

Will follow up with PRs in external repos (tested in AWS and Google
already).
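
A minimal sketch of the enforced semantics, using langchain-openai as an
example provider (model name and tool are just examples):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_weather(city: str) -> str:
    """Return a toy weather report for a city."""
    return f"Sunny in {city}"


llm = ChatOpenAI(model="gpt-4o-mini")
# "any" forces the model to call some tool; a tool name forces that tool.
forced_any = llm.bind_tools([get_weather], tool_choice="any")
forced_named = llm.bind_tools([get_weather], tool_choice="get_weather")
```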
2025-03-20 17:48:59 +00:00
ccurme
b86cd8270c multiple: support strict and method in with_structured_output (#30385) 2025-03-20 13:17:07 -04:00
Mohammad Mohtashim
1103bdfaf1 (Ollama) Fix String Value parsing in _parse_arguments_from_tool_call (#30154)
- **Description:** Fix string value parsing in
`_parse_arguments_from_tool_call`
- **Issue:** #30145

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 21:47:18 -04:00
Daniel Liden
c0ffc9aa29 Update MLflow integration docs with concise examples and external links (#30082)
- **Description:** This PR updates the [MLflow
integration](https://python.langchain.com/docs/integrations/providers/mlflow_tracking/)
docs. This PR is based on feedback and suggestions from @efriis on
#29612 . This proposed revision is much shorter, does not contain
images, and links out to the MLflow docs rather than providing lengthy
descriptions directly within these docs. Thank you for taking another
look!

- **Issue:** NA
- **Dependencies:** NA

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-20 00:25:10 +00:00
Tim König
b5992695ae community: add ZoteroRetriever (#30270)
**Description** 
This contribution adds a retriever for the Zotero API.
[Zotero](https://www.zotero.org/) is an open source reference manager for
bibliographic data and related research materials. A retriever will allow
LangChain applications to retrieve relevant documents from personal or
shared group libraries, which I believe will be helpful for numerous
applications, such as RAG systems, personal research assistants, etc.
Tests and docs were added.

The documentation provided assumes the retriever will be part of the
langchain-community package, as this seemed customary. Please let me
know if this is not the preferred way to do it. I also uploaded the
implementation to PyPI.
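
A hypothetical usage sketch (the constructor arguments and import path are
assumptions and may not match the published ZoteroRetriever signature):

```python
from langchain_community.retrievers import ZoteroRetriever  # path assumed

retriever = ZoteroRetriever(library_id="123456", library_type="user", api_key="...")
docs = retriever.invoke("transformer architectures")
```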

**Dependencies**
The retriever requires the `pyzotero` package for API access. This
dependency is stated in the docs, and the retriever will return an error
if the package is not found. However, this dependency is not added to
the langchain package itself.

**Twitter handle**
I'm no longer using Twitter, but I'd appreciate a shoutout on
[Bluesky](https://bsky.app/profile/koenigt.bsky.social) or
[LinkedIn](https://www.linkedin.com/in/dr-tim-k%C3%B6nig-534aa2324/)!


Let me know if there are any issues, I'll gladly try and sort them out!

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 20:19:32 -04:00
ccurme
aa5ac9279a docs: update tavily guides (#30387)
AgentExecutor -> langgraph
2025-03-19 19:29:57 -04:00
pulvedu
4346aca5cf Integration update (#30381)
This pull request includes changes to the following files:
- docs/docs/integrations/tools/tavily_search.ipynb
- docs/docs/integrations/tools/tavily_extract.ipynb
- added docs/docs/integrations/providers/tavily.mdx

---------

Co-authored-by: pulvedu <dustin@tavily.com>
2025-03-19 17:58:25 -04:00
Daniel Rauber
9b687d7fbd community[minor]: PlaywrightURLLoader can take stored session file (#30152)
**Description:**
Implements an additional `browser_session` parameter on
PlaywrightURLLoader which can be used to initialize the browser context
by providing a stored playwright context.
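
A minimal sketch of the intended usage (the storage-state file name, and
that the parameter accepts a file path, are assumptions based on this
description):

```python
from langchain_community.document_loaders import PlaywrightURLLoader

loader = PlaywrightURLLoader(
    urls=["https://example.com/account"],
    browser_session="auth_state.json",  # stored Playwright session to reuse
)
docs = loader.load()
```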
2025-03-19 16:29:07 -04:00
Ikko Eltociear Ashimine
bffa530816 docs: update contextual.ipynb (#30384)
intialize -> initialize
2025-03-19 15:48:58 -04:00
Yeonseolee
65b16d3200 Docs: Fix deprecated initialize agent in ainetwork (#30355)
## Description
- Replaced `initialize_agent`, `AgentType` usage in ainetwork
integration
- Updated usage example to `create_react_agent` in langgraph

## Issue
- #29277

## Dependencies
- N/A

## Twitter handle
- I don't use Twitter
2025-03-19 15:20:21 -04:00
Vadym Barda
73c04f4707 core[patch]: release 0.3.46 (#30383) 2025-03-19 15:09:08 -04:00
William FH
ce84f8ba7e Dereference run tree (#30377) 2025-03-19 19:05:06 +00:00
William FH
8265be4d3e Unset context to None in var (#30380) 2025-03-19 18:53:17 +00:00
William FH
4130e6476b Unset context after step (#30378)
While we are already careful to copy before setting the config, if other
objects hold a reference to the config or context, it wouldn't be
cleared.
2025-03-19 11:46:23 -07:00
Vadym Barda
37190881d3 core[patch]: add util for approximate token counting (#30373) 2025-03-19 17:48:38 +00:00
Brandon Luu
5ede4248ef docs: Update Vector Store docs formatting (#30359)
Description: Fix formatting in Vector Stores docs.

- astradb: fix API ref spacing
- milvus, pgvector, pinecone, qdrant: removed % in cmds for docs
consistency
- pgvector: removed redundant code and reorganized imports

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 15:54:18 +00:00
Matthew Farrellee
5f812f5968 langchain-tests: skip instead of passing image message tests (#30375)
**Description:** use skip for image message tests
2025-03-19 15:35:32 +00:00
ccurme
aae8306d6c groq: release 0.3.0 (#30374) 2025-03-19 15:23:30 +00:00
Ashwin
83cfb9691f Fix typo: change 'ben' to 'be' in comment (#30358)
**Description:**  
This PR fixes a minor typo in the comments within
`libs/partners/openai/langchain_openai/chat_models/base.py`. The word
"ben" has been corrected to "be" for clarity and professionalism.

**Issue:**  
N/A

**Dependencies:**  
None
2025-03-19 10:35:35 -04:00
pudongair
4d1d726e61 docs: fix some typos (#30367)

---------

Signed-off-by: pudongair <744355276@qq.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-19 13:26:07 +00:00
Florian Chappaz
07cb41ea9e community: aligning ChatLiteLLM default parameters with litellm (#30360)
**Description:**
Since `ChatLiteLLM` is forwarding most parameters to
`litellm.completion(...)`, there is no reason to set other default
values than the ones defined by `litellm`.

In the case of the parameter `n`, it also causes an issue when calling a
serverless endpoint on Azure, as it is considered an extra parameter. So
we need to keep it optional.

We can debate the backward compatibility of this change: in my opinion,
there should not be big issues, since in my experience calling
`litellm.completion()` without these parameters works fine.

**Issue:** 
- #29679 

**Dependencies:** None
2025-03-19 09:07:28 -04:00
Hodory
57ffacadd0 community: add keep_newlines parameter to process_pages method (#30365)
- **Description:** Add a `keep_newlines` parameter to the `process_pages`
method with `page_ids` on the Confluence document loader
- **Issue:** N/A (This is an enhancement rather than a bug fix)
- **Dependencies:** N/A
- **Twitter handle:** N/A
2025-03-19 08:57:59 -04:00
ccurme
0ba03d8f3a Revert "docs: Refactored AWS Lambda Tool to Use AgentExecutor instead of initialize agent " (#30357)
Reverts langchain-ai/langchain#30267

Code is broken.
2025-03-19 03:17:47 +00:00
William FH
f5a0092551 Rm test for parent_run presence (#30356) 2025-03-18 19:44:19 -07:00
Mark Perfect
38b48d257d docs: Fix Qdrant sparse and hybrid vector search (#30208)
- **Description:** Updated the sparse and hybrid vector search due to
changes in the Qdrant API, and cleaned up the notebook
  


Co-authored-by: Mark Perfect <mark.anthony.perfect1@gmail.com>
2025-03-18 22:44:12 -04:00
Adam Brenner
f949d9a3d3 docs: Add Dell PowerScale Document Loader (#30209)
# Description
Adds documentation on the LangChain website for a Dell-specific document
loader for on-prem storage devices. Additional details on what the
document loader does are described in the PR as well as in our GitHub
repo:
[https://github.com/dell/powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector)

This PR also creates a category on the document loader webpage, as no
existing category covers on-prem storage. This follows the pattern already
established by the website's existing category for cloud providers.

# Issue:
New release, no issue.

# Dependencies:

None

# Twitter handle:

DellTech

---------

Signed-off-by: Adam Brenner <adam@aeb.io>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-18 22:39:21 -04:00
ccurme
9fb0db6937 community: release 0.3.20 (#30354) 2025-03-18 21:57:12 +00:00
ccurme
168f1dfd93 langchain[patch]: update text-splitters min bound (#30352) 2025-03-18 20:53:43 +00:00
ccurme
f6cf2ce2ad langchain[patch]: lock with latest text-splitters (#30350) 2025-03-18 19:29:11 +00:00
ccurme
2909b49045 langchain: release 0.3.21 (#30348) 2025-03-18 19:13:20 +00:00
ccurme
958f85d541 text-splitters: release 0.3.7 (#30347) 2025-03-18 19:11:37 +00:00
Aniket kadukar
36412c02b6 docs: Fix typo "tall" → "tool" in tools_human.ipynb (#30345)
This PR fixes a minor typo.

The word "tall" was mistakenly used instead of "tool." 

I have corrected it to "tool" for better clarity and accuracy.
2025-03-18 13:12:56 -04:00
Lance Martin
46d6bf0330 ollama[minor]: update default method for structured output (#30273)
From function calling to Ollama's [dedicated structured output
feature](https://ollama.com/blog/structured-outputs).
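
In practice, this changes what happens under the hood of calls like the
following sketch (the model and schema are just examples):

```python
from pydantic import BaseModel

from langchain_ollama import ChatOllama


class Joke(BaseModel):
    setup: str
    punchline: str


llm = ChatOllama(model="llama3.1")
# with_structured_output now defaults to Ollama's native JSON-schema mode
structured = llm.with_structured_output(Joke)
print(structured.invoke("Tell me a joke about parrots"))
```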

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-18 12:44:22 -04:00
Marlene
ff8ce60dcc Core: Adding Azure AI to Supported Chat Models (#30342)
- **Description:** I was testing out `init_chat` and saw that chat models
can now be inferred. Currently only Azure OpenAI is supported, but we
would like to add support for Azure AI, which is a different package. This
PR edits the `base.py` file to add the chat implementation.
- I don't think this adds any additional dependencies
- Will add a test and lint, but starting with an initial draft PR.

cc @santiagxf

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-18 11:53:20 -04:00
TheSongg
251551ccf1 doc: Implement langchain-xinference (#30296)
Implement a standalone package for Xinference chat models and LLMs.

https://github.com/langchain-ai/langchain/issues/30045#issue-2887214214
2025-03-18 11:50:16 -04:00
Oskar Stark
492b4c1604 docs(readthedocs): streamline config (#30307) 2025-03-18 11:47:45 -04:00
wenmeng zhou
5a6e1254a7 support return reasoning content for models like qwq in dashscope (#30317)
Here is an example:
```python
from langchain_community.chat_models.tongyi import ChatTongyi
from langchain_core.messages import HumanMessage

chatLLM = ChatTongyi(
    model="qwq-32b",   # refer to  https://help.aliyun.com/zh/model-studio/getting-started/models for more models
)
res = chatLLM.stream([HumanMessage(content="how much is 1 plus 1")])
for r in res:
    print(r)
```

```shell
content='' additional_kwargs={'reasoning_content': 'Okay, so the'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' user is asking "'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': 'how much is 1 plus'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1." Let me think'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' about this. Hmm'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', 1 plus'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " 1... That's a pretty"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' basic math question. I'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' remember from arithmetic that when'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' you add 1 and'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 together, the'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' result is 2.'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' But wait, maybe'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' I should double-check to be'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' sure. Let me visualize it'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '. If I have one apple'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' and someone gives me another'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' apple, I have'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' two apples total. Yeah,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' that makes sense. Or'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' on a number line'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', starting at 1 and'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' moving 1 step forward lands'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' you at 2'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '. \n\nIs there any'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' context where 1 +'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 might not equal'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 2? Like in different'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' number bases? Let'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'s see. In base"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 10, which'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' is standard,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1+1 is'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 2. But if'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' we were in binary'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' (base 2'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '), 1 +'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 would be 1'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '0. But the question'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " doesn't specify a base,"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' so I think the'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' default is base 10'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '. \n\nAlternatively, could'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' this be a trick'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' question? Maybe they'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'re referring to something else"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', like in Boolean'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' algebra where 1 +'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 might still'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' be 1 in'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' some contexts? Wait'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', no, in Boolean'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' addition, 1'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' + 1 is typically'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " 1 because it's logical"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' OR. But the'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' question just says "1'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' plus 1," which is'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' more arithmetic than Boolean.'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' \n\nOr maybe in some other'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' mathematical structure like modular arithmetic?'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' For example, modulo'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 2,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 + 1 is'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 0. But again'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', unless specified, it'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'s probably standard addition"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': '. \n\nThe user might be'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' testing if I know basic'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' math, or maybe'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " they're a student just"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' starting out. Either way,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' the straightforward answer is'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 2. I should also'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " consider if there's any cultural"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' references or jokes where'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 + 1 equals'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' something else, but I can'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'t think of any common"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' ones. \n\nAlternatively'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', in some contexts like'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' in chemistry,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' 1 + 1 could refer'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' to mixing solutions, but that'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'s not standard. The question"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' is pretty simple,'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' so I think the answer'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' is 2. To'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' be thorough, maybe mention'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' that in standard arithmetic it'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': "'s 2, but if"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': " there's a different"} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' context, the answer'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' might vary. But since'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' no context is given'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ', 2 is the safest'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ' answer.'} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='The result' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' of 1 plus' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' 1 is **2**.' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' \n\nIn standard arithmetic (base' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' 10), adding' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' 1 and 1 together' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' yields 2. This is' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' a fundamental mathematical principle. If' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' the question involves a different context' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' (e.g., binary' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=', modular arithmetic, or a' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' metaphorical meaning), it' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' would need clarification,' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' but under typical circumstances, the' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content=' answer is **2**.' additional_kwargs={'reasoning_content': ''} response_metadata={} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'
content='' additional_kwargs={'reasoning_content': ''} response_metadata={'finish_reason': 'stop', 'request_id': '4738c641-6bd8-9efc-a4fe-d929d4e62bef', 'token_usage': {'input_tokens': 16, 'output_tokens': 560, 'total_tokens': 576}} id='run-bd026918-16e5-429f-aa75-3ff7701e9f8d'

```

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-18 11:43:10 -04:00
ccurme
b91daf06eb groq[minor]: remove default model (#30341)
The default model for `ChatGroq`, `"mixtral-8x7b-32768"`, is being
retired on March 20, 2025. Here we remove the default, such that model
names must be explicitly specified (being explicit is a good practice
here, and avoids the need for breaking changes down the line). This
change will be released in a minor version bump to 0.3.

This follows https://github.com/langchain-ai/langchain/pull/30161
(released in version 0.2.5), where we began generating warnings to this
effect.
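
After this change, a model must be named explicitly; a minimal sketch (the
model name is only an example):

```python
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama-3.1-8b-instant")  # no default model anymore
```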

![Screenshot 2025-03-18 at 10 33 27 AM](https://github.com/user-attachments/assets/f1e4b302-c62a-43b0-aa86-eaf9271e86cb)
2025-03-18 10:50:34 -04:00
amuwall
f6a17fbc56 community: fix import exception too constrictive (#30218)
Fixes #30097.
2025-03-17 22:09:02 -04:00
Aryan Agarwal
7ff7c4f81b docs: Refactored AWS Lambda Tool to Use AgentExecutor instead of initialize agent (#30267)
## Description:
- Removed deprecated `initialize_agent()` usage in AWS Lambda
integration.
- Replaced it with `AgentExecutor` for compatibility with LangChain
v0.3.
- Fixed documentation linting errors.

## Issue:
- No specific issue linked, but this resolves the use of deprecated
agent initialization.

## Dependencies:
- No new dependencies added.

## Request for Review:
- Please verify if the implementation is correct.
- If approved and merged, I will proceed with updating other related
files.

## Twitter Handle (Optional):
I don't have a Twitter but here is my LinkedIn instead
(https://www.linkedin.com/in/aryan1227/)
2025-03-17 22:04:13 -04:00
qonnop
036f00dc92 community: support in-memory data (Blob.from_data) in all audio parsers (#30262)
OpenAIWhisperParser, OpenAIWhisperParserLocal, and YandexSTTParser do not
handle in-memory audio data (loaded via Blob.from_data) correctly: they
require Blob.path to be set, and AudioSegment is always read from the file
system. So far, in-memory data is handled correctly only by
FasterWhisperParser. I changed OpenAIWhisperParser,
OpenAIWhisperParserLocal, and YandexSTTParser accordingly to match
FasterWhisperParser.
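
A sketch of the in-memory path this fixes (the file name is illustrative):

```python
from langchain_core.documents.base import Blob
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

with open("clip.mp3", "rb") as f:
    blob = Blob.from_data(f.read(), mime_type="audio/mpeg")
docs = list(OpenAIWhisperParser().lazy_parse(blob))  # previously needed blob.path
```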
Thanks for reviewing the PR!

Co-authored-by: qonnop <qonnop@users.noreply.github.com>
2025-03-17 19:52:33 -04:00
Ke Liu
98a9ef19ec fix typo (#30298)
fix typo in "environment"
2025-03-17 23:37:43 +00:00
Matthew Farrellee
1985aaf095 langchain-tests: allow subclasses to add addition, non-standard tests (#30204)
**description:** the ChatModel[Integration]Tests classes are powerful
and helpful, this change allows sub-classes to add additional tests.

For instance:

```python
from langchain_core.language_models import BaseChatModel
from langchain_tests.integration_tests import ChatModelIntegrationTests


class TestChatMyServiceIntegration(ChatModelIntegrationTests):
    ...

    def test_myservice(self, model: BaseChatModel) -> None:
        ...
```

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-17 23:37:16 +00:00
Ben
789db7398b text-splitters: Add JSFrameworkTextSplitter for Handling JavaScript Framework Code (#28972)
## Description
This pull request introduces a new text splitter,
`JSFrameworkTextSplitter`, to the LangChain library. The
`JSFrameworkTextSplitter` extends the `RecursiveCharacterTextSplitter`
to handle JavaScript framework code effectively, including React (JSX),
Vue, and Svelte. It identifies and utilizes framework-specific component
tags and syntax elements as splitting points, alongside standard
JavaScript syntax. This ensures that code is divided at natural
boundaries, enhancing the parsing and processing of JavaScript and
framework-specific code.

### Key Features
- Supports React (JSX), Vue, and Svelte frameworks.
- Identifies and uses framework-specific tags and syntax elements as
natural splitting points.
- Extends the existing `RecursiveCharacterTextSplitter` for seamless
integration.

## Issue
No specific issue addressed.

## Dependencies
No additional dependencies required.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-17 23:32:33 +00:00
Bagatur
9b48e4c2b0 docs: update langsmith env vars (#30331) 2025-03-17 14:35:22 -07:00
ccurme
54eab796ab docs: update chat model tabs (#30330) 2025-03-17 15:39:10 -04:00
Oskar Stark
4c68749e38 docs(openai): use bold over ticks (#30303)
We are not talking about code here
2025-03-17 18:53:16 +00:00
Oskar Stark
531319f65b infra(GHA): description is required based on schema definition (#30305) 2025-03-17 18:42:42 +00:00
Oskar Stark
192035f8c0 docs: fix typo (#30310) 2025-03-17 16:54:32 +00:00
César García
a5eca20e1b docs: Fix for broken links for Docling project sites (#30313)
**Description:** Updated Docling project URLs from ds4sd.github.io to
docling-project.github.io/docling/
 **Issue:** #30312
 **Dependencies:** None
 **Twitter handle**: @lahoramaker
2025-03-17 16:47:09 +00:00
Oskar Stark
620d723fbf infra(GHA): remove unused --- at the beginning of some workflows (#30306) 2025-03-17 16:46:32 +00:00
César García
2c8b8114fa docs: Updated link to current Unstructured docs (#30316)
The former link led to a site explaining that the docs have moved, but it
did not redirect the user to the actual site automatically. I copied the
provided URL, checked that it works, and updated the link to the current
version.


**Description:** Updated the link to Unstructured Docs at
https://docs.unstructured.io
**Issue:** #30315 
**Dependencies:** None
**Twitter handle:** @lahoramaker
2025-03-17 16:46:28 +00:00
ccurme
e2ab4ccab3 docs: update min langchain-openai version in integrations page (#30326)
No longer RC
2025-03-17 16:45:47 +00:00
ccurme
5684653775 openai[patch]: release 0.3.9 (#30325) 2025-03-17 16:08:41 +00:00
ccurme
eb9b992aa6 openai[patch]: support additional Responses API features (#30322)
- Include response headers
- Max tokens
- Reasoning effort
- Fix bug with structured output / strict
- Fix bug with simultaneous tool calling + structured output
2025-03-17 12:02:21 -04:00
Bae-ChangHyun
d8510270ee community: add 'extract' mode to FireCrawlLoader for structured data extraction (#30242)
**Description:**
Added an 'extract' mode to FireCrawlLoader that enables structured data
extraction from web pages. This feature allows users to extract structured
data from a single URL, or from entire websites, using Large Language
Models (LLMs). More parameters and usage details are shown in the
[firecrawl docs](https://docs.firecrawl.dev/features/extract-beta).
For now, you can extract from only one URL at a time (this depends on
firecrawl's extract method).
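
A sketch of the new mode (the params payload follows the firecrawl extract
docs and should be treated as illustrative):

```python
from langchain_community.document_loaders import FireCrawlLoader

loader = FireCrawlLoader(
    url="https://example.com",
    mode="extract",
    params={"prompt": "Extract the product name and price from the page"},
)
docs = loader.load()
```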

**Dependencies:** 
No new dependencies required. Uses existing FireCrawl API capabilities.

---------

Co-authored-by: chbae <chbae@gcsc.co.kr>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-17 15:15:57 +00:00
qonnop
747efa16ec community: fix CPU support for FasterWhisperParser (implicit compute type for WhisperModel) (#30263)
FasterWhisperParser fails on a machine without an NVIDIA GPU: "Requested
float16 compute type, but the target device or backend do not support
efficient float16 computation." This problem arises because the
WhisperModel is called with compute_type="float16", which works only on
NVIDIA GPUs.

According to the [CTranslate2
docs](https://opennmt.net/CTranslate2/quantization.html#bit-floating-points-float16),
float16 is supported only on NVIDIA GPUs. Removing the compute_type
parameter solves the problem for CPUs. According to the [CTranslate2
docs](https://opennmt.net/CTranslate2/quantization.html#quantize-on-model-loading),
setting compute_type to "default" (the standard when omitting the
parameter) uses the original compute type of the model or performs
implicit conversion for the specific computation device (GPU or CPU). I
suggest removing compute_type="float16".
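
The suggested change, in brief (the model name is only an example):

```python
from faster_whisper import WhisperModel

# compute_type defaults to "default": the model's original type, or an
# implicit conversion for the current device (GPU or CPU)
model = WhisperModel("large-v3")
```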

@hulitaitai you are the original author of the FasterWhisperParser - is
there a reason for setting the parameter to float16?

Thanks for reviewing the PR!

Co-authored-by: qonnop <qonnop@users.noreply.github.com>
2025-03-14 22:22:29 -04:00
ccurme
c74e7b997d openai[patch]: support structured output via Responses API (#30265)
Also runs all standard tests using Responses API.
2025-03-14 15:14:23 -04:00
Priyansh Agrawal
f54f14b747 community: cube document loader - do not load non-public dimensions and measures (#30286)

- **Description:** Do not load non-public dimensions and measures
(public: false) with Cube semantic loader

- **Issue:** Currently, non-public dimensions and measures are loaded by
the Cube document loader, which leads to downstream applications using
them; this is not allowed by Cube.


2025-03-14 15:07:56 -04:00
Stavros Kontopoulos
ac22cde130 langchain_ollama: Support keep_alive in embeddings (#30251)
- Description: Adds support for keep_alive in Ollama embeddings; see
https://github.com/ollama/ollama/issues/6401. Builds on top of
https://github.com/langchain-ai/langchain/pull/29296. I have a use case
where I want to keep the embeddings model in CPU memory forever.
- Dependencies: no deps are being introduced.
- Issue: haven't created an issue yet.
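
A minimal sketch of that use case (the model name is an example;
keep_alive=-1 asks Ollama to keep the model loaded indefinitely):

```python
from langchain_ollama import OllamaEmbeddings

embeddings = OllamaEmbeddings(model="nomic-embed-text", keep_alive=-1)
vector = embeddings.embed_query("hello world")
```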
2025-03-14 14:56:50 -04:00
Matthew Farrellee
65a8f30729 docs: update json-mode docs to use with_structured_output(method="json_mode") (#30291)
**Description:** update the json-mode concepts doc to use
`method="json_mode"` instead of `model_kwargs` with `response_format`
**Issue:** https://github.com/langchain-ai/langchain/issues/30290
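
The updated pattern, in brief (the model name is just an example; with no
schema, json_mode returns a parsed dict):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
structured = llm.with_structured_output(None, method="json_mode")
result = structured.invoke(
    "Respond with a JSON object with keys 'setup' and 'punchline'."
)
```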
2025-03-14 14:54:57 -04:00
ccurme
18f9b5d8ab docs: update contributing doc (#30292) 2025-03-14 18:54:11 +00:00
homeffjy
2c99f12062 community[patch]: fix bilibili loader handling of multi-page content (#30283)
Previously the loader would only extract subtitles from the first page
of multi-page videos.
2025-03-14 14:53:03 -04:00
ccurme
0b80bec015 docs: fix typo (#30288) 2025-03-14 17:09:38 +00:00
ccurme
d5d0134e7b anthropic: release 0.3.10 (#30287) 2025-03-14 16:23:21 +00:00
ccurme
226f29bc96 anthropic: support built-in tools, improve docs (#30274)
- Support features from the recent update
https://www.anthropic.com/news/token-saving-updates (mostly adding support
for built-in tools in `bind_tools`).
- Add documentation around prompt caching, token-efficient tool use, and
built-in tools.
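
A sketch of binding one of the built-in tools (the type/name strings
follow Anthropic's announcement at the time and should be treated as
illustrative):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")
llm_with_editor = llm.bind_tools(
    [{"type": "text_editor_20250124", "name": "str_replace_editor"}]
)
```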
2025-03-14 16:18:50 +00:00
Priyansh Agrawal
f27e2d7ce7 community: cube document loader - fix logging (#30285)

- **Description:** Fix bad log message on line#56 and replace f-string
logs with format specifiers

- **Issue:** Log messages such as this one
`INFO:langchain_community.document_loaders.cube_semantic:Loading
dimension values for: {dimension_name}...`

2025-03-14 11:36:18 -04:00
ccurme
bbd4b36d76 mistralai[patch]: bump core (#30278) 2025-03-13 23:04:36 +00:00
ccurme
315bb17ef5 core: release 0.3.45 (#30277) 2025-03-13 22:44:23 +00:00
pulvedu
d0bfc7f820 community[fix] : Pass API_KEY as argument (#30272)
Description:
This PR fixes the validation error "Value error, Did not find
tavily_api_key, please add an environment variable `TAVILY_API_KEY`
which contains it, or pass `tavily_api_key` as a named parameter."
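
A sketch of the now-working explicit path (the key value is a
placeholder):

```python
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper

wrapper = TavilySearchAPIWrapper(tavily_api_key="tvly-...")  # no env var needed
```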

Dependencies:
No new dependencies introduced.

---------

Co-authored-by: pulvedu <dustin@tavily.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-13 22:19:31 +00:00
ccurme
5e0fa2cce5 infra: update release pipeline (#30276)
Instead of attempting to conditionally `needs` a job, always run the job
and exit successfully if it is not needed.
2025-03-13 18:10:59 -04:00
ccurme
733abcc884 mistral: release 0.2.8 (#30275) 2025-03-13 21:54:34 +00:00
Jacob Lee
e9c1765967 fix(core): Ignore missing secrets on deserialization (#30252) 2025-03-13 12:27:03 -07:00
ccurme
ebea5e014d standard tests: test simple agent loop (#30268) 2025-03-13 16:34:12 +00:00
ccurme
5237987643 docs: update readme (#30239)
Co-authored-by: Vadym Barda <vadym@langchain.dev>
2025-03-12 13:45:13 -04:00
ccurme
cd1ea8e94d openai[patch]: support Responses API (#30231)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-03-12 12:25:46 -04:00
Jason Zhang
49bdd3b6fe docs: Add AgentQL provider doc, tool/toolkit doc and documentloader doc (#30144)
- **Description:** Added AgentQL docs for the provider page, tools page
and documentloader page
- **Twitter handle:** @AgentQL

Repo:
https://github.com/tinyfish-io/agentql-integrations/tree/main/langchain
PyPI: https://pypi.org/project/langchain-agentql/


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-11 21:57:40 -04:00
Vadym Barda
23fa70f328 core[patch]: release 0.3.44 (#30236) 2025-03-11 18:59:02 -04:00
Vadym Barda
c7842730ef core[patch]: support single-node subgraphs and put subgraph nodes under the respective subgraphs (#30234) 2025-03-11 18:55:45 -04:00
Dharshan A
81d1653a30 docs: Fix typo in Generating Examples section of few-shot prompting doc (#30219)
2025-03-11 09:44:20 -04:00
ccurme
27d86d7bc8 infra: update release workflow (#30207)
Fix condition
2025-03-10 17:53:03 -04:00
ccurme
70fc0b8363 infra: update release workflow (#30203) 2025-03-10 20:18:33 +00:00
ccurme
62c570dd77 standard-tests, openai: bump core (#30202) 2025-03-10 19:22:24 +00:00
ccurme
38420ee76e docs: add note on Deepseek R1 (#30201) 2025-03-10 15:17:20 -04:00
ccurme
f896e701eb deepseek: install local langchain-tests in test deps (#30198) 2025-03-10 16:58:17 +00:00
ccurme
7b8f266039 infra: additional testing on core release (#30180)
Here we add a job to the release workflow that, when releasing
`langchain-core`, tests prior published versions of select packages
against the new version of core. We limit the testing to the most recent
published versions of langchain-anthropic and langchain-openai.

This is designed to catch backward-incompatible updates to core. We
sometimes update core and downstream packages simultaneously, so there
may not be any commit in the history at which tests would fail. So
although core and latest downstream packages could be consistent, we can
benefit from testing prior versions of downstream packages against core.

I tested the workflow by simulating a [breaking
change](d7287248cf)
in core and running it with publishing steps disabled:
https://github.com/langchain-ai/langchain/actions/runs/13741876345. The
workflow correctly caught the issue.
2025-03-10 08:59:59 -04:00
Hugh Gao
aa6dae4a5b community: Remove the system message count limit for ChatTongyi. (#30192)
## Description
The models in DashScope support multiple SystemMessages. Here is the
[Doc](https://bailian.console.aliyun.com/model_experience_center/text#/model-market/detail/qwen-long?tabKey=sdk),
and the example code from the document page:
```python
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # 如果您没有配置环境变量,请在此处替换您的API-KEY
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # 填写DashScope服务base_url
)
# 初始化messages列表
completion = client.chat.completions.create(
    model="qwen-long",
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        # 请将 'file-fe-xxx'替换为您实际对话场景所使用的 file-id。
        {'role': 'system', 'content': 'fileid://file-fe-xxx'},
        {'role': 'user', 'content': '这篇文章讲了什么?'}
    ],
    stream=True,
    stream_options={"include_usage": True}
)

full_content = ""
for chunk in completion:
    if chunk.choices and chunk.choices[0].delta.content:
        # concatenate the output content
        full_content += chunk.choices[0].delta.content
        print(chunk.model_dump())

print({full_content})
```
Tip: the example code above uses the OpenAI SDK, but the document says it
also supports the DashScope API; I tested it, and it works.
```
Is the Dashscope SDK invocation method compatible?

Yes, the Dashscope SDK remains compatible for model invocation. However, file uploads and file-ID retrieval are currently only supported via the OpenAI SDK. The file-ID obtained through this method is also compatible with Dashscope for model invocation.
```
2025-03-10 08:58:40 -04:00
Dharshan A
34e94755af Fix typo in astream_events in streaming docs (#30195)
2025-03-10 08:56:07 -04:00
ccurme
67aff1648b community: Add OpenGradient integration (Toolkit) (#30190)
Commandeering https://github.com/langchain-ai/langchain/pull/30135

---------

Co-authored-by: kylexqian <kylexqian@gmail.com>
2025-03-09 18:08:07 -04:00
ccurme
b209d46eb3 mistral[patch]: set global ssl context (#30189) 2025-03-09 21:27:41 +00:00
Vijay Selvaraj
df459d0d5e community: add Valthera integration (#30105)
**Description:**
This PR integrates Valthera into LangChain, introducing a framework designed to send highly personalized nudges via an LLM agent. It is modeled after Dr. BJ Fogg's Behavior Model. This integration includes:

- Custom data connectors for HubSpot, PostHog, and Snowflake.
- A unified data aggregator that consolidates user data.
- Scoring configurations to compute motivation and ability scores.
- A reasoning engine that determines the appropriate user action.
- A trigger generator to create personalized messages for user engagement.

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
- `@vselvarajijay`

**Tests and Docs:**
- `docs/docs/integrations/tools/valthera`
- `https://github.com/valthera/langchain-valthera/tree/main/tests`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 21:19:08 +00:00
ccurme
3823daa0b9 cli: update integration doc template for tools (#30188)
Chain example -> langgraph agent
2025-03-09 21:14:43 +00:00
David Skarbrevik
0d7cdf290b langchain: clean pyproject ruff section (#30070)
## Changes
- `/Makefile` - added extra step to `make format` and `make lint` to
ensure the lint dep-group is installed before running ruff (documented
in issue #30069)

- `/pyproject.toml` - removed ruff exceptions for files that no longer
exist or no longer create formatting/linting errors in ruff

## Testing

**running `make format` on this branch/PR**
<img width="435" alt="image"
src="https://github.com/user-attachments/assets/82751788-f44e-4591-98ed-95ce893ce623"
/>

## Issue

fixes #30069

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 15:06:02 -04:00
Jonathan Feng
911accf733 docs: add contextualai documentation (#30050)
Thank you for contributing to LangChain!
 
**Description:** adds ContextualAI's `langchain-contextual` package's
documentation

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-09 02:43:13 +00:00
Bharat
b9746a6910 fixes #30182: update tool names to match OpenAI function name pattern (#30183)
The OpenAI API requires function names to match the pattern
'^[a-zA-Z0-9_-]+$'. This updates the JIRA toolkit's tool names to use
underscores instead of spaces to comply with this requirement and
prevent BadRequestError when using the tools with OpenAI functions.

Error fixed:
```
File "langgraph-bug-fix/.venv/lib/python3.13/site-packages/openai/_base_client.py", line 1023, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'tools[0].function.name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'tools[0].function.name', 'code': 'invalid_value'}}
During task with name 'agent' and id 'aedd7537-e8d5-6678-d0c5-98129586d3ac'
```

Issue: #30182
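For illustration, a minimal sketch of the renaming idea (the helper name is hypothetical, not this PR's code):

```python
import re

def to_openai_tool_name(name: str) -> str:
    # Replace characters outside OpenAI's allowed pattern
    # '^[a-zA-Z0-9_-]+$' with underscores.
    return re.sub(r"[^a-zA-Z0-9_-]", "_", name)

assert to_openai_tool_name("Create Issue") == "Create_Issue"
```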
2025-03-08 20:48:25 -05:00
ccurme
cee0fecb08 docs: update package registry counts (#30181) 2025-03-08 20:37:59 -05:00
William FH
bac3a28e70 Flush (#30157) 2025-03-07 16:32:15 -08:00
ccurme
a7ab5e8372 community[patch]: ChatPerplexity: track usage metadata (#30175) 2025-03-07 23:25:05 +00:00
Vadym Barda
6c05d4b153 docs[patch]: update trim messages wording (#30174) 2025-03-07 17:05:51 -05:00
ccurme
1c993b921c core[patch]: release 0.3.43 (#30173) 2025-03-07 21:56:00 +00:00
ccurme
9893e5cb80 core[patch]: catch structured_output_format (#30172)
The change to `ls_structured_output_format` was not backward-compatible
with older versions of integration packages.
2025-03-07 16:50:06 -05:00
ccurme
88dc479c4a docs: update model used in ChatGroq (#30170)
`mixtral-8x7b-32768` is being retired on March 20.
2025-03-07 16:29:05 -05:00
ccurme
33a3510243 core[patch]: export ArgsSchema (#30169)
This is needed for type hints

see: https://github.com/langchain-ai/langchain/pull/30167
2025-03-07 20:43:05 +00:00
OysterMax
01317fde21 DOC: type checker complain on args_schema type hint when inheriting from BaseTool (#30167)
- **Description:** update docs to suppress a type checker complaint on the
args_schema type hint when inheriting from BaseTool
- **Issue:** #30142 
- **Dependencies:** N/A
- **Twitter handle:** N/A

2025-03-07 15:41:53 -05:00
ccurme
17507c9ba6 groq[patch]: release 0.2.5 (#30168) 2025-03-07 20:25:51 +00:00
andyzhou1982
9e863c89d2 add JiebaLinkExtractor for chinese doc extracting (#30150)
- **Description:** add jieba_link_extractor.py for Chinese document
extraction
- **Dependencies:** jieba
- **Docs:**
  /doc/doc/integrations/providers/jieba.md
  /doc/doc/integrations/vectorstores/jieba_link_extractor.ipynb
  /libs/packages.yml
---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-07 20:21:46 +00:00
ccurme
74e7772a5f groq[patch]: warn if model is not specified (#30161)
Groq is retiring `mixtral-8x7b-32768`, which is currently the default
model for ChatGroq, on March 20. Here we emit a warning if the model is
not specified explicitly.

A version 0.3.0 will be released ahead of March 20 that removes the
default altogether.
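A minimal sketch of the warning's shape (assumed, not the release's exact code):

```python
import warnings

def _warn_if_model_unspecified(model: str | None) -> None:
    # The default model is being retired, so nudge users to pin one explicitly.
    if model is None:
        warnings.warn(
            "Defaulting to mixtral-8x7b-32768, which is being retired on "
            "March 20. Please specify `model` explicitly."
        )
```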
2025-03-07 15:21:13 -05:00
Ioannis Bakagiannis
3444e587ee docs: Integration Update - ADS4GPTs (#30153)
docs: New integration for LangChain - ads4gpts-langchain

Description: Tools and a Toolkit for native agentic integration with
ADS4GPTs within LangChain, to help applications monetize with
advertising.

Twitter handle: @ads4gpts

Co-authored-by: knitlydevaccount <loom+github@knitly.app>
2025-03-07 14:35:44 -05:00
ccurme
3c258194ae tests[patch]: release 0.3.14 (#30165) 2025-03-07 18:34:05 +00:00
ccurme
34638ccfae openai[patch]: release 0.3.8 (#30164) 2025-03-07 18:26:40 +00:00
ccurme
4e5058f29c core[patch]: release 0.3.42 (#30163) 2025-03-07 18:14:45 +00:00
Eugene Yurtsev
894fd63a61 cli: release 0.0.36 (#30159)
Bump for 0.0.36
2025-03-07 13:05:40 -05:00
ccurme
806211475a core[patch]: update structured output tracing (#30123)
- Trace JSON schema in `options`
- Rename to `ls_structured_output_format`
2025-03-07 13:05:25 -05:00
Jakub Kopecký
d0f5bcda29 docs: fix apify actors notebook main heading text (#30040)
- **Description:** Fix Apify Actors tool notebook main heading text so
there is an actual description instead of "Overview" in the tool
integration description on [LangChain tools integration
page](https://python.langchain.com/docs/integrations/tools/#all-tools).
2025-03-07 12:58:10 -05:00
ccurme
230876a7c5 anthropic[patch]: add PDF input example to API reference (#30156) 2025-03-07 14:19:08 +00:00
ccurme
5c7440c201 docs: update configuration how-to guide (#30139) 2025-03-06 11:51:48 -05:00
joeconstantino
022ff9eead Tableau docs for new datasource qa tool (#30125)
- **Description:** a notebook showing langchain and langgraph agents using
the new langchain_tableau tool
- **Twitter handle:** @joe_constantin0

---------

Co-authored-by: Joe Constantino <joe@constantino.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-06 14:58:56 +00:00
ccurme
52b0570bec core, openai, standard-tests: improve OpenAI compatibility with Anthropic content blocks (#30128)
- Support thinking blocks in core's `convert_to_openai_messages` (pass
through instead of error)
- Ignore thinking blocks in ChatOpenAI (instead of error)
- Support Anthropic-style image blocks in ChatOpenAI

---

Standard integration tests include a `supports_anthropic_inputs`
property which is currently enabled only for tests on `ChatAnthropic`.
This test enforces compatibility with message histories of the form:
```
- system message
- human message
- AI message with tool calls specified only through `tool_use` content blocks
- human message containing `tool_result` and an additional `text` block
```
It additionally checks support for Anthropic-style image inputs if
`supports_image_inputs` is enabled.

Here we change this test, such that if you enable
`supports_anthropic_inputs`:
- You support AI messages with text and `tool_use` content blocks
- You support Anthropic-style image inputs (if `supports_image_inputs`
is enabled)
- You support thinking content blocks.

That is, we add a test case for thinking content blocks, but we also
remove the requirement of handling tool results within HumanMessages
(motivated by existing agent abstractions, which should all return
ToolMessage). We move that requirement to a ChatAnthropic-specific test.
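For integration packages, opting in might look like the following sketch of a standard-tests subclass (property names as described above; other required test-class configuration is omitted):

```python
from langchain_tests.integration_tests import ChatModelIntegrationTests

class TestMyChatModelIntegration(ChatModelIntegrationTests):
    # chat_model_class and other required properties omitted for brevity.

    @property
    def supports_anthropic_inputs(self) -> bool:
        # Opt in to AI messages with text/tool_use blocks and thinking blocks.
        return True

    @property
    def supports_image_inputs(self) -> bool:
        # Also exercises Anthropic-style image inputs when enabled.
        return True
```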
2025-03-06 09:53:14 -05:00
Pat Patterson
b3dc66f7a3 community: fix AttributeError when creating LanceDB vectorstore (#30127)
**Description:**

This PR adds a call to `guard_import()` to fix an AttributeError raised
when creating LanceDB vectorstore instance with an existing LanceDB
table.

**Issue:**

This PR fixes issue #30124.

**Dependencies:**

No additional dependencies.

**Twitter handle:**

[@metadaddy](https://x.com/metadaddy), but I spend more time at
[@metadaddy.net](https://bsky.app/profile/metadaddy.net) these days.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-05 23:04:38 +00:00
Hugh Gao
9b7b8e4a1a community: make DashScope models support Partial Mode for text continuation. (#30108)
## Description
make DashScope models support Partial Mode for text continuation.

For text continuation in ChatTongyi, continuation from a prefix is
supported by adding a "partial" argument to the AIMessage. The document is
[Partial Mode
](https://help.aliyun.com/zh/model-studio/user-guide/partial-mode?spm=a2c4g.11186623.help-menu-2400256.d_1_0_0_8.211e5b77KMH5Pn&scm=20140722.H_2862210._.OR_help-T_cn~zh-V_1).
The API example is:
```py
import os
import dashscope

messages = [{
    "role": "user",
    # "Please continue the sentence '春天来了,大地' ('Spring has come, the
    # earth...') to express the beauty of spring and the author's joy."
    "content": "请对“春天来了,大地”这句话进行续写,来表达春天的美好和作者的喜悦之情"
},
{
    "role": "assistant",
    "content": "春天来了,大地",  # the prefix to continue from
    "partial": True
}]
response = dashscope.Generation.call(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    model='qwen-plus',
    messages=messages,
    result_format='message',  
)

print(response.output.choices[0].message.content)
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-05 16:22:14 +00:00
黑牛
f0153414d5 Add request_id field to improve request tracking and debugging (for Tongyi model) (#30110)
- **Description**: Added the request_id field to the check_response
function to improve request tracking and debugging, applicable for the
Tongyi model.
- **Issue**: None
- **Dependencies**: None
- **Twitter handle**: None

- **Add tests and docs**: None

- **Lint and test**: Ran `make format`, `make lint`, and `make test` to
ensure the code meets formatting and testing requirements.
2025-03-05 11:03:47 -05:00
Manthan Surkar
1ee8aceaee community: fix Jira API wrapper failing initialization with cloud param (#30117)
### **Description**  
Converts the boolean `jira_cloud` parameter in the Jira API Wrapper to a
string before initializing the Jira Client. Also adds tests for the
same.

### **Issue**  
[Jira API Wrapper
Bug](8abb65e138/libs/community/langchain_community/utilities/jira.py (L47))

```python
jira_cloud_str = get_from_dict_or_env(values, "jira_cloud", "JIRA_CLOUD")
jira_cloud = jira_cloud_str.lower() == "true"
```

The above code has a bug where the value of `"jira_cloud"` is a boolean.
If it is passed, calling `.lower()` on a boolean raises an error.
Additionally, `False` cannot be passed explicitly since
`get_from_dict_or_env` falls back to environment variables.

Relevant code in `langchain_core`:  

[Source](https://github.com/thesmallstar/langchain/blob/master/.venv/lib/python3.13/site-packages/langchain_core/utils/env.py#L46)

```python
if isinstance(key, str) and key in data and data[key]:  # Here, data[key] is False
```

This PR fixes both issues.
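A minimal sketch of the conversion (names assumed, not the PR's exact code):

```python
def coerce_jira_cloud(value) -> str:
    # The wrapper may receive jira_cloud as a bool (explicit argument) or a
    # str (environment variable); hand the Jira client a string either way,
    # so `.lower()` is never called on a bool.
    return str(value) if isinstance(value, bool) else value

assert coerce_jira_cloud(False) == "False"
assert coerce_jira_cloud("true") == "true"
```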

### **Twitter Handle**  
[Manthan Surkar](https://x.com/manthan_surkar)
2025-03-05 10:49:25 -05:00
Adrián Panella
c599ba47d5 core(mermaid): fix error when 3+ subgraph levels (#29970) 2025-03-04 13:27:49 -05:00
Alexander Henlein
417efa30a6 docs: add Taiga Tool integration docs (#30042)
This PR adds documentation for the langchain-taiga Tool integration,
including an example notebook at
'docs/docs/integrations/tools/taiga.ipynb' and updates to
'libs/packages.yml' to track the new package.

Issue:
N/A

Dependencies:
None

Twitter handle:
N/A

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-04 17:51:20 +00:00
Mathias Marciano
5f0102242a Fixed an issue with the OpenAI Assistant's 'retrieval' tool and adding support for the 'attachments' parameter (#30006)
PR Title:
langchain: add attachments support in OpenAIAssistantRunnable

PR Description:
This PR fixes an issue with the "retrieval" tool (internally named
"file_search") in the OpenAI Assistant by adding support for the
"attachments" parameter in the invoke method. This change allows files
to be linked to messages when they are inserted into threads, which is
essential for utilizing OpenAI's Retrieval Augmented Generation (RAG)
feature.

Issue:
N/A

Dependencies:
None

Twitter handle:
N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-04 17:34:11 +00:00
Philippe PRADOS
4710c1fa8c community[minor]: Fix regular expression in visualize and outlines modules. (#30002)
Fix invalid escape characters
2025-03-04 12:23:48 -05:00
ccurme
577c0d0715 community[patch]: release 0.3.19 (#30104) 2025-03-04 16:12:03 +00:00
ccurme
ba5ddb218f anthropic[patch]: release 0.3.9 (#30103) 2025-03-04 10:53:55 -05:00
ccurme
9383a0536a tests[patch]: release 0.3.13 (#30102) 2025-03-04 10:53:43 -05:00
ccurme
fb16c25920 langchain[patch]: release 0.3.20 (#30101) 2025-03-04 15:47:27 +00:00
ccurme
692a68bf1c core[patch]: release 0.3.41 (#30100) 2025-03-04 15:08:57 +00:00
ccurme
484d945500 community[patch]: remove numpy cap for python < 3.12 (#30084) 2025-03-04 09:46:41 -05:00
Cheney Zhang
7eb6dde720 docs: refine milvus server description (#30071)
Document refinement: optimize the Milvus server description. The
descriptions of "milvus standalone" and "milvus server" were confusing, so
this clarifies them with a more detailed description.

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2025-03-04 09:39:54 -05:00
ZhangShenao
8575d7491f [Doc] Improve api doc (#30073)
- Update api_doc for `BaseMessage`
- add static method decorator for `retry_runnable`
2025-03-04 09:39:07 -05:00
Antonio Pisani
9a11e0edcd docs:Add SWI-Prolog for langchain-prolog (#30081)
Some users have complained that it is not clear that SWI-Prolog must be
installed before installing langchain-prolog.
2025-03-04 09:12:47 -05:00
Samuel Dion-Girardeau
ccb64e9f4f docs: Fix typo in code samples for max_tokens_for_prompt (#30088)
- **Description:** Fix typo in code samples for max_tokens_for_prompt.
Code blocks had singular "token" but the method has plural "tokens".
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** N/A
2025-03-04 09:11:21 -05:00
ccurme
33354f984f docs: update contributing docs (#30064) 2025-03-01 17:36:35 -05:00
ccurme
c7cd666a17 docs: add to vercel overrides (#30063) 2025-03-01 17:21:15 -05:00
ArrayPD
c671d54c6f core: make with_alisteners() example workable. (#30059)
**Description:**

Five fixes to the example for function with_alisteners() in
libs/core/langchain_core/runnables/base.py.
Replaces the incoherent example output with a working example's output.

1. SyntaxError: unterminated string literal
    print(f"on start callback starts at {format_t(time.time())}
    corrected to
    print(f"on start callback starts at {format_t(time.time())}")

2. SyntaxError: unterminated string literal
    print(f"on end callback starts at {format_t(time.time())}
    corrected to
    print(f"on end callback starts at {format_t(time.time())}")

3. NameError: name 'Runnable' is not defined
    Fixed with
    from langchain_core.runnables import Runnable

4. NameError: name 'asyncio' is not defined
    Fixed with
    import asyncio

5. NameError: name 'format_t' is not defined.
    Implemented format_t() as
    from datetime import datetime, timezone

    def format_t(timestamp: float) -> str:
        return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()
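Putting the five fixes together, a self-contained version of the corrected example might look like this sketch (using an identity RunnableLambda as a stand-in for the docstring's runnable):

```python
import asyncio
import time
from datetime import datetime, timezone

from langchain_core.runnables import RunnableLambda

def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()

async def on_start(run) -> None:
    print(f"on start callback starts at {format_t(time.time())}")

async def on_end(run) -> None:
    print(f"on end callback starts at {format_t(time.time())}")

async def main() -> None:
    runnable = RunnableLambda(lambda x: x).with_alisteners(
        on_start=on_start, on_end=on_end
    )
    await runnable.ainvoke("hello")

asyncio.run(main())
```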
2025-03-01 15:39:02 -05:00
Chandra Nandan
eca8c5515d docs: sidebar-content-render (#30061) (#30062)
Thank you for contributing to LangChain!

- [x] **PR title**: "docs: added proper width to sidebar content"

- [x] **PR message**: added proper width to sidebar content
- **Description:** While accessing the [LangChain Python API
Reference](https://python.langchain.com/api_reference/index.html) the
sidebar content does not display correctly.
    - **Issue:** Follow-up to #30061
    - **Dependencies:** None
    - **Twitter handle:** https://x.com/implicitdefcnc


2025-03-01 15:30:41 -05:00
cold-eye
7c175e3fda Update ascend.py (#30060)
Add batch_size to fix OOM (out-of-memory) errors when embedding large
amounts of text.
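A minimal sketch of the batching idea (generic, not the file's actual code):

```python
def embed_in_batches(embed_fn, texts, batch_size=32):
    # Bound peak memory by embedding a fixed-size slice at a time
    # instead of the whole corpus at once.
    vectors = []
    for i in range(0, len(texts), batch_size):
        vectors.extend(embed_fn(texts[i : i + batch_size]))
    return vectors
```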

2025-03-01 14:10:41 -05:00
ccurme
3b066dc005 anthropic[patch]: allow structured output when thinking is enabled (#30047)
Structured output will currently always raise a BadRequestError when
Claude 3.7 Sonnet's `thinking` is enabled, because we rely on forced
tool use for structured output and this feature is not supported when
`thinking` is enabled.

Here we:
- Emit a warning if `with_structured_output` is called when `thinking`
is enabled.
- Raise `OutputParserException` if no tool calls are generated.

This is arguably preferable to raising an error in all cases.

```python
from langchain_anthropic import ChatAnthropic
from pydantic import BaseModel


class Person(BaseModel):
    name: str
    age: int


llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest",
    max_tokens=5_000,
    thinking={"type": "enabled", "budget_tokens": 2_000},
)
structured_llm = llm.with_structured_output(Person)  # <-- this generates a warning
```

```python
structured_llm.invoke("Alice is 30.")  # <-- works
```

```python
structured_llm.invoke("Hello!")  # <-- raises OutputParserException
```
2025-02-28 14:44:11 -05:00
ccurme
f8ed5007ea anthropic, mistral: return model_name in response metadata (#30048)
Took a "census" of models supported by init_chat_model-- of those that
return model names in response metadata, these were the only two that
had it keyed under `"model"` instead of `"model_name"`.
2025-02-28 18:56:05 +00:00
Christophe Bornet
9e6ffd1264 core: Add ruff rules PTH (pathlib) (#29338)
See https://docs.astral.sh/ruff/rules/#flake8-use-pathlib-pth

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-28 13:22:20 -05:00
TheSongg
86b364de3b Add asynchronous generate interface (#30001)
- **PR title**: [langchain_community.llms.xinference]: Add an
asynchronous generate interface

- **PR message**: The asynchronous generate interface supports both stream
data and non-stream data.

- **Add tests and docs**:

    from langchain_community.llms import Xinference
    from langchain.prompts import PromptTemplate

    llm = Xinference(
        server_url="http://0.0.0.0:9997",  # replace with your xinference server url
        model_uid=model_uid,  # replace with the model UID returned from launching the model
        stream=True,
    )
    prompt = PromptTemplate(
        input_variables=["country"],
        template="Q: where can we visit in the capital of {country}? A:",
    )
    chain = prompt | llm
    async for chunk in chain.astream(input=user_input):
        yield chunk
2025-02-28 12:32:44 -05:00
Cheney Zhang
a1897ca621 docs: refine milvus doc with hybrid-search (#30037)
Milvus document refinement: add a more detailed hybrid search description,
with a full-text search introduction.

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
2025-02-28 10:22:53 -05:00
Tiest van Gool
476cd26f57 Add xAI to ChatModelTabs drop down (#30028)
- **PR title**: "docs: add xAI to ChatModelTabs"
- **PR message**:
- **Description:** Added `ChatXAI` to `ChatModelTabs` dropdown to
improve visibility of xAI chat models (e.g., "grok-2", "grok-3").
    - **Issue:** Follow-up to #30010 
    - **Dependencies:** none
    - **Twitter handle:** @tiestvangool 

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-28 09:08:12 -05:00
Fakai Zhao
f07338d2bf Implementing the MMR algorithm for OLAP vector storage (#30033)
- **Implementing the MMR algorithm for OLAP vector storage**:
  - Supports the Apache Doris and StarRocks OLAP databases.
  - Example: vectorstore.as_retriever(search_type="mmr",
search_kwargs={"k": 10})

---------

Co-authored-by: fakzhao <fakzhao@cisco.com>
2025-02-28 08:50:22 -05:00
Daniel Rauber
186cd7f1a1 community: PlaywrightURLLoader should wait for page load event before attempting to extract data (#30043)
## Description

The PlaywrightURLLoader should wait for a page to be loaded before
attempting to extract data.
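For illustration, the underlying idea in Playwright's sync API looks like this sketch (not the loader's exact code):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")
    page.wait_for_load_state("load")  # wait for the page load event first
    html = page.content()             # ...then extract data
    browser.close()
```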
2025-02-28 08:45:51 -05:00
Ikko Eltociear Ashimine
46908ee3da docs: update google_cloud_vertexai_rerank.ipynb (#30039)
recieve -> receive
2025-02-28 08:45:06 -05:00
ccurme
0dbcc1d099 docs: document anthropic features (#30030)
Update integrations page with extended thinking feature.

Update API reference with extended thinking and citations.
2025-02-27 19:37:04 -05:00
ccurme
6c7c8a164f openai[patch]: add unit test (#30022)
Test `max_completion_tokens` is propagated to payload for
AzureChatOpenAI.
2025-02-27 11:09:17 -05:00
DamonXue
156a60013a docs: fix tavily_search code-block format. (#30012)
This pull request includes a change to the `TavilySearchResults` class
in the `tool.py` file, which updates the code block format in the
documentation.

Documentation update:

* `libs/community/langchain_community/tools/tavily_search/tool.py`:
Changed the code block format from Python to JSON in the example
provided in the docstring.
2025-02-27 10:55:15 -05:00
kawamou
8977ac5ab0 community[fix]: Handle None value in raw_content from Tavily API response (#30021)
## **Description:**

When using the Tavily retriever with include_raw_content=True, the
retriever occasionally fails with a Pydantic ValidationError because
raw_content can be None.

The Document model in langchain_core/documents/base.py requires
page_content to be a non-None value, but the Tavily API sometimes
returns None for raw_content.

This PR fixes the issue by ensuring that even when raw_content is None,
an empty string is used instead:

```python
page_content=result.get("content", "")
            if not self.include_raw_content
            else (result.get("raw_content") or ""),
```
2025-02-27 10:53:53 -05:00
Yan
d0c9b98171 docs: writer integration docs cosmetic fixes (#29984)
Fixed links in the Writer partner integration docs
2025-02-27 10:52:49 -05:00
Lakindu Boteju
f69deee1bd community: Add cost data for aws bedrock anthropic.claude-3-7 model (#30016)
This pull request includes updates to the
`libs/community/langchain_community/callbacks/bedrock_anthropic_callback.py`
file to add a new model version to the list of supported models.

Updates to supported models:

* Added support for the `anthropic.claude-3-7-sonnet-20250219-v1:0`
model with a rate of `0.003` for 1000 input tokens.
* Added support for the `anthropic.claude-3-7-sonnet-20250219-v1:0`
model with a rate of `0.015` for 1000 output tokens.

AWS Bedrock pricing reference : https://aws.amazon.com/bedrock/pricing
2025-02-27 09:51:52 -05:00
Mark Perfect
289b3422dc docs: Add Milvus Standalone to documentation (#29650)
- Added a new section for how to set up and use Milvus with Docker, and
added an example of how to instantiate Milvus for hybrid retrieval
- Fixed the documentation setup to run `make lint` and `make format`



---------

Co-authored-by: Mark Perfect <mark.anthony.perfect1@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 21:31:40 +00:00
Lakindu Boteju
e0e9e560b3 PyMuPDF4LLM integration to LangChain (#29953)
## PyMuPDF4LLM integration to LangChain for PDF content extraction in
Markdown format

### Description

[PyMuPDF4LLM](https://github.com/pymupdf/RAG) makes it easier to extract
PDF content in Markdown format, needed for LLM & RAG applications.
(License: GNU Affero General Public License v3.0)


[langchain-pymupdf4llm](https://github.com/lakinduboteju/langchain-pymupdf4llm)
integrates PyMuPDF4LLM to LangChain as a Document Loader.
(License: MIT License)

This pull request introduces the integration of
[PyMuPDF4LLM](https://pymupdf.readthedocs.io/en/latest/pymupdf4llm) into
the LangChain project as an integration package:
[`langchain-pymupdf4llm`](https://github.com/lakinduboteju/langchain-pymupdf4llm).
The most important changes include adding new Jupyter notebooks to
document the integration and updating the package configuration file to
include the new package.

### Documentation:

* `docs/docs/integrations/providers/pymupdf4llm.ipynb`: Added a new
Jupyter notebook to document the integration of `PyMuPDF4LLM` with
LangChain, including installation instructions and class imports.
* `docs/docs/integrations/document_loaders/pymupdf4llm.ipynb`: Added a
new Jupyter notebook to document the usage of `langchain-pymupdf4llm` as
a LangChain integration package in detail.

### Package registration:

* `libs/packages.yml`: Updated the package configuration file to include
the `langchain-pymupdf4llm` package.

### Additional information

* Related to: https://github.com/langchain-ai/langchain/pull/29848

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 15:59:12 -05:00
Dan Mirsky
d98c3f76c2 core[patch]: Fix FileCallbackHandler name resolution, Fixes #29941 (#29942)
- **Description:** Same changes as #26593 but for FileCallbackHandler
- **Issue:**  Fixes #29941
- **Dependencies:** None
- **Twitter handle:** None

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-26 14:54:24 -05:00
Christophe Bornet
b3885c124f core: Add ruff rules TC (#29268)
See https://docs.astral.sh/ruff/rules/#flake8-type-checking-tc
Some fixes were done for TC001, TC002 and TC003, but these rules are
excluded since they don't play well with Pydantic.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 19:39:05 +00:00
talos
9cd20080fc community: Update SQLiteVec table trigger (#29914)
**Issue**: This trigger could only be used by the first table created;
additional triggers could not be created for other tables.

**Fix**: Update the trigger name so that it can be used for new
tables.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 15:10:13 +00:00
ccurme
7562677f3f langchain[patch]: delete erroneous lock file (#30007)
Picked up during merge.
2025-02-26 15:01:05 +00:00
Erick Friis
3c96012f5e langchain: make numpy optional (#29182)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-26 14:35:24 +00:00
James Yang
8c28742980 docs: fix kinetica vectorstore typo (#29999)
Description:
- fix kinetica vectorstore typo
- add links

Co-authored-by: jamesongithub@users.noreply.github.com <jamesongithub@users.noreply.github.com>
2025-02-26 08:31:56 -05:00
Artem Yankov
6177b9f9ab community: add title, score and raw_content to tavily search results (#29995)
**Description:**

Tavily search results returned from the API include useful information
like title, score and (optionally) raw_content that was missing from the
wrapper, although it is documented there properly. Add this data to the
result structure.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-25 23:27:21 +00:00
Eugene Yurtsev
b525226531 core[patch]: version 0.3.40 (#29997)
Version 0.3.40 release
2025-02-25 23:09:40 +00:00
Vadym Barda
0fc50b82a0 core[patch]: allow passing description to @tool decorator (#29976) 2025-02-25 17:45:36 -05:00
Naveen SK
21bfc95e14 docs: Correct grammatical typos in various documentation files (#29983)
**Description:**
Fixed grammatical typos in various documentation files

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
@MrNaveenSK

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-25 19:13:31 +00:00
ccurme
1158d3134d langchain[patch]: remove aiohttp (#29991)
My guess is this was left over from when `community` was in langchain.
2025-02-25 11:43:00 -05:00
ccurme
afd7888392 langchain[patch]: remove explicit dependency on tenacity (#29990)
Not used anywhere in `langchain`, already a dependency of
langchain-core.
2025-02-25 11:31:55 -05:00
naveencloud
143c39a4a4 Update gitlab.mdx (#29987)
The documentation mentioned GitHub where it should have said GitLab, which
caused confusion when referring to it.


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-25 15:06:37 +00:00
ccurme
32704f0ad8 langchain: update extended test (#29988) 2025-02-25 14:58:20 +00:00
Yan
47e1a384f7 Writer partners integration docs (#29961)
**Documentation of Writer provider and additional features**
* [PyPi langchain-writer
web-page](https://pypi.org/project/langchain-writer/)
* [GitHub langchain-writer
repo](https://github.com/writer/langchain-writer)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-24 19:30:09 -05:00
Antonio Pisani
820a4c068c Transition prolog_tool doc to langgraph (#29972)
@ccurme As suggested I transitioned the prolog_tool documentation to
langgraph

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-24 23:34:53 +00:00
ccurme
79f5bbfb26 anthropic[patch]: release 0.3.8 (#29973) 2025-02-24 15:24:35 -05:00
ccurme
ded886f622 anthropic[patch]: support claude 3.7 sonnet (#29971) 2025-02-24 15:17:47 -05:00
Bagatur
d00d645829 docs[patch]: update disable_streaming docstring (#29968) 2025-02-24 18:40:31 +00:00
ccurme
b7a1705052 openai[patch]: release 0.3.7 (#29967) 2025-02-24 11:59:28 -05:00
ccurme
5437ee385b core[patch]: release 0.3.39 (#29966) 2025-02-24 11:47:01 -05:00
ccurme
291a232fb8 openai[patch]: set global ssl context (#29932)
We set 
```python
global_ssl_context = ssl.create_default_context(cafile=certifi.where())
```
at the module-level and share it among httpx clients.
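For illustration, sharing one context across clients looks like this (httpx accepts an `ssl.SSLContext` for `verify`):

```python
import ssl

import certifi
import httpx

# Build the context once at import time and reuse it, avoiding the cost of
# re-reading the CA bundle for every client.
global_ssl_context = ssl.create_default_context(cafile=certifi.where())

client = httpx.Client(verify=global_ssl_context)
async_client = httpx.AsyncClient(verify=global_ssl_context)
```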
2025-02-24 11:25:16 -05:00
ccurme
9ce07980b7 core[patch]: pydantic 2.11 compat (#29963)
Resolves https://github.com/langchain-ai/langchain/issues/29951

Was able to reproduce the issue with Anthropic installing from pydantic
`main` and correct it with the fix recommended in the issue.

Thanks very much @Viicos for finding the bug and the detailed writeup!
2025-02-24 11:11:25 -05:00
ccurme
0d3a3b99fc core[patch]: release 0.3.38 (#29962) 2025-02-24 15:04:53 +00:00
ccurme
b1a7f4e106 core, openai[patch]: support serialization of pydantic models in messages (#29940)
Resolves https://github.com/langchain-ai/langchain/issues/29003,
https://github.com/langchain-ai/langchain/issues/27264
Related: https://github.com/langchain-ai/langchain-redis/issues/52

```python
from langchain.chat_models import init_chat_model
from langchain.globals import set_llm_cache
from langchain_community.cache import SQLiteCache
from pydantic import BaseModel

cache = SQLiteCache()

set_llm_cache(cache)

class Temperature(BaseModel):
    value: int
    city: str

llm = init_chat_model("openai:gpt-4o-mini")
structured_llm = llm.with_structured_output(Temperature)
```
```python
# 681 ms
response = structured_llm.invoke("What is the average temperature of Rome in May?")
```
```python
# 6.98 ms
response = structured_llm.invoke("What is the average temperature of Rome in May?")
```
2025-02-24 09:34:27 -05:00
HackHuang
1645ec1890 docs(tool_artifacts.ipynb) : Remove the unnecessary information (#29960)
Update tool_artifacts.ipynb: remove the unnecessary information, as shown
below.


8b511a3a78/docs/docs/how_to/tool_artifacts.ipynb (L95)
2025-02-24 09:22:18 -05:00
HackHuang
78c54fccf3 docs(custom_tools.ipynb) : Fix the invalid URL link (#29958)
Update custom_tools.ipynb: fix the invalid URL link for the `@tool`
decorator.
2025-02-24 14:03:26 +00:00
ccurme
927ec20b69 openai[patch]: update system role to developer for o-series models (#29785)
Some o-series models will raise a 400 error for `"role": "system"`
(`o1-mini` and `o1-preview` will raise, `o1` and `o3-mini` will not).

Here we update `ChatOpenAI` to update the role to `"developer"` for all
model names matching `^o\d`.

We only make this change on the ChatOpenAI class (not BaseChatOpenAI).
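A minimal sketch of the role mapping (assumed shape, not the PR's exact code):

```python
import re

def _map_system_role(model_name: str, role: str) -> str:
    # o-series models (names matching ^o\d, e.g. o1, o3-mini) expect
    # "developer" where other models accept "system".
    if role == "system" and re.match(r"^o\d", model_name):
        return "developer"
    return role

assert _map_system_role("o3-mini", "system") == "developer"
assert _map_system_role("gpt-4o", "system") == "system"
```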
2025-02-24 08:59:46 -05:00
Ahmed Tammaa
8b511a3a78 [Exception Handling] DeepSeek JSONDecodeError (#29758)
For context, please check #29626.

DeepSeek is used via langchain_openai. The error that occurs is a
`json decode error`.

I added a handler for this to give a more sensible error message, namely
that the DeepSeek API returned an empty/invalid JSON response.

Reproducing the issue is a bit challenging as it is inconsistent;
sometimes DeepSeek returns valid data and at other times it returns
invalid data, which triggers the JSON decode error.

This PR adds exception handling; it is not an ultimate fix for the issue.
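A minimal sketch of such a handler (illustrative names, not the PR's exact code):

```python
import json

def parse_deepseek_response(raw: str) -> dict:
    # Surface a clearer error than the bare JSONDecodeError when the API
    # returns an empty or malformed body.
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(
            "DeepSeek API returned an empty/invalid JSON response."
        ) from exc
```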

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 15:00:32 -05:00
Julien Elkaim
e586bffe51 community: Repair embeddings/llamacpp's embed_query method (#29935)
**Description:** As commented on the commit
[41b6a86](41b6a86bbe),
it introduced a bug in embedding requests when the model
returns a non-nested list, which is typically the case for the
**_nomic-embed-text_** model.

- I added the unit test, and ran `make format`, `make lint` and `make
test` from the `community` package.
- No new dependency.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 19:32:17 +00:00
Saraswathy Kalaiselvan
5ca4933b9d docs: updated ChatLiteLLM model_kwargs description (#29937)
- [x] **PR title**: docs: (community) update ChatLiteLLM

- [x] **PR message**:
- **Description:** updated the description of the model_kwargs parameter,
which wrongly described temperature.
    - **Issue:** #29862 
    - **Dependencies:** N/A
    
- [x] **Add tests and docs**: N/A

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-23 19:27:13 +00:00
ccurme
512eb1b764 anthropic[patch]: update models for integration tests (#29938) 2025-02-23 14:23:48 -05:00
Christophe Bornet
f6d4fec4d5 core: Add ruff rules ANN (type annotations) (#29271)
See https://docs.astral.sh/ruff/rules/#flake8-annotations-ann
The advantage compared to mypy alone is that ruff is very fast at
detecting missing annotations.

ANN101 and ANN102 are deprecated, so we ignore them.
ANN401 (no Any type) is ignored to stay in sync with the mypy config.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-22 17:46:28 -05:00
Bagatur
979a991dc2 core[patch]: dont deep copy merge_message_runs (#28454)
afaict no need to deep copy here, if we merge messages then we convert
them to chunks first anyways
2025-02-22 21:56:45 +00:00
Mohammad Mohtashim
afa94e5bf7 _wait_for_run calling fix for OpenAIAssistantRunnable (#29927)
- **Description:** Fixed the `OpenAIAssistantRunnable` call of
`_wait_for_run`
- **Issue:**  #29923
2025-02-22 00:27:24 +00:00
Vadym Barda
437fe6d216 core[patch]: return ToolMessage from tools when tool call ID is empty string (#29921) 2025-02-21 11:53:15 -05:00
Taofiq Aiyelabegan
5ee8a8f063 [Integration]: Langchain-Permit (#29867)
## Which area of LangChain is being modified?
- This PR adds a new "Permit" integration to the `docs/integrations/`
folder.
- Introduces two new Tools (`LangchainJWTValidationTool` and
`LangchainPermissionsCheckTool`)
- Introduces two new Retrievers (`PermitSelfQueryRetriever` and
`PermitEnsembleRetriever`)
- Adds demo scripts in `examples/` showcasing usage.

## Description of Changes
- Created `langchain_permit/tools.py` for JWT validation and permission
checks with Permit.
- Created `langchain_permit/retrievers.py` for custom Permit-based
retrievers.
- Added documentation in `docs/integrations/providers/permit.ipynb` (or
`.mdx`) to explain setup, usage, and examples.
- Provided sample scripts in `examples/demo_scripts/` to illustrate
usage of these tools and retrievers.
- Ensured all code is linted and tested locally.

Thank you again for reviewing!

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-21 10:59:00 -05:00
Jean-Philippe Dournel
ebe38baaf9 community/mlx_pipeline: fix crash at mlx call (#29915)
- **Description:**
Since mlx_lm 0.20, all calls to mlx crash due to deprecation of the way
parameters are passed to the generate and generate_step methods.
The parameters top_p, temp, repetition_penalty and repetition_context_size
are no longer passed directly to those methods but are wrapped into
"sampler" and "logit_processor".

- **Dependencies:** mlx_lm (optional)

- **Tests:**
I've added a new test to the existing test file:
tests/integration_tests/llms/test_mlx_pipeline.py

---------

Co-authored-by: Jean-Philippe Dournel <jp@insightkeeper.io>
2025-02-21 09:14:53 -05:00
Sinan CAN
bd773cffc3 docs: remove redundant cell in sql_large_db guide (#29917)
2025-02-21 08:38:49 -05:00
ccurme
1fa9f6bc20 docs: build mongo in api ref (#29908) 2025-02-20 19:58:35 -05:00
Chaunte W. Lacewell
d972c6d6ea partners: add langchain-vdms (#29857)
**Description:** Deprecate vdms in community, add integration
langchain-vdms, and update any related files
**Issue:** n/a
**Dependencies:** langchain-vdms
**Twitter handle:** n/a

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 19:48:46 -05:00
Mohammad Mohtashim
8293142fa0 mistral[patch]: support model_kwargs (#29838)
- **Description:** Frequency_penalty added as a client parameter
- **Issue:** #29803

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 18:47:39 -05:00
ccurme
924d9b1b33 cli[patch]: fix retriever template (#29907)
Chat model tabs don't render correctly in .ipynb template.
2025-02-20 17:51:19 +00:00
Brayden Zhong
a70f31de5f Community: RankLLMRerank AttributeError (Handle list-based rerank results) (#29840)
# community: Fix AttributeError in RankLLMRerank (`list` object has no
attribute `candidates`)

## **Description**
This PR fixes an issue in `RankLLMRerank` where reranking fails with the
following error:

```
AttributeError: 'list' object has no attribute 'candidates'
```

The issue arises because `rerank_batch()` returns a `List[Result]`
instead of an object containing `.candidates`.

### **Changes Introduced**
- Adjusted `compress_documents()` to support both:
  - Old API format: `rerank_results.candidates`
  - New API format: `rerank_results` as a list
  - Also fix wrong .txt location parsing while I was at it.
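A minimal sketch of the compatibility shim described above (names assumed):

```python
def _get_candidates(rerank_results):
    # New rank_llm API: rerank_batch() returns a plain List[Result];
    # old API: an object exposing a .candidates attribute.
    if isinstance(rerank_results, list):
        return rerank_results
    return rerank_results.candidates
```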

---

## **Issue**
Fixes **AttributeError** in `RankLLMRerank` when using
`compression_retriever.invoke()`. The issue is observed when
`rerank_batch()` returns a list instead of an object with `.candidates`.

**Relevant log:**
```
AttributeError: 'list' object has no attribute 'candidates'
```

## **Dependencies**
- No additional dependencies introduced.

---

## **Checklist**
- [x] **Backward compatible** with previous API versions
- [x] **Tested** locally with different RankLLM models
- [x] **No new dependencies introduced**
- [x] **Linted** with `make format && make lint`
- [x] **Ready for review**

---

## **Testing**
- Ran `compression_retriever.invoke(query)`

## **Reviewers**
If no review within a few days, please **@mention** one of:
- @baskaryan
- @efriis
- @eyurtsev
- @ccurme
- @vbarda
- @hwchase17
2025-02-20 12:38:31 -05:00
Levon Ghukasyan
ec403c442a Separate deepale vector store (#29902)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:37:19 +00:00
Jorge Piedrahita Ortiz
3acf842e35 core: add sambanova chat models to load module mapping (#29855)
- **Description:** add sambanova integration package chat models to load
module mapping, to allow serialization and deserialization
2025-02-20 12:30:50 -05:00
ccurme
d227e4a08e mistralai[patch]: release 0.2.7 (#29906) 2025-02-20 17:27:12 +00:00
Hande
d8bab89e6e community: add cognee retriever (#29878)
This PR adds a new cognee integration: knowledge-graph-based retrieval,
enabling developers to ingest documents into cognee's knowledge graph,
process them, and then retrieve context via CogneeRetriever.
It includes:
- langchain_cognee package with a CogneeRetriever class
- a test for the integration, demonstrating how to create, process, and
retrieve with cognee
- an example notebook showing its use. It lives in
`docs/docs/integrations` directory.



Thank you for the review!

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:15:23 +00:00
Sinan CAN
97dd5f45ae Update retrieval.mdx (#29905)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 17:12:29 +00:00
dokato
92b415a9f6 community: Made some Jira fields optional for agent to work correctly (#29876)
**Description:** Two small changes have been proposed here:
(1)
The previous code assumed that every issue has a priority field; if an
issue lacked this field, the code raised a KeyError.
Now the code checks whether priority exists before accessing it, and
assigns None if it is missing instead of crashing. This prevents runtime
errors when processing issues without a priority.

(2)
Likewise, if the "style" field is missing, the code threw a KeyError;
`.get("style", None)` safely retrieves the value if present, as in the
sketch below.

**Issue:** #29875 

**Dependencies:** N/A
2025-02-20 12:10:11 -05:00
am-kinetica
ca7eccba1f Handled a bug around empty query results differently (#29877)
- **Handled query records properly**: community: vectorstores/kinetica

- **Bugfix for empty query results handling**:
- **Description:** check the number of records returned by a query
before processing further
- **Issue:** an empty result previously caused an `AttributeError`, which
has now been fixed

@efriis
2025-02-20 12:07:49 -05:00
Antonio Pisani
2c403a3ea9 docs: Add langchain-prolog documentation (#29788)
I want to add documentation for a new integration with SWI-Prolog.

@hwchase17 check this out:

https://github.com/apisani1/langchain-prolog/tree/main/examples/travel_agent

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 11:50:28 -05:00
Marlene
be7fa920fa Partner: Azure AI Langchain Docs and Package Registry (#29879)
This PR adds documentation for the Azure AI package in Langchain to the
main mono-repo

No issue connected or updated dependencies.

Utilises existing tests and makes updates to the docs

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-20 14:35:26 +00:00
Hankyeol Kyung
2dd0ce3077 openai: Update reasoning_effort arg documentation (#29897)
**Description:** Update docstring for `reasoning_effort` argument to
specify that it applies to reasoning models only (e.g., OpenAI o1 and
o3-mini), clarifying its supported models.
**Issue:** None
**Dependencies:** None
2025-02-20 09:03:42 -05:00
Joe Ferrucci
c28ee329c9 Fix typo in local_llms.ipynb docs (#29903)
Change `tailed` to `tailored`

`Docs > How-To > Local LLMs:`

https://python.langchain.com/docs/how_to/local_llms/#:~:text=use%20a%20prompt-,tailed,-for%20your%20specific
2025-02-20 09:03:10 -05:00
ccurme
ed3c2bd557 core[patch]: set version="v2" as default in astream_events (#29894) 2025-02-19 23:21:37 +00:00
Fabian Blatz
a2d05a376c community: ConfluenceLoader: add a filter method for attachments (#29882)
Adds a `attachment_filter_func` parameter to the ConfluenceLoader class
which can be used to determine which files are indexed. This is useful
if you are interested in excluding files based on their media type or
other metadata.
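
A hypothetical usage sketch; only the `attachment_filter_func` parameter comes from this PR, and the shape of the attachment object passed to the callable is an assumption:

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",
    space_key="DOCS",
    include_attachments=True,
    # Hypothetical predicate: only index PDF attachments.
    attachment_filter_func=lambda att: att.get("metadata", {}).get("mediaType") == "application/pdf",
)
docs = loader.load()
```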
2025-02-19 18:20:45 -05:00
ccurme
9ed47a4d63 community[patch]: release 0.3.18 (#29896) 2025-02-19 20:13:00 +00:00
ccurme
92889edafd core[patch]: release 0.3.37 (#29895) 2025-02-19 20:04:35 +00:00
ccurme
ffd6194060 core[patch]: de-beta rate limiters (#29891) 2025-02-19 19:19:59 +00:00
Erick Friis
5637210a20 infra: run docs build on packages.yml updates (#29796)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-19 18:45:30 +00:00
ccurme
fb4c8423f0 docs: fix builds (#29890)
Missed in https://github.com/langchain-ai/langchain/pull/29889
2025-02-19 13:35:59 -05:00
ccurme
68b13e5172 pinecone: delete from monorepo (#29889)
This now lives in https://github.com/langchain-ai/langchain-pinecone
2025-02-19 12:55:15 -05:00
Erick Friis
6c1e21d128 core: basemessage.text() (#29078) 2025-02-18 17:45:44 -08:00
Ben Burns
e2ba336e72 docs: fix partner package table build for packages with no download stats (#29871)
The build in #29867 is currently broken because `langchain-cli` didn't
add download stats to the provider file.

This change gracefully handles sorting packages with missing download
counts. I initially updated the build to fetch download counts on every
run, but pypistats [requests](https://pypistats.org/api/) that users not
fetch stats like this via CI.
2025-02-19 11:05:57 +13:00
Eugene Yurtsev
8e5074d82d core: release 0.3.36 (#29869)
Release 0.3.36
2025-02-18 19:51:43 +00:00
Vadym Barda
d04fa1ae50 core[patch]: allow passing JSON schema as args_schema to tools (#29812) 2025-02-18 14:44:31 -05:00
ccurme
5034a8dc5c xai[patch]: release 0.2.1 (#29854) 2025-02-17 14:30:41 -05:00
ccurme
83dcef234d xai[patch]: support dedicated structured output feature (#29853)
https://docs.x.ai/docs/guides/structured-outputs

Interface appears identical to OpenAI's.
```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel

class Joke(BaseModel):
    setup: str
    punchline: str

llm = init_chat_model("xai:grok-2").with_structured_output(
    Joke, method="json_schema"
)
llm.invoke("Tell me a joke about cats.")
```
2025-02-17 14:19:51 -05:00
ccurme
9d6fcd0bfb infra: add xai to scheduled testing (#29852) 2025-02-17 18:59:45 +00:00
ccurme
8a3b05ae69 langchain[patch]: release 0.3.19 (#29851) 2025-02-17 13:36:23 -05:00
ccurme
c9061162a1 langchain[patch]: add xai to extras (#29850) 2025-02-17 17:49:34 +00:00
Bagatur
1acf57e9bd langchain[patch]: init_chat_model xai support (#29849) 2025-02-17 09:45:39 -08:00
Paul Nikonowicz
1a55da9ff4 docs: Update gemini vector docs (#29841)
# Description

2 changes: 
1. removes `getpass` from the code example, as it reads from stdin and can
cause the notebook to freeze
2. updates the example to use the latest Gemini model

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-17 07:54:23 -05:00
hsm207
037b129b86 weaviate: Add-deprecation-warning (#29757)
- **Description:** add deprecation warning when using weaviate from
langchain_community
  - **Issue:** NA
  - **Dependencies:** NA
  - **Twitter handle:** NA

---------

Signed-off-by: hsm207 <hsm207@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-16 21:42:18 -05:00
Đỗ Quang Minh
cd198ac9ed community: add custom model for OpenAIWhisperParser (#29831)
Add a `model` property for OpenAIWhisperParser, defaulted to `whisper-1`
(the previous value).
Please help me update the docs and other related components of this
repo.
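
A usage sketch (import path from the community package; behavior is unchanged if `model` is omitted):

```python
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

# Defaults to "whisper-1" (the previous hard-coded value); now overridable.
parser = OpenAIWhisperParser(api_key="sk-...", model="whisper-1")
```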
2025-02-16 21:26:07 -05:00
Cole McIntosh
6874c9c1d0 docs: add notebook for langchain-salesforce package (#29800)
**Description:**  
This PR adds a Jupyter notebook that explains the features,
installation, and usage of the
[`langchain-salesforce`](https://github.com/colesmcintosh/langchain-salesforce)
package. The notebook includes:
- Setup instructions for configuring Salesforce credentials  
- Example code demonstrating common operations such as querying,
describing objects, creating, updating, and deleting records

**Issue:**  
N/A

**Dependencies:**  
No new dependencies are required.

**Tests and Docs:**  
- Added an example notebook demonstrating the usage of the
`langchain-salesforce` package, located in `docs/docs/integrations`.

**Lint and Test:**  
- Ran `make format`, `make lint`, and `make test` successfully.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-16 08:34:57 -05:00
Jan Heimes
60f58df5b3 community: add top_k as param to Needle Retriever (#29821)
Thank you for contributing to LangChain!

- [X] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is
being modified. Use "docs: ..." for purely docs changes, "infra: ..."
for CI changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**: 
This PR adds `top_k` as a param to the Needle Retriever; by default we use
top 10 (see the usage sketch after this checklist).



- [X] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

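
A usage sketch, assuming the community import path (credentials and collection id are illustrative):

```python
from langchain_community.retrievers.needle import NeedleRetriever

retriever = NeedleRetriever(
    needle_api_key="needle-...",  # illustrative
    collection_id="clt-...",      # illustrative
    top_k=5,                      # new parameter; defaults to 10
)
docs = retriever.invoke("What is the Needle retriever?")
```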
2025-02-16 08:30:52 -05:00
Mateusz Szewczyk
8147679169 docs: Rename IBM product name to IBM watsonx (#29802)
Thank you for contributing to LangChain!

Rename IBM product name to `IBM watsonx`

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-15 21:48:02 -05:00
Jesus Fernandez Bes
1dfac909d8 community: Adding IN Operator to AzureCosmosDBNoSQLVectorStore (#29805)
- **Description**: I have added a new operator to the operator map, with
key `$in` and value `IN`, so that you can define filters using lists as
values. This was already contemplated, but because the IN operator was not
in the map, such filters could not be used.
- **Issue**: Fixes #29804.
- **Dependencies**: No extra.
2025-02-15 21:44:54 -05:00
Wahed Hemati
8901b113c3 docs: add Discord integration docs (#29822)
This PR adds documentation for the `langchain-discord-shikenso`
integration, including an example notebook at
`docs/docs/integrations/tools/discord.ipynb` and updates to
`libs/packages.yml` to track the new package.

  **Issue:**  
  N/A

  **Dependencies:**  
  None

  **Twitter handle:**  
  N/A

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-15 21:43:45 -05:00
Akmal Ali Jasmin
f1792e486e fix: Correct getpass usage in Google Generative AI Embedding docs (#29809) (#29810)
**fix: Correct getpass usage in Google Generative AI Embedding docs
(#29809)**

- **Description:** Corrected the `getpass` usage in the Google
Generative AI Embedding documentation by replacing `getpass()` with
`getpass.getpass()` to fix the `TypeError`.
- **Issue:** #29809  
- **Dependencies:** None  

**Additional Notes:**  
The change ensures compatibility with Google Colab and follows Python's
`getpass` module usage standards.
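
The corrected pattern, for reference:

```python
import getpass

# getpass is a module; call its getpass() function, not the module itself.
api_key = getpass.getpass("Enter your Google API key: ")
```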
2025-02-15 21:41:00 -05:00
HackHuang
80ca310c15 langchain : Add the full code snippet in rag.ipynb (#29820)
docs(rag.ipynb): Add the `full code` snippets; they are useful for
beginners as complete, runnable demonstrations.

Preview the change:
https://langchain-git-fork-googtech-patch-3-langchain.vercel.app/docs/tutorials/rag/

Two `full code` snippets are added below:
<details>
<summary>Full Code:</summary>

```python
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chat_models import init_chat_model
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from google.colab import userdata
from langchain_core.prompts import PromptTemplate
from langchain_core.documents import Document
from typing_extensions import List, TypedDict
from langgraph.graph import START, StateGraph

#################################################
# 1.Initialize the ChatModel and EmbeddingModel #
#################################################
llm = init_chat_model(
    model="gpt-4o-mini",
    model_provider="openai",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)

#######################
# 2.Loading documents #
#######################
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        # Only keep post title, headers, and content from the full HTML.
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()

#########################
# 3.Splitting documents #
#########################
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # chunk size (characters)
    chunk_overlap=200,  # chunk overlap (characters)
    add_start_index=True,  # track index in original document
)
all_splits = text_splitter.split_documents(docs)

###########################################################
# 4.Embedding documents and storing them in a vectorstore #
###########################################################
vector_store = InMemoryVectorStore(embeddings)
_ = vector_store.add_documents(documents=all_splits)

##########################################################
# 5.Customizing the prompt or loading it from Prompt Hub #
##########################################################
# prompt = hub.pull("rlm/rag-prompt") # load the prompt from the prompt-hub
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.

{context}

Question: {question}

Helpful Answer:"""
prompt = PromptTemplate.from_template(template)

##################################################################################################
# 5.Using LangGraph to tie together the retrieval and generation steps into a single application #
##################################################################################################
# 5.1.Define the state of the application, which controls the application data
class State(TypedDict):
    question: str
    context: List[Document]
    answer: str

# 5.2.1.Define a node of the application, representing one application step
def retrieve(state: State):
    retrieved_docs = vector_store.similarity_search(state["question"])
    return {"context": retrieved_docs}

# 5.2.2.Define a node of the application, representing one application step
def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}

# 6.Define the "control flow" of the application, i.e. the ordering of the application steps
graph_builder = StateGraph(State).add_sequence([retrieve, generate])
graph_builder.add_edge(START, "retrieve")
graph = graph_builder.compile()
```

</details>

<details>
<summary>Full Code:</summary>

```python
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chat_models import init_chat_model
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from google.colab import userdata
from langchain_core.prompts import PromptTemplate
from langchain_core.documents import Document
from typing_extensions import List, TypedDict
from langgraph.graph import START, StateGraph
from typing import Literal
from typing_extensions import Annotated

#################################################
# 1.Initialize the ChatModel and EmbeddingModel #
#################################################
llm = init_chat_model(
    model="gpt-4o-mini",
    model_provider="openai",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)

#######################
# 2.Loading documents #
#######################
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        # Only keep post title, headers, and content from the full HTML.
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()

#########################
# 3.Splitting documents #
#########################
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # chunk size (characters)
    chunk_overlap=200,  # chunk overlap (characters)
    add_start_index=True,  # track index in original document
)
all_splits = text_splitter.split_documents(docs)

# Search analysis: Add some metadata to the documents in our vector store,
# so that we can filter on section later. 
total_documents = len(all_splits)
third = total_documents // 3
for i, document in enumerate(all_splits):
    if i < third:
        document.metadata["section"] = "beginning"
    elif i < 2 * third:
        document.metadata["section"] = "middle"
    else:
        document.metadata["section"] = "end"

# Search analysis: Define the schema for our search query
class Search(TypedDict):
    query: Annotated[str, ..., "Search query to run."]
    section: Annotated[
        Literal["beginning", "middle", "end"], ..., "Section to query."]

###########################################################
# 4.Embedding documents and storing them in a vectorstore #
###########################################################
vector_store = InMemoryVectorStore(embeddings)
_ = vector_store.add_documents(documents=all_splits)

##########################################################
# 5.Customizing the prompt or loading it from Prompt Hub #
##########################################################
# prompt = hub.pull("rlm/rag-prompt") # load the prompt from the prompt-hub
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.

{context}

Question: {question}

Helpful Answer:"""
prompt = PromptTemplate.from_template(template)

###################################################################
# 5.Using LangGraph to tie together the analyze_query, retrieval  #
# and generation steps into a single application                  #
###################################################################
# 5.1.Define the state of the application, which controls the application data
class State(TypedDict):
    question: str
    query: Search
    context: List[Document]
    answer: str

# Search analysis: Define the node of the application
# which is used to generate a query from the user's raw input
def analyze_query(state: State):
    structured_llm = llm.with_structured_output(Search)
    query = structured_llm.invoke(state["question"])
    return {"query": query}

# 5.2.1.Define a node of the application, representing one application step
def retrieve(state: State):
    query = state["query"]
    retrieved_docs = vector_store.similarity_search(
        query["query"],
        filter=lambda doc: doc.metadata.get("section") == query["section"],
    )
    return {"context": retrieved_docs}

# 5.2.2.Define a node of the application, representing one application step
def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}

# 6.Define the "control flow" of the application, i.e. the ordering of the application steps
graph_builder = StateGraph(State).add_sequence([analyze_query, retrieve, generate]) 
graph_builder.add_edge(START, "analyze_query")
graph = graph_builder.compile()
```

</details>

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-15 21:37:58 -05:00
Michael Chin
b2c21f3e57 docs: Update SagemakerEndpoint examples (#29814)
Related issue: https://github.com/langchain-ai/langchain-aws/issues/361

Updated the AWS `SagemakerEndpoint` LLM documentation to import from
`langchain-aws`.
2025-02-15 21:34:56 -05:00
Krishna Kulkarni
a98c5f1c4b langchain_community: add image support to DuckDuckGoSearchAPIWrapper (#29816)
- [ ] **PR title**: langchain_community: add image support to
DuckDuckGoSearchAPIWrapper

- **Description:** This PR enhances the `DuckDuckGoSearchAPIWrapper`
within the langchain_community package by introducing support for image
searches. The enhancement includes:
  - Adding a new method `_ddgs_images` to handle image search queries.
- Updating the `run` and `results` methods to process and return image
search results appropriately.
- Modifying the `source` parameter to accept `"images"` as a valid option,
alongside `"text"` and `"news"`.
- **Dependencies:** No additional dependencies are required for this
change.
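
A usage sketch of the new image source (exact result keys may vary):

```python
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper

wrapper = DuckDuckGoSearchAPIWrapper(source="images")
results = wrapper.results("golden retriever", max_results=5)
for result in results:
    print(result)  # image hits include fields like the image URL and title
```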
2025-02-15 21:32:14 -05:00
Iris Liu
0d9f0b4215 docs: updates Chroma integration API ref docs (#29826)
- Description: updates Chroma integration API ref docs
- Issue: #29817
- Dependencies: N/A
- Twitter handle: @irieliu

Co-authored-by: Iris <liuirisny@gmail.com>
2025-02-15 21:05:21 -05:00
ccurme
3fe7c07394 openai[patch]: release 0.3.6 (#29824) 2025-02-15 13:53:35 -05:00
ccurme
65a6dce428 openai[patch]: enable streaming for o1 (#29823)
Verified streaming works for the `o1-2024-12-17` snapshot as well.
2025-02-15 12:42:05 -05:00
Christophe Bornet
3dffee3d0b all: Bump blockbuster version to 1.5.18 (#29806)
Has fixes for running on Windows and non-CPython runtimes.
2025-02-14 07:55:38 -08:00
ccurme
d9a069c414 tests[patch]: release 0.3.12 (#29797) 2025-02-13 23:57:44 +00:00
ccurme
e4f106ea62 groq[patch]: remove xfails (#29794)
These appear to pass.
2025-02-13 15:49:50 -08:00
Erick Friis
f34e62ef42 packages: add langchain-xai (#29795)
wasn't registered per the contribution guide:
https://python.langchain.com/docs/contributing/how_to/integrations/
2025-02-13 15:36:41 -08:00
ccurme
49cc6106f7 tests[patch]: fix query for test_tool_calling_with_no_arguments (#29793) 2025-02-13 23:15:52 +00:00
Erick Friis
1a225fad03 multiple: fix uv path deps (#29790)
The file:// format wasn't working with updates; it doesn't install as an
editable dep.

Moved to tool.uv.sources with path= instead.
2025-02-13 21:32:34 +00:00
Erick Friis
ff13384eb6 packages: update counts, add command (#29789) 2025-02-13 20:45:25 +00:00
Mateusz Szewczyk
8d0e31cbc5 docs: Fix model_id on EmbeddingTabs page (#29784)
Thank you for contributing to LangChain!

Fix `model_id` in IBM provider on EmbeddingTabs page

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-13 09:41:51 -08:00
Mateusz Szewczyk
61f1be2152 docs: Added IBM to ChatModelTabs and EmbeddingTabs (#29774)
Thank you for contributing to LangChain!

Added IBM to ChatModelTabs and EmbeddingTabs

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-13 08:43:42 -08:00
HackHuang
76d32754ff core : update the class docs of InMemoryVectorStore in in_memory.py (#29781)
- **Description:** Add a new introduction about checking `store` in
in_memory.py; it's helpful for beginners.
```python
Check Documents:
    .. code-block:: python
    
        for doc in vector_store.store.values():
            print(doc)
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 16:41:47 +00:00
Mateusz Szewczyk
b82cef36a5 docs: Update IBM WatsonxLLM and ChatWatsonx documentation (#29752)
Thank you for contributing to LangChain!

Update presented model in `WatsonxLLM` and `ChatWatsonx` documentation.

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2025-02-13 08:41:07 -08:00
Mohammad Mohtashim
96ad09fa2d (Community): Added API Key for Jina Search API Wrapper (#29622)
- **Description:** Simple change for adding the API Key for Jina Search
API Wrapper
- **Issue:** #29596
2025-02-12 20:12:07 -08:00
ccurme
f1c66a3040 docs: minor fix to provider table (#29771)
Langfair renders as LangfAIr
2025-02-13 04:06:58 +00:00
Jakub Kopecký
c8cb7c25bf docs: update apify integration (#29553)
**Description:** Fixed and updated Apify integration documentation to
use the new [langchain-apify](https://github.com/apify/langchain-apify)
package.
**Twitter handle:** @apify
2025-02-12 20:02:55 -08:00
ccurme
16fb1f5371 chroma[patch]: release 0.2.2 (#29769)
Resolves https://github.com/langchain-ai/langchain/issues/29765
2025-02-13 02:39:16 +00:00
Mohammad Mohtashim
2310847c0f (Chroma): Small Fix in add_texts when checking for embeddings (#29766)
- **Description:** Small fix in `add_texts` so that embedding
nullability is checked properly.
- **Issue:** #29765
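
The general pattern involved, as a sketch (not the actual diff): test the embeddings for `None` explicitly rather than by truthiness, since an empty sequence is falsy and arrays do not coerce cleanly to bool.

```python
def pair_texts_with_embeddings(texts, embeddings=None):
    # `if embeddings:` would wrongly treat an empty list/array as "absent";
    # an explicit None check keeps the two cases distinct.
    if embeddings is not None:
        return list(zip(texts, embeddings))
    return [(text, None) for text in texts]
```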

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 02:26:13 +00:00
Eric Pinzur
716fd89d8e docs: contributed Graph RAG Retriever integration (#29744)
**Description:** 

This adds the `Graph RAG` Retriever integration documentation, per
https://python.langchain.com/docs/contributing/how_to/integrations/.

* The integration exists in this public repository:
https://github.com/datastax/graph-rag
* We've implemented the standard langchain tests for retrievers:
https://github.com/datastax/graph-rag/blob/main/packages/langchain-graph-retriever/tests/test_langchain.py
* Our integration is published to PyPi:
https://pypi.org/project/langchain-graph-retriever/
2025-02-12 18:25:48 -08:00
Sunish Sheth
f42dafa809 Deprecating sql_database access for creating UC functions for agent tools (#29745)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-13 02:24:44 +00:00
Thor 雷神 Schaeff
a0970d8d7e [WIP] chore: update ElevenLabs tool. (#29722)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 01:54:34 +00:00
Chaymae El Aattabi
4b08a7e8e8 Fix #29759: Use local chunk_size_ for looping in embed_documents (#29761)
This fix ensures that the chunk size is correctly determined when
processing text embeddings. Previously, the code did not properly handle
cases where chunk_size was None, potentially leading to incorrect
chunking behavior.

Now, chunk_size_ is explicitly set to either the provided chunk_size or
the default self.chunk_size, ensuring consistent chunking. This update
improves reliability when processing large text inputs in batches and
prevents unintended behavior when chunk_size is not specified.
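
A self-contained sketch of the batching logic after the fix (names assumed from the summary above):

```python
def iter_batches(texts, chunk_size=None, default_chunk_size=1000):
    # Loop over a local chunk_size_ that falls back to the default,
    # instead of looping on a possibly-None argument.
    chunk_size_ = chunk_size or default_chunk_size
    for i in range(0, len(texts), chunk_size_):
        yield texts[i : i + chunk_size_]
```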

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-02-13 01:28:26 +00:00
Jorge Piedrahita Ortiz
1fbc01c350 docs: update sambanova integration api reference links (#29762)
- **Description:** update sambanova external package integration api
reference links in docs
2025-02-12 15:58:00 -08:00
Sunish Sheth
043d78d85d Deprecate langchain community ucfunctiontoolkit in favor of databricks_langchain (#29746)
2025-02-12 15:50:35 -08:00
Hugues Chocart
e4eec9e9aa community: add langchain-abso documentation (#29739)
Add the documentation for the community package `langchain-abso`. It
provides a new chat model class that uses https://abso.ai.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2025-02-12 19:57:33 +00:00
ccurme
e61f463745 core[patch]: release 0.3.35 (#29764) 2025-02-12 18:13:10 +00:00
Nuno Campos
fe59f2cc88 core: Fix output of convert_messages when called with BaseMessage.model_dump() (#29763)
- additional_kwargs was being nested twice
- for example, response_metadata was placed inside additional_kwargs
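
A round-trip illustrating the fix (the public helper in core is `convert_to_messages`; the extra kwarg is illustrative):

```python
from langchain_core.messages import AIMessage, convert_to_messages

msg = AIMessage(content="hi", additional_kwargs={"hint": "x"})
restored = convert_to_messages([msg.model_dump()])[0]
# Previously the kwargs came back nested one level too deep, with
# response_metadata misplaced inside additional_kwargs.
assert restored.additional_kwargs == {"hint": "x"}
```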
2025-02-12 10:05:33 -08:00
Jacob Lee
f4e3e86fbb feat(langchain): Infer o3 modelstrings passed to init_chat_model as OpenAI (#29743) 2025-02-11 16:51:41 -08:00
Mohammad Mohtashim
9f3bcee30a (Community): Adding Structured Support for ChatPerplexity (#29361)
- **Description:** Adding Structured Support for ChatPerplexity
- **Issue:** #29357
- This is implemented as per the Perplexity official docs:
https://docs.perplexity.ai/guides/structured-outputs
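
A usage sketch (model name and key are illustrative):

```python
from langchain_community.chat_models import ChatPerplexity
from pydantic import BaseModel

class Answer(BaseModel):
    answer: str

llm = ChatPerplexity(model="sonar", pplx_api_key="pplx-...")
structured_llm = llm.with_structured_output(Answer)
result = structured_llm.invoke("Who wrote The Hobbit?")
```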

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-11 15:51:18 -08:00
Jawahar S
994c5465e0 feat: add support for IBM WatsonX AI chat models (#29688)
**Description:** Updated init_chat_model to support Granite models
deployed on IBM WatsonX
**Dependencies:**
[langchain-ibm](https://github.com/langchain-ai/langchain-ibm)

Tagging @baskaryan @efriis for review when you get a chance.
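
A usage sketch, with the provider string and model id assumed (requires `langchain-ibm` plus WatsonX credentials in the environment):

```python
from langchain.chat_models import init_chat_model

# Provider string and model id are assumptions for illustration.
llm = init_chat_model("ibm/granite-13b-chat-v2", model_provider="ibm")
response = llm.invoke("Hello from WatsonX")
```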
2025-02-11 15:34:29 -08:00
Shailendra Mishra
c7d74eb7a3 Oraclevs integration (#29723)
  community: langchain_community/vectorstore/oraclevs.py


- **Description:** Refactored code to allow a connection or a connection
pool.
- **Issue:** Normally an idle connection is terminated by the server-side
listener at timeout, so a user has to re-instantiate the vector store; the
timeout for a single connection is not configurable. The solution is to use
a connection pool, where a user can specify a user-defined timeout and the
connections are managed by the pool.
    - **Dependencies:** None
    - **Twitter handle:** 


- [ ] **Add tests and docs**: This is not a new integration. A user can
pass either a connection or a connection pool. The determination of what
is passed is made at run time. Everything should work as before.

- [ ] **Lint and test**:  Already done.
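
A sketch of the pool-backed usage described above (pool settings and table name are illustrative; the point is that `client` may now be a connection pool with a user-defined timeout):

```python
import oracledb
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores.oraclevs import OracleVS

pool = oracledb.create_pool(
    user="scott", password="tiger", dsn="localhost/orclpdb",
    min=1, max=4, increment=1, timeout=600,  # user-defined idle timeout
)
vector_store = OracleVS(
    client=pool,                               # a pool or a single connection
    embedding_function=FakeEmbeddings(size=384),
    table_name="documents",
)
```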


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2025-02-11 14:56:55 -08:00
ccurme
42ebf6ae0c deepseek[patch]: release 0.1.2 (#29742) 2025-02-11 11:53:43 -08:00
ccurme
ec55553807 pinecone[patch]: release 0.2.3 (#29741) 2025-02-11 19:27:39 +00:00
ccurme
001cf99253 pinecone[patch]: add support for python 3.13 (#29737) 2025-02-11 11:20:21 -08:00
ccurme
ba8f752bf5 openai[patch]: release 0.3.5 (#29740) 2025-02-11 19:20:11 +00:00
ccurme
9477f49409 openai, deepseek: make _convert_chunk_to_generation_chunk an instance method (#29731)
1. Make `_convert_chunk_to_generation_chunk` an instance method on
BaseChatOpenAI
2. Override on ChatDeepSeek to add `"reasoning_content"` to message
additional_kwargs.

Resolves https://github.com/langchain-ai/langchain/issues/29513
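
A simplified sketch of the override (chunk handling abbreviated; the actual implementation lives in `langchain_deepseek.chat_models`):

```python
from langchain_openai.chat_models.base import BaseChatOpenAI

class ChatDeepSeek(BaseChatOpenAI):  # abbreviated sketch
    def _convert_chunk_to_generation_chunk(self, chunk, default_chunk_class, base_generation_info):
        generation_chunk = super()._convert_chunk_to_generation_chunk(
            chunk, default_chunk_class, base_generation_info
        )
        if generation_chunk is None:
            return None
        # Surface DeepSeek's reasoning tokens next to the regular content.
        if (choices := chunk.get("choices")) and (
            reasoning := choices[0].get("delta", {}).get("reasoning_content")
        ):
            generation_chunk.message.additional_kwargs["reasoning_content"] = reasoning
        return generation_chunk
```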
2025-02-11 11:13:23 -08:00
Christopher Menon
1edd27d860 docs: fix SQL-based metadata filter syntax, add link to BigQuery docs (#29736)
Fix the syntax for SQL-based metadata filtering in the [Google BigQuery
Vector Search
docs](https://python.langchain.com/docs/integrations/vectorstores/google_bigquery_vector_search/#searching-documents-with-metadata-filters).
Also add a link to learn more about BigQuery operators that can be used
here.

I have been using this library, and have found that this is the correct
syntax to use for the SQL-based filters.

**Issue**: no open issue.
**Dependencies**: none.
**Twitter handle**: none.

No tests as this is only a change to the documentation.
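
For reference, a SQL-based filter is a string predicate over metadata columns, roughly like the following (the call shape here is an assumption; see the linked docs for the authoritative syntax):

```python
# `store` is an existing BigQuery vector store instance (assumed).
docs = store.similarity_search(
    "sustainable fashion",
    filter='source = "news" AND year >= 2024',  # illustrative SQL predicate
)
```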

2025-02-11 11:10:12 -08:00
ccurme
d0c2dc06d5 mongodb[patch]: fix link in readme (#29738) 2025-02-11 18:19:59 +00:00
zzaebok
3b3d52206f community: change wikidata rest api version from v0 to v1 (#29708)
**Description:**

According to the [wikidata
documentation](https://www.wikidata.org/wiki/Wikidata_talk:REST_API),
Wikibase REST API version 1 (stable) was released on November 11, 2024.
The guidance is to use the new v1 API, which in almost all cases just
requires replacing v0 in the routes with v1.
So I replaced v0 with v1 in WIKIDATA_REST_API_URL for stable usage.
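
The change is essentially a one-line constant update (base URL shown as assumed from the Wikibase REST API docs):

```python
# Before: "https://www.wikidata.org/w/rest.php/wikibase/v0"
WIKIDATA_REST_API_URL = "https://www.wikidata.org/w/rest.php/wikibase/v1"
```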

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-02-10 17:12:38 -08:00
ccurme
4a389ef4c6 community: fix extended testing (#29715)
v0.3.100 of premai sdk appears to break on import:
89d9276cbf/premai/api/__init__.py (L230)
2025-02-10 16:57:34 -08:00
Yoav Levy
af3f759073 docs: fixed nimble's provider page and retriever (#29695)
## **Description:**
- Added information about the retriever that Nimble's provider exposes.
- Fixed the authentication explanation on the retriever page.
2025-02-10 15:30:40 -08:00
453 changed files with 25129 additions and 17022 deletions

.github/CODEOWNERS

@@ -1,2 +1,2 @@
/.github/ @efriis @baskaryan @ccurme
/libs/packages.yml @efriis
/.github/ @baskaryan @ccurme
/libs/packages.yml @ccurme


@@ -26,4 +26,4 @@ Additional guidelines:
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
If no one reviews your PR within a few days, please @-mention one of baskaryan, eyurtsev, ccurme, vbarda, hwchase17.


@@ -39,7 +39,6 @@ IGNORED_PARTNERS = [
PY_312_MAX_PACKAGES = [
"libs/partners/huggingface", # https://github.com/pytorch/pytorch/issues/130249
"libs/partners/pinecone",
"libs/partners/voyageai",
]


@@ -6,6 +6,7 @@ on:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
python-version:
required: true
type: string
@@ -64,8 +65,6 @@ jobs:
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}


@@ -12,6 +12,7 @@ on:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
default: 'libs/langchain'
dangerous-nonmaster-release:
required: false
@@ -100,15 +101,32 @@ jobs:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
# Handle regular versions and pre-release versions differently
if [[ "$VERSION" == *"-"* ]]; then
# This is a pre-release version (contains a hyphen)
# Extract the base version without the pre-release suffix
BASE_VERSION=${VERSION%%-*}
# Look for the latest release of the same base version
REGEX="^$PKG_NAME==$BASE_VERSION\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
# If no exact base version match, look for the latest release of any kind
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
PREV_TAG=$(git tag --sort=-creatordate | (grep -P "$REGEX" || true) | head -1)
fi
else
# Regular version handling
PREV_TAG="$PKG_NAME==${VERSION%.*}.$(( ${VERSION##*.} - 1 ))"; [[ "${VERSION##*.}" -eq 0 ]] && PREV_TAG=""
# backup case if releasing e.g. 0.3.0, looks up last release
# note if last release (chronologically) was e.g. 0.1.47 it will get
# that instead of the last 0.2 release
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | (grep -P $REGEX || true) | head -1)
# backup case if releasing e.g. 0.3.0, looks up last release
# note if last release (chronologically) was e.g. 0.1.47 it will get
# that instead of the last 0.2 release
if [ -z "$PREV_TAG" ]; then
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | (grep -P $REGEX || true) | head -1)
fi
fi
# if PREV_TAG is empty, let it be empty
@@ -297,8 +315,6 @@ jobs:
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
@@ -314,12 +330,89 @@ jobs:
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
# Test select published packages against new core
test-prior-published-packages-against-new-core:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
strategy:
matrix:
partner: [openai, anthropic]
fail-fast: false # Continue testing other partners if one fails
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LEGACY_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
steps:
- uses: actions/checkout@v4
# We implement this conditional as Github Actions does not have good support
# for conditionally needing steps. https://github.com/actions/runner/issues/491
- name: Check if libs/core
run: |
if [ "${{ startsWith(inputs.working-directory, 'libs/core') }}" != "true" ]; then
echo "Not in libs/core. Exiting successfully."
exit 0
fi
- name: Set up Python + uv
if: startsWith(inputs.working-directory, 'libs/core')
uses: "./.github/actions/uv_setup"
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: actions/download-artifact@v4
if: startsWith(inputs.working-directory, 'libs/core')
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Test against ${{ matrix.partner }}
if: startsWith(inputs.working-directory, 'libs/core')
run: |
# Identify latest tag
LATEST_PACKAGE_TAG="$(
git ls-remote --tags origin "langchain-${{ matrix.partner }}*" \
| awk '{print $2}' \
| sed 's|refs/tags/||' \
| sort -Vr \
| head -n 1
)"
echo "Latest package tag: $LATEST_PACKAGE_TAG"
# Shallow-fetch just that single tag
git fetch --depth=1 origin tag "$LATEST_PACKAGE_TAG"
# Checkout the latest package files
rm -rf $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}/*
cd $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}
git checkout "$LATEST_PACKAGE_TAG" -- .
# Print as a sanity check
echo "Version number from pyproject.toml: "
cat pyproject.toml | grep "version = "
# Run tests
uv sync --group test --group test_integration
uv pip install ../../core/dist/*.whl
make integration_tests
publish:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- test-prior-published-packages-against-new-core
runs-on: ubuntu-latest
permissions:
# This permission is used for trusted publishing:


@@ -1,4 +1,3 @@
---
name: CI
on:


@@ -1,4 +1,3 @@
---
name: Integration docs lint
on:


@@ -1,4 +1,3 @@
---
name: CI / cd . / make spell_check
on:


@@ -15,7 +15,7 @@ on:
env:
POETRY_VERSION: "1.8.4"
UV_FROZEN: "true"
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
DEFAULT_LIBS: '["libs/partners/openai", "libs/partners/anthropic", "libs/partners/fireworks", "libs/partners/groq", "libs/partners/mistralai", "libs/partners/xai", "libs/partners/google-vertexai", "libs/partners/google-genai", "libs/partners/aws"]'
POETRY_LIBS: ("libs/partners/google-vertexai" "libs/partners/google-genai" "libs/partners/aws")
jobs:
@@ -139,6 +139,7 @@ jobs:
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
HUGGINGFACEHUB_API_TOKEN: ${{ secrets.HUGGINGFACEHUB_API_TOKEN }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}


@@ -97,12 +97,6 @@ repos:
entry: make -C libs/partners/openai format
files: ^libs/partners/openai/
pass_filenames: false
- id: pinecone
name: format partners/pinecone
language: system
entry: make -C libs/partners/pinecone format
files: ^libs/partners/pinecone/
pass_filenames: false
- id: prompty
name: format partners/prompty
language: system


@@ -1,12 +1,7 @@
# Read the Docs configuration file
# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
# Required
version: 2
formats:
- pdf
# Set the version of Python and other tools you might need
build:
os: ubuntu-22.04
@@ -15,15 +10,16 @@ build:
commands:
- mkdir -p $READTHEDOCS_OUTPUT
- cp -r api_reference_build/* $READTHEDOCS_OUTPUT
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/api_reference/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
# - pdf
formats:
- pdf
# Optionally declare the Python requirements required to build your docs
python:
install:
- requirements: docs/api_reference/requirements.txt
- requirements: docs/api_reference/requirements.txt


@@ -2,7 +2,6 @@
.EXPORT_ALL_VARIABLES:
UV_FROZEN = true
UV_NO_SYNC = true
## help: Show this help info.
help: Makefile
@@ -83,3 +82,6 @@ lint lint_package lint_tests:
format format_diff:
uv run --group lint ruff format docs cookbook
uv run --group lint ruff check --select I --fix docs cookbook
update-package-downloads:
uv run python docs/scripts/packages_yml_get_downloads.py

README.md

@@ -1,6 +1,12 @@
# 🦜️🔗 LangChain
<picture>
<source media="(prefers-color-scheme: light)" srcset="docs/static/img/logo-dark.svg">
<source media="(prefers-color-scheme: dark)" srcset="docs/static/img/logo-light.svg">
<img alt="LangChain Logo" src="docs/static/img/logo-dark.svg" width="80%">
</picture>
⚡ Build context-aware reasoning applications ⚡
<div>
<br>
</div>
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
@@ -12,131 +18,65 @@
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).
To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.
## Quick Install
With pip:
LangChain is a framework for building LLM-powered applications. It helps you chain
together interoperable components and third-party integrations to simplify AI
application development — all while future-proofing decisions as the underlying
technology evolves.
```bash
pip install langchain
pip install -U langchain
```
With conda:
To learn more about LangChain, check out
[the docs](https://python.langchain.com/docs/introduction/). If you're looking for more
advanced customization or agent orchestration, check out
[LangGraph](https://langchain-ai.github.io/langgraph/), our framework for building
controllable agent workflows.
```bash
conda install langchain -c conda-forge
```
## Why use LangChain?
## 🤔 What is LangChain?
LangChain helps developers build applications powered by LLMs through a standard
interface for models, embeddings, vector stores, and more.
**LangChain** is a framework for developing applications powered by large language models (LLMs).
Use LangChain for:
- **Real-time data augmentation**. Easily connect LLMs to diverse data sources and
external / internal systems, drawing from LangChain's vast library of integrations with
model providers, tools, vector stores, retrievers, and more.
- **Model interoperability**. Swap models in and out as your engineering team
experiments to find the best choice for your application's needs. As the industry
frontier evolves, adapt quickly — LangChain's abstractions keep you moving without
losing momentum.
For these applications, LangChain simplifies the entire application lifecycle:
## LangChain's ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly
with any LangChain product, giving developers a full suite of tools when building LLM
applications.
To improve your LLM application development, pair LangChain with:
- **Open-source libraries**: Build your applications using LangChain's open-source
[components](https://python.langchain.com/docs/concepts/) and
[third-party integrations](https://python.langchain.com/docs/integrations/providers/).
Use [LangGraph](https://langchain-ai.github.io/langgraph/) to build stateful agents with first-class streaming and human-in-the-loop support.
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Platform](https://langchain-ai.github.io/langgraph/cloud/).
- [LangSmith](http://www.langchain.com/langsmith) - Helpful for agent evals and
observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain
visibility in production, and improve performance over time.
- [LangGraph](https://langchain-ai.github.io/langgraph/) - Build agents that can
reliably handle complex tasks with LangGraph, our low-level agent orchestration
framework. LangGraph offers customizable architecture, long-term memory, and
human-in-the-loop workflows — and is trusted in production by companies like LinkedIn,
Uber, Klarna, and GitLab.
- [LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/#langgraph-platform) - Deploy
and scale agents effortlessly with a purpose-built deployment platform for long
running, stateful workflows. Discover, reuse, configure, and share agents across
teams — and iterate quickly with visual prototyping in
[LangGraph Studio](https://langchain-ai.github.io/langgraph/concepts/langgraph_studio/).
### Open-source libraries
- **`langchain-core`**: Base abstractions.
- **Integration packages** (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Important integrations have been split into lightweight packages that are co-maintained by the LangChain team and the integration developers.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **`langchain-community`**: Third-party integrations that are community maintained.
- **[LangGraph](https://langchain-ai.github.io/langgraph)**: LangGraph powers production-grade agents, trusted by Linkedin, Uber, Klarna, GitLab, and many more. Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it. To learn more about LangGraph, check out our first LangChain Academy course, *Introduction to LangGraph*, available [here](https://academy.langchain.com/courses/intro-to-langgraph).
### Productionization:
- **[LangSmith](https://docs.smith.langchain.com/)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
### Deployment:
- **[LangGraph Platform](https://langchain-ai.github.io/langgraph/cloud/)**: Turn your LangGraph applications into production-ready APIs and Assistants.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024.svg#gh-light-mode-only "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack_112024_dark.svg#gh-dark-mode-only "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?
**❓ Question answering with RAG**
- [Documentation](https://python.langchain.com/docs/tutorials/rag/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/docs/tutorials/extraction/)
- End-to-end Example: [LangChain Extract](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
- [Documentation](https://python.langchain.com/docs/tutorials/chatbot/)
- End-to-end Example: [Web LangChain (web researcher chatbot)](https://weblangchain.vercel.app) and [repo](https://github.com/langchain-ai/weblangchain)
And much more! Head to the [Tutorials](https://python.langchain.com/docs/tutorials/) section of the docs for more.
## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
1. **Components**: composable building blocks, tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not.
2. **Easy orchestration with LangGraph**: [LangGraph](https://langchain-ai.github.io/langgraph/),
built on top of `langchain-core`, has built-in support for [messages](https://python.langchain.com/docs/concepts/messages/), [tools](https://python.langchain.com/docs/concepts/tools/),
and other LangChain abstractions. This makes it easy to combine components into
production-ready applications with persistence, streaming, and other key features.
Check out the LangChain [tutorials page](https://python.langchain.com/docs/tutorials/#orchestration) for examples.
## Components
Components fall into the following **modules**:
**📃 Model I/O**
This includes [prompt management](https://python.langchain.com/docs/concepts/prompt_templates/)
and a generic interface for [chat models](https://python.langchain.com/docs/concepts/chat_models/), including a consistent interface for [tool-calling](https://python.langchain.com/docs/concepts/tool_calling/) and [structured output](https://python.langchain.com/docs/concepts/structured_outputs/) across model providers.
**📚 Retrieval**
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/concepts/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/concepts/text_splitters/), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/docs/concepts/retrievers/) it for use in the generation step.
**🤖 Agents**
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. [LangGraph](https://langchain-ai.github.io/langgraph/) makes it easy to use
LangChain components to build both [custom](https://langchain-ai.github.io/langgraph/tutorials/)
and [built-in](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/)
LLM agents.
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Introduction](https://python.langchain.com/docs/introduction/): Overview of the framework and the structure of the docs.
- [Tutorials](https://python.langchain.com/docs/tutorials/): If you're looking to build something specific or are more of a hands-on learner, check out our tutorials. This is the best place to get started.
- [How-to guides](https://python.langchain.com/docs/how_to/): Answers to “How do I….?” type questions. These guides are goal-oriented and concrete; they're meant to help you complete a specific task.
- [Conceptual guide](https://python.langchain.com/docs/concepts/): Conceptual explanations of the key parts of the framework.
- [API Reference](https://python.langchain.com/api_reference/): Thorough documentation of every class and method.
## 🌐 Ecosystem
- [🦜🛠️ LangSmith](https://docs.smith.langchain.com/): Trace and evaluate your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://langchain-ai.github.io/langgraph/): Create stateful, multi-actor applications with LLMs. Integrates smoothly with LangChain, but can be used without it.
- [🦜🕸️ LangGraph Platform](https://langchain-ai.github.io/langgraph/concepts/#langgraph-platform): Deploy LLM applications built with LangGraph into production.
## 💁 Contributing
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether it be in the form of a new feature, improved infrastructure, or better documentation.
For detailed information on how to contribute, see [here](https://python.langchain.com/docs/contributing/).
## 🌟 Contributors
[![langchain contributors](https://contrib.rocks/image?repo=langchain-ai/langchain&max=2000)](https://github.com/langchain-ai/langchain/graphs/contributors)
## Additional resources
- [Tutorials](https://python.langchain.com/docs/tutorials/): Simple walkthroughs with
guided examples on getting started with LangChain.
- [How-to Guides](https://python.langchain.com/docs/how_to/): Quick, actionable code
snippets for topics such as tool calling, RAG use cases, and more.
- [Conceptual Guides](https://python.langchain.com/docs/concepts/): Explanations of key
concepts behind the LangChain framework.
- [API Reference](https://python.langchain.com/api_reference/): Detailed reference on
navigating base packages and integrations for LangChain.

View File

@@ -21,7 +21,6 @@ Notebook | Description
[code-analysis-deeplake.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/code-analysis-deeplake.ipynb) | Analyze its own code base with the help of GPT and Activeloop's Deep Lake.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval.ipynb) | Build a custom agent that can interact with AI plugins by retrieving tools and creating natural language wrappers around OpenAPI endpoints.
[custom_agent_with_plugin_retri...](https://github.com/langchain-ai/langchain/tree/master/cookbook/custom_agent_with_plugin_retrieval_using_plugnplai.ipynb) | Build a custom agent with plugin retrieval functionality, utilizing AI plugins from the `plugnplai` directory.
[databricks_sql_db.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/databricks_sql_db.ipynb) | Connect to Databricks runtimes and Databricks SQL.
[deeplake_semantic_search_over_...](https://github.com/langchain-ai/langchain/tree/master/cookbook/deeplake_semantic_search_over_chat.ipynb) | Perform semantic search and question-answering over a group chat using Activeloop's Deep Lake with GPT-4.
[elasticsearch_db_qa.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/elasticsearch_db_qa.ipynb) | Interact with Elasticsearch analytics databases in natural language and build search queries via the Elasticsearch DSL API.
[extraction_openai_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/extraction_openai_tools.ipynb) | Structured data extraction with OpenAI tools.

View File

@@ -66,7 +66,7 @@
},
"outputs": [],
"source": [
"#!python3 -m pip install --upgrade langchain deeplake openai"
"#!python3 -m pip install --upgrade langchain langchain-deeplake openai"
]
},
{
@@ -666,89 +666,26 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Your Deep Lake dataset has been successfully created!\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
" \r"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Dataset(path='hub://adilkhan/langchain-code', tensors=['embedding', 'id', 'metadata', 'text'])\n",
"\n",
" tensor htype shape dtype compression\n",
" ------- ------- ------- ------- ------- \n",
" embedding embedding (8244, 1536) float32 None \n",
" id text (8244, 1) str None \n",
" metadata json (8244, 1) str None \n",
" text text (8244, 1) str None \n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": []
},
{
"data": {
"text/plain": [
"<langchain_community.vectorstores.deeplake.DeepLake at 0x7fe1b67d7a30>"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_deeplake.vectorstores import DeeplakeVectorStore\n",
"\n",
"username = \"<USERNAME_OR_ORG>\"\n",
"\n",
"\n",
"db = DeepLake.from_documents(\n",
" texts, embeddings, dataset_path=f\"hub://{username}/langchain-code\", overwrite=True\n",
"db = DeeplakeVectorStore.from_documents(\n",
" documents=texts,\n",
" embedding=embeddings,\n",
" dataset_path=f\"hub://{username}/langchain-code\",\n",
" overwrite=True,\n",
")\n",
"db"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"`Optional`: You can also use Deep Lake's Managed Tensor Database as a hosting service and run queries there. In order to do so, it is necessary to specify the runtime parameter as {'tensor_db': True} during the creation of the vector store. This configuration enables the execution of queries on the Managed Tensor Database, rather than on the client side. It should be noted that this functionality is not applicable to datasets stored locally or in-memory. In the event that a vector store has already been created outside of the Managed Tensor Database, it is possible to transfer it to the Managed Tensor Database by following the prescribed steps."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"# from langchain_community.vectorstores import DeepLake\n",
"\n",
"# db = DeepLake.from_documents(\n",
"# texts, embeddings, dataset_path=f\"hub://{<org_id>}/langchain-code\", runtime={\"tensor_db\": True}\n",
"# )\n",
"# db"
]
},
{
"attachments": {},
"cell_type": "markdown",
@@ -760,24 +697,16 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": null,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Deep Lake Dataset in hub://adilkhan/langchain-code already exists, loading from the storage\n"
]
}
],
"outputs": [],
"source": [
"db = DeepLake(\n",
"db = DeeplakeVectorStore(\n",
" dataset_path=f\"hub://{username}/langchain-code\",\n",
" read_only=True,\n",
" embedding=embeddings,\n",
" embedding_function=embeddings,\n",
")"
]
},
@@ -796,36 +725,6 @@
"retriever.search_kwargs[\"k\"] = 20"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also specify user defined functions using [Deep Lake filters](https://docs.deeplake.ai/en/latest/deeplake.core.dataset.html#deeplake.core.dataset.Dataset.filter)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"def filter(x):\n",
" # filter based on source code\n",
" if \"something\" in x[\"text\"].data()[\"value\"]:\n",
" return False\n",
"\n",
" # filter based on path e.g. extension\n",
" metadata = x[\"metadata\"].data()[\"value\"]\n",
" return \"only_this\" in metadata[\"source\"] or \"also_that\" in metadata[\"source\"]\n",
"\n",
"\n",
"### turn on below for custom filtering\n",
"# retriever.search_kwargs['filter'] = filter"
]
},
{
"cell_type": "code",
"execution_count": 20,
@@ -837,10 +736,8 @@
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(\n",
" model_name=\"gpt-3.5-turbo-0613\"\n",
") # 'ada' 'gpt-3.5-turbo-0613' 'gpt-4',\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
"model = ChatOpenAI(model=\"gpt-3.5-turbo-0613\") # 'ada' 'gpt-3.5-turbo-0613' 'gpt-4',\n",
"qa = RetrievalQA.from_llm(model, retriever=retriever)"
]
},
{

View File

@@ -1,273 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "707d13a7",
"metadata": {},
"source": [
"# Databricks\n",
"\n",
"This notebook covers how to connect to the [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the SQLDatabase wrapper of LangChain.\n",
"It is broken into 3 parts: installation and setup, connecting to Databricks, and examples."
]
},
{
"cell_type": "markdown",
"id": "0076d072",
"metadata": {},
"source": [
"## Installation and Setup"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "739b489b",
"metadata": {},
"outputs": [],
"source": [
"!pip install databricks-sql-connector"
]
},
{
"cell_type": "markdown",
"id": "73113163",
"metadata": {},
"source": [
"## Connecting to Databricks\n",
"\n",
"You can connect to [Databricks runtimes](https://docs.databricks.com/runtime/index.html) and [Databricks SQL](https://www.databricks.com/product/databricks-sql) using the `SQLDatabase.from_databricks()` method.\n",
"\n",
"### Syntax\n",
"```python\n",
"SQLDatabase.from_databricks(\n",
" catalog: str,\n",
" schema: str,\n",
" host: Optional[str] = None,\n",
" api_token: Optional[str] = None,\n",
" warehouse_id: Optional[str] = None,\n",
" cluster_id: Optional[str] = None,\n",
" engine_args: Optional[dict] = None,\n",
" **kwargs: Any)\n",
"```\n",
"### Required Parameters\n",
"* `catalog`: The catalog name in the Databricks database.\n",
"* `schema`: The schema name in the catalog.\n",
"\n",
"### Optional Parameters\n",
"There following parameters are optional. When executing the method in a Databricks notebook, you don't need to provide them in most of the cases.\n",
"* `host`: The Databricks workspace hostname, excluding 'https://' part. Defaults to 'DATABRICKS_HOST' environment variable or current workspace if in a Databricks notebook.\n",
"* `api_token`: The Databricks personal access token for accessing the Databricks SQL warehouse or the cluster. Defaults to 'DATABRICKS_TOKEN' environment variable or a temporary one is generated if in a Databricks notebook.\n",
"* `warehouse_id`: The warehouse ID in the Databricks SQL.\n",
"* `cluster_id`: The cluster ID in the Databricks Runtime. If running in a Databricks notebook and both 'warehouse_id' and 'cluster_id' are None, it uses the ID of the cluster the notebook is attached to.\n",
"* `engine_args`: The arguments to be used when connecting Databricks.\n",
"* `**kwargs`: Additional keyword arguments for the `SQLDatabase.from_uri` method."
]
},
{
"cell_type": "markdown",
"id": "b11c7e48",
"metadata": {},
"source": [
"## Examples"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8102bca0",
"metadata": {},
"outputs": [],
"source": [
"# Connecting to Databricks with SQLDatabase wrapper\n",
"from langchain_community.utilities import SQLDatabase\n",
"\n",
"db = SQLDatabase.from_databricks(catalog=\"samples\", schema=\"nyctaxi\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9dd36f58",
"metadata": {},
"outputs": [],
"source": [
"# Creating a OpenAI Chat LLM wrapper\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0, model_name=\"gpt-4\")"
]
},
{
"cell_type": "markdown",
"id": "5b5c5f1a",
"metadata": {},
"source": [
"### SQL Chain example\n",
"\n",
"This example demonstrates the use of the [SQL Chain](https://python.langchain.com/en/latest/modules/chains/examples/sqlite.html) for answering a question over a Databricks database."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "36f2270b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.utilities import SQLDatabaseChain\n",
"\n",
"db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "4e2b5f25",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new SQLDatabaseChain chain...\u001b[0m\n",
"What is the average duration of taxi rides that start between midnight and 6am?\n",
"SQLQuery:\u001b[32;1m\u001b[1;3mSELECT AVG(UNIX_TIMESTAMP(tpep_dropoff_datetime) - UNIX_TIMESTAMP(tpep_pickup_datetime)) as avg_duration\n",
"FROM trips\n",
"WHERE HOUR(tpep_pickup_datetime) >= 0 AND HOUR(tpep_pickup_datetime) < 6\u001b[0m\n",
"SQLResult: \u001b[33;1m\u001b[1;3m[(987.8122786304605,)]\u001b[0m\n",
"Answer:\u001b[32;1m\u001b[1;3mThe average duration of taxi rides that start between midnight and 6am is 987.81 seconds.\u001b[0m\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The average duration of taxi rides that start between midnight and 6am is 987.81 seconds.'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"db_chain.run(\n",
" \"What is the average duration of taxi rides that start between midnight and 6am?\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e496d5e5",
"metadata": {},
"source": [
"### SQL Database Agent example\n",
"\n",
"This example demonstrates the use of the [SQL Database Agent](/docs/integrations/tools/sql_database) for answering questions over a Databricks database."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "9918e86a",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import create_sql_agent\n",
"from langchain_community.agent_toolkits import SQLDatabaseToolkit\n",
"\n",
"toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
"agent = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c484a76e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mAction: list_tables_sql_db\n",
"Action Input: \u001b[0m\n",
"Observation: \u001b[38;5;200m\u001b[1;3mtrips\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI should check the schema of the trips table to see if it has the necessary columns for trip distance and duration.\n",
"Action: schema_sql_db\n",
"Action Input: trips\u001b[0m\n",
"Observation: \u001b[33;1m\u001b[1;3m\n",
"CREATE TABLE trips (\n",
"\ttpep_pickup_datetime TIMESTAMP, \n",
"\ttpep_dropoff_datetime TIMESTAMP, \n",
"\ttrip_distance FLOAT, \n",
"\tfare_amount FLOAT, \n",
"\tpickup_zip INT, \n",
"\tdropoff_zip INT\n",
") USING DELTA\n",
"\n",
"/*\n",
"3 rows from trips table:\n",
"tpep_pickup_datetime\ttpep_dropoff_datetime\ttrip_distance\tfare_amount\tpickup_zip\tdropoff_zip\n",
"2016-02-14 16:52:13+00:00\t2016-02-14 17:16:04+00:00\t4.94\t19.0\t10282\t10171\n",
"2016-02-04 18:44:19+00:00\t2016-02-04 18:46:00+00:00\t0.28\t3.5\t10110\t10110\n",
"2016-02-17 17:13:57+00:00\t2016-02-17 17:17:55+00:00\t0.7\t5.0\t10103\t10023\n",
"*/\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe trips table has the necessary columns for trip distance and duration. I will write a query to find the longest trip distance and its duration.\n",
"Action: query_checker_sql_db\n",
"Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Observation: \u001b[31;1m\u001b[1;3mSELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mThe query is correct. I will now execute it to find the longest trip distance and its duration.\n",
"Action: query_sql_db\n",
"Action Input: SELECT trip_distance, tpep_dropoff_datetime - tpep_pickup_datetime as duration FROM trips ORDER BY trip_distance DESC LIMIT 1\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3m[(30.6, '0 00:43:31.000000000')]\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3mI now know the final answer.\n",
"Final Answer: The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"'The longest trip distance is 30.6 miles and it took 43 minutes and 31 seconds.'"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent.run(\"What is the longest trip distance and how long did it take?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -115,7 +115,7 @@
"\n",
"PROMPT_TEMPLATE = \"\"\"Given an input question, create a syntactically correct Elasticsearch query to run. Unless the user specifies in their question a specific number of examples they wish to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.\n",
"\n",
"Unless told to do not query for all the columns from a specific index, only ask for a the few relevant columns given the question.\n",
"Unless told to do not query for all the columns from a specific index, only ask for a few relevant columns given the question.\n",
"\n",
"Pay attention to use only the column names that you can see in the mapping description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which index. Return the query as valid json.\n",
"\n",

View File

@@ -21,40 +21,6 @@
"* Passing raw images and text chunks to a multimodal LLM for answer synthesis "
]
},
{
"cell_type": "markdown",
"id": "6a6b6e73",
"metadata": {},
"source": [
"## Start VDMS Server\n",
"\n",
"Let's start a VDMS docker using port 55559 instead of default 55555. \n",
"Keep note of the port and hostname as this is needed for the vector store as it uses the VDMS Python client to connect to the server."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5f483872",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"a1b9206b08ef626e15b356bf9e031171f7c7eb8f956a2733f196f0109246fe2b\n"
]
}
],
"source": [
"! docker run --rm -d -p 55559:55555 --name vdms_rag_nb intellabs/vdms:latest\n",
"\n",
"# Connect to VDMS Vector Store\n",
"from langchain_community.vectorstores.vdms import VDMS_Client\n",
"\n",
"vdms_client = VDMS_Client(port=55559)"
]
},
{
"cell_type": "markdown",
"id": "2498a0a1",
@@ -67,20 +33,20 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "febbc459-ebba-4c1a-a52b-fed7731593f8",
"metadata": {},
"outputs": [],
"source": [
"! pip install --quiet -U vdms langchain-experimental\n",
"! pip install --quiet -U langchain-vdms langchain-experimental langchain-ollama\n",
"\n",
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install --quiet pdf2image \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml open_clip_torch"
"! pip install --quiet pdf2image \"unstructured[all-docs]==0.10.19\" \"onnxruntime==1.17.0\" pillow pydantic lxml open_clip_torch"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"id": "78ac6543",
"metadata": {},
"outputs": [],
@@ -89,6 +55,40 @@
"# load_dotenv(find_dotenv(), override=True);"
]
},
{
"cell_type": "markdown",
"id": "e5c8916e",
"metadata": {},
"source": [
"## Start VDMS Server\n",
"\n",
"Let's start a VDMS docker using port 55559 instead of default 55555. \n",
"Keep note of the port and hostname as this is needed for the vector store as it uses the VDMS Python client to connect to the server."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1e6e2c15",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"a701e5ac3523006e9540b5355e2d872d5d78383eab61562a675d5b9ac21fde65\n"
]
}
],
"source": [
"! docker run --rm -d -p 55559:55555 --name vdms_rag_nb intellabs/vdms:latest\n",
"\n",
"# Connect to VDMS Vector Store\n",
"from langchain_vdms.vectorstores import VDMS_Client\n",
"\n",
"vdms_client = VDMS_Client(port=55559)"
]
},
{
"cell_type": "markdown",
"id": "1e94b3fb-8e3e-4736-be0a-ad881626c7bd",
@@ -115,11 +115,12 @@
"import requests\n",
"\n",
"# Folder to store pdf and extracted images\n",
"datapath = Path(\"./data/multimodal_files\").resolve()\n",
"base_datapath = Path(\"./data/multimodal_files\").resolve()\n",
"datapath = base_datapath / \"images\"\n",
"datapath.mkdir(parents=True, exist_ok=True)\n",
"\n",
"pdf_url = \"https://www.loc.gov/lcm/pdf/LCM_2020_1112.pdf\"\n",
"pdf_path = str(datapath / pdf_url.split(\"/\")[-1])\n",
"pdf_path = str(base_datapath / pdf_url.split(\"/\")[-1])\n",
"with open(pdf_path, \"wb\") as f:\n",
" f.write(requests.get(pdf_url).content)"
]
@@ -185,8 +186,8 @@
"source": [
"import os\n",
"\n",
"from langchain_community.vectorstores import VDMS\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from langchain_vdms import VDMS\n",
"\n",
"# Create VDMS\n",
"vectorstore = VDMS(\n",
@@ -312,10 +313,10 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.llms.ollama import Ollama\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnableLambda, RunnablePassthrough\n",
"from langchain_ollama.llms import OllamaLLM\n",
"\n",
"\n",
"def prompt_func(data_dict):\n",
@@ -340,8 +341,8 @@
" \"As an expert art critic and historian, your task is to analyze and interpret images, \"\n",
" \"considering their historical and cultural significance. Alongside the images, you will be \"\n",
" \"provided with related text to offer context. Both will be retrieved from a vectorstore based \"\n",
" \"on user-input keywords. Please convert answers to english and use your extensive knowledge \"\n",
" \"and analytical skills to provide a comprehensive summary that includes:\\n\"\n",
" \"on user-input keywords. Please use your extensive knowledge and analytical skills to provide a \"\n",
" \"comprehensive summary that includes:\\n\"\n",
" \"- A detailed description of the visual elements in the image.\\n\"\n",
" \"- The historical and cultural context of the image.\\n\"\n",
" \"- An interpretation of the image's symbolism and meaning.\\n\"\n",
@@ -359,7 +360,7 @@
" \"\"\"Multi-modal RAG chain\"\"\"\n",
"\n",
" # Multi-modal LLM\n",
" llm_model = Ollama(\n",
" llm_model = OllamaLLM(\n",
" verbose=True, temperature=0.5, model=\"llava\", base_url=\"http://localhost:11434\"\n",
" )\n",
"\n",
@@ -419,6 +420,121 @@
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"© 2017 LARRY D. MOORE\n",
"\n",
"contemporary criticism of the less-than- thoughtful circumstances under which Lange photographed Thomson, the pictures power to engage has not diminished. Artists in other countries have appropriated the image, changing the mothers features into those of other ethnicities, but keeping her expression and the positions of her clinging children. Long after anyone could help the Thompson family, this picture has resonance in another time of national crisis, unemployment and food shortages.\n",
"\n",
"A striking, but very different picture is a 1900 portrait of the legendary Hin-mah-too-yah- lat-kekt (Chief Joseph) of the Nez Percé people. The Bureau of American Ethnology in Washington, D.C., regularly arranged for its photographer, De Lancey Gill, to photograph Native American delegations that came to the capital to confer with officials about tribal needs and concerns. Although Gill described Chief Joseph as having “an air of gentleness and quiet reserve,” the delegate skeptically appraises the photographer, which is not surprising given that the United States broke five treaties with Chief Joseph and his father between 1855 and 1885.\n",
"\n",
"More than a glance, second looks may reveal new knowledge into complex histories.\n",
"\n",
"Anne Wilkes Tucker is the photography curator emeritus of the Museum of Fine Arts, Houston and curator of the “Not an Ostrich” exhibition.\n",
"\n",
"28\n",
"\n",
"28 LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"THEYRE WILLING TO HAVE MEENTERTAIN THEM DURING THE DAY,BUT AS SOON AS IT STARTSGETTING DARK, THEY ALLGO OFF, AND LEAVE ME! \n",
"ROSA PARKS: IN HER OWN WORDS\n",
"\n",
"COMIC ART: 120 YEARS OF PANELS AND PAGES\n",
"\n",
"SHALL NOT BE DENIED: WOMEN FIGHT FOR THE VOTE\n",
"\n",
"More information loc.gov/exhibits\n",
"Nuestra Sefiora de las Iguanas\n",
"\n",
"Graciela Iturbides 1979 portrait of Zobeida Díaz in the town of Juchitán in southeastern Mexico conveys the strength of women and reflects their important contributions to the economy. Díaz, a merchant, was selling iguanas to cook and eat, carrying them on her head, as is customary.\n",
"\n",
"GRACIELA ITURBIDE. “NUESTRA SEÑORA DE LAS IGUANAS.” 1979. GELATIN SILVER PRINT. © GRACIELA ITURBIDE, USED BY PERMISSION. PRINTS AND PHOTOGRAPHS DIVISION.\n",
"\n",
"Iturbide requested permission to take a photograph, but this proved challenging because the iguanas were constantly moving, causing Díaz to laugh. The result, however, was a brilliant portrait that the inhabitants of Juchitán claimed with pride. They have reproduced it on posters and erected a statue honoring Díaz and her iguanas. The photo now appears throughout the world, inspiring supporters of feminism, womens rights and gender equality.\n",
"\n",
"—Adam Silvia is a curator in the Prints and Photographs Division.\n",
"\n",
"6\n",
"\n",
"6 LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"Migrant Mother is Florence Owens Thompson\n",
"\n",
"The iconic portrait that became the face of the Great Depression is also the most famous photograph in the collections of the Library of Congress.\n",
"\n",
"The Library holds the original source of the photo — a nitrate negative measuring 4 by 5 inches. Do you see a faint thumb in the bottom right? The photographer, Dorothea Lange, found the thumb distracting and after a few years had the negative altered to make the thumb almost invisible. Langes boss at the Farm Security Administration, Roy Stryker, criticized her action because altering a negative undermines the credibility of a documentary photo.\n",
"Shrimp Picker\n",
"\n",
"The photos and evocative captions of Lewis Hine served as source material for National Child Labor Committee reports and exhibits exposing abusive child labor practices in the United States in the first decades of the 20th century.\n",
"\n",
"LEWIS WICKES HINE. “MANUEL, THE YOUNG SHRIMP-PICKER, FIVE YEARS OLD, AND A MOUNTAIN OF CHILD-LABOR OYSTER SHELLS BEHIND HIM. HE WORKED LAST YEAR. UNDERSTANDS NOT A WORD OF ENGLISH. DUNBAR, LOPEZ, DUKATE COMPANY. LOCATION: BILOXI, MISSISSIPPI.” FEBRUARY 1911. NATIONAL CHILD LABOR COMMITTEE COLLECTION. PRINTS AND PHOTOGRAPHS DIVISION.\n",
"\n",
"For 15 years, Hine\n",
"\n",
"crisscrossed the country, documenting the practices of the worst offenders. His effective use of photography made him one of the committee's greatest publicists in the campaign for legislation to ban child labor.\n",
"\n",
"Hine was a master at taking photos that catch attention and convey a message and, in this photo, he framed Manuel in a setting that drove home the boys small size and unsafe environment.\n",
"\n",
"Captions on photos of other shrimp pickers emphasized their long working hours as well as one hazard of the job: The acid from the shrimp made pickers hands sore and “eats the shoes off your feet.”\n",
"\n",
"Such images alerted viewers to all that workers, their families and the nation sacrificed when children were part of the labor force. The Library holds paper records of the National Child Labor Committee as well as over 5,000 photographs.\n",
"\n",
"—Barbara Natanson is head of the Reference Section in the Prints and Photographs Division.\n",
"\n",
"8\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"Intergenerational Portrait\n",
"\n",
"Raised on the Apsáalooke (Crow) reservation in Montana, photographer Wendy Red Star created her “Apsáalooke Feminist” self-portrait series with her daughter Beatrice. With a dash of wry humor, mother and daughter are their own first-person narrators.\n",
"\n",
"Red Star explains the significance of their appearance: “The dress has power: You feel strong and regal wearing it. In my art, the elk tooth dress specifically symbolizes Crow womanhood and the matrilineal line connecting me to my ancestors. As a mother, I spend hours searching for the perfect elk tooth dress materials to make a prized dress for my daughter.”\n",
"\n",
"In a world that struggles with cultural identities, this photograph shows us the power and beauty of blending traditional and contemporary styles.\n",
"American Gothic Product #216040262 Price: $24\n",
"\n",
"U.S. Capitol at Night Product #216040052 Price: $24\n",
"\n",
"Good Reading Ahead Product #21606142 Price: $24\n",
"\n",
"Gordon Parks created an iconic image with this 1942 photograph of cleaning woman Ella Watson.\n",
"\n",
"Snow blankets the U.S. Capitol in this classic image by Ernest L. Crandall.\n",
"\n",
"Start your new year out right with a poster promising good reading for months to come.\n",
"\n",
"▪ Order online: loc.gov/shop ▪ Order by phone: 888.682.3557\n",
"\n",
"26\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"LIBRARY OF CONGRESS MAGAZINE\n",
"\n",
"SUPPORT\n",
"\n",
"A PICTURE OF PHILANTHROPY Annenberg Foundation Gives $1 Million and a Photographic Collection to the Library.\n",
"\n",
"A major gift by Wallis Annenberg and the Annenberg Foundation in Los Angeles will support the effort to reimagine the visitor experience at the Library of Congress. The foundation also is donating 1,000 photographic prints from its Annenberg Space for Photography exhibitions to the Library.\n",
"\n",
"The Library is pursuing a multiyear plan to transform the experience of its nearly 2 million annual visitors, share more of its treasures with the public and show how Library collections connect with visitors own creativity and research. The project is part of a strategic plan established by Librarian of Congress Carla Hayden to make the Library more user-centered for Congress, creators and learners of all ages.\n",
"\n",
"A 2018 exhibition at the Annenberg Space for Photography in Los Angeles featured over 400 photographs from the Library. The Library is planning a future photography exhibition, based on the Annenberg-curated show, along with a documentary film on the Library and its history, produced by the Annenberg Space for Photography.\n",
"\n",
"“The nations library is honored to have the strong support of Wallis Annenberg and the Annenberg Foundation as we enhance the experience for our visitors,” Hayden said. “We know that visitors will find new connections to the Library through the incredible photography collections and countless other treasures held here to document our nations history and creativity.”\n",
"\n",
"To enhance the Librarys holdings, the foundation is giving the Library photographic prints for long-term preservation from 10 other exhibitions hosted at the Annenberg Space for Photography. The Library holds one of the worlds largest photography collections, with about 14 million photos and over 1 million images digitized and available online.\n",
"18 LIBRARY OF CONGRESS MAGAZINE\n"
]
}
],
"source": [
@@ -461,10 +577,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
" The image depicts a woman with several children. The woman appears to be of Cherokee heritage, as suggested by the text provided. The image is described as having been initially regretted by the subject, Florence Owens Thompson, due to her feeling that it did not accurately represent her leadership qualities.\n",
"The historical and cultural context of the image is tied to the Great Depression and the Dust Bowl, both of which affected the Cherokee people in Oklahoma. The photograph was taken during this period, and its subject, Florence Owens Thompson, was a leader within her community who worked tirelessly to help those affected by these crises.\n",
"The image's symbolism and meaning can be interpreted as a representation of resilience and strength in the face of adversity. The woman is depicted with multiple children, which could signify her role as a caregiver and protector during difficult times.\n",
"Connections between the image and the related text include Florence Owens Thompson's leadership qualities and her regretted feelings about the photograph. Additionally, the mention of Dorothea Lange, the photographer who took this photo, ties the image to its historical context and the broader narrative of the Great Depression and Dust Bowl in Oklahoma. \n"
" The image is a black and white photograph by Dorothea Lange titled \"Destitute Pea Pickers in California. Mother of Seven Children. Age Thirty-Two. Nipomo, California.\" It was taken in March 1936 as part of the Farm Security Administration-Office of War Information Collection.\n",
"\n",
"The photograph features a woman with seven children, who appear to be in a state of poverty and hardship. The woman is seated, looking directly at the camera, while three of her children are standing behind her. They all seem to be dressed in ragged clothing, indicative of their impoverished condition.\n",
"\n",
"The historical context of this image is related to the Great Depression, which was a period of economic hardship in the United States that lasted from 1929 to 1939. During this time, many people struggled to make ends meet, and poverty was widespread. This photograph captures the plight of one such family during this difficult period.\n",
"\n",
"The symbolism of the image is multifaceted. The woman's direct gaze at the camera can be seen as a plea for help or an expression of desperation. The ragged clothing of the children serves as a stark reminder of the poverty and hardship experienced by many during this time.\n",
"\n",
"In terms of connections to the related text, it is mentioned that Florence Owens Thompson, the woman in the photograph, initially regretted having her picture taken. However, she later came to appreciate the importance of the image as a representation of the struggles faced by many during the Great Depression. The mention of Helena Zinkham suggests that she may have played a role in the creation or distribution of this photograph.\n",
"\n",
"Overall, this image is a powerful depiction of poverty and hardship during the Great Depression, capturing the resilience and struggles of one family amidst difficult times. \n"
]
}
],
@@ -491,11 +614,17 @@
"source": [
"! docker kill vdms_rag_nb"
]
},
{
"cell_type": "markdown",
"id": "fe4a98ee",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".langchain-venv",
"display_name": ".test-venv",
"language": "python",
"name": "python3"
},
@@ -509,7 +638,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.10"
}
},
"nbformat": 4,

View File

@@ -233,7 +233,7 @@ Question: {input}"""
_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.
Never query for all the columns from a specific table, only ask for a few relevant columns given the question.
Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.

View File

@@ -26,7 +26,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"2e44b44201c8778b462342ac97f5ccf05a4e02aa8a04505ecde97bf20dcc4cbb\n"
"76e78b89cee4d6d31154823f93592315df79c28410dfbfc87c9f70cbfdfa648b\n"
]
}
],
@@ -49,7 +49,7 @@
"metadata": {},
"outputs": [],
"source": [
"! pip install --quiet -U vdms langchain-experimental sentence-transformers opencv-python open_clip_torch torch accelerate"
"! pip install --quiet -U langchain-vdms langchain-experimental sentence-transformers opencv-python open_clip_torch torch accelerate"
]
},
{
@@ -63,7 +63,16 @@
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/data1/cwlacewe/apps/cwlacewe_langchain/.langchain-venv/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
" from .autonotebook import tqdm as notebook_tqdm\n"
]
}
],
"source": [
"import json\n",
"import os\n",
@@ -80,10 +89,10 @@
"from langchain_community.embeddings.sentence_transformer import (\n",
" SentenceTransformerEmbeddings,\n",
")\n",
"from langchain_community.vectorstores.vdms import VDMS, VDMS_Client\n",
"from langchain_core.callbacks.manager import CallbackManagerForLLMRun\n",
"from langchain_core.runnables import ConfigurableField\n",
"from langchain_experimental.open_clip import OpenCLIPEmbeddings\n",
"from langchain_vdms.vectorstores import VDMS, VDMS_Client\n",
"from transformers import (\n",
" AutoModelForCausalLM,\n",
" AutoTokenizer,\n",
@@ -363,7 +372,7 @@
"\t\tThere are 2 shoppers in this video. Shopper 1 is wearing a plaid shirt and a spectacle. Shopper 2 who is not completely captured in the frame seems to wear a black shirt and is moving away with his back turned towards the camera. There is a shelf towards the right of the camera frame. Shopper 2 is hanging an item back to a hanger and then quickly walks away in a similar fashion as shopper 2. Contents of the nearer side of the shelf with respect to camera seems to be camping lanterns and cleansing agents, arranged at the top. In the middle part of the shelf, various tools including grommets, a pocket saw, candles, and other helpful camping items can be observed. Midway through the shelf contains items which appear to be steel containers and items made up of plastic with red, green, orange, and yellow colors, while those at the bottom are packed in cardboard boxes. Contents at the farther part of the shelf are well stocked and organized but are not glaringly visible.\n",
"\n",
"\tMetadata:\n",
"\t\t{'fps': 24.0, 'id': 'c6e5f894-b905-46f5-ac9e-4487a9235561', 'total_frames': 120.0, 'video': 'clip16.mp4'}\n",
"\t\t{'fps': 24.0, 'total_frames': 120.0, 'video': 'clip16.mp4'}\n",
"Retrieved Top matching video!\n",
"\n",
"\n"
@@ -392,18 +401,12 @@
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "3edf8783e114487ca490d8dec5c46884",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
"name": "stderr",
"output_type": "stream",
"text": [
"Loading checkpoint shards: 100%|██████████| 2/2 [00:18<00:00, 9.01s/it]\n",
"WARNING:accelerate.big_modeling:Some parameters are on the meta device because they were offloaded to the cpu.\n"
]
}
],
"source": [
@@ -555,7 +558,7 @@
"\t\tA single shopper is seen in this video standing facing the shelf and in the bottom part of the frame. He's wearing a light-colored shirt and a spectacle. The shopper is carrying a red colored basket in his left hand. The entire basket is not clearly visible, but it does seem to contain something in a blue colored package which the shopper has just placed in the basket given his right hand was seen inside the basket. Then the shopper leans towards the shelf and checks out an item in orange package. He picks this single item with his right hand and proceeds to place the item in the basket. The entire shelf looks well stocked except for the top part of the shelf which is empty. The shopper has not picked any item from this part of the shelf. The rest of the shelf looks well stocked and does not need any restocking. The contents on the farther part of the shelf consists of items, majority of which are packed in black, yellow, and green packages. No other details are visible of these items.\n",
"\n",
"\tMetadata:\n",
"\t\t{'fps': 24.0, 'id': '37ddc212-994e-4db0-877f-5ed09965ab90', 'total_frames': 162.0, 'video': 'clip10.mp4'}\n",
"\t\t{'fps': 24.0, 'total_frames': 162.0, 'video': 'clip10.mp4'}\n",
"Retrieved Top matching video!\n",
"\n",
"\n"
@@ -585,7 +588,7 @@
"User : Find a man holding a red shopping basket\n",
"Assistant : Most relevant retrieved video is **clip9.mp4** \n",
"\n",
"I see a person standing in front of a well-stocked shelf, they are wearing a light-colored shirt and glasses, and they have a red shopping basket in their left hand. They are leaning forward and picking up an item from the shelf with their right hand. The item is packaged in a blue-green box. Based on the scene description, I can confirm that the person is indeed holding a red shopping basket.</s>\n"
"I see a person standing in front of a well-stocked shelf, they are wearing a light-colored shirt and glasses, and they have a red shopping basket in their left hand. They are leaning forward and picking up an item from the shelf with their right hand. The item is packaged in a blue-green box. Based on the available information, I cannot confirm whether the basket is empty or contains items. However, the rest of the\n"
]
}
],
@@ -655,7 +658,7 @@
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": ".langchain-venv",
"language": "python",
"name": "python3"
},
@@ -669,7 +672,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.11.10"
}
},
"nbformat": 4,

View File

@@ -328,7 +328,7 @@ html[data-theme=dark] .MathJax_SVG * {
}
.bd-sidebar-primary {
width: 22%; /* Adjust this value to your preference */
width: max-content; /* Adjust this value to your preference */
line-height: 1.4;
}

eNptVXlsFGUULxCPqFFT0UQ82G40IdKZndnZs4e17LaldEuPXbSFkPrtN9/sTHeuzrHd3arYVv4weI0oEpNKeu1iKQWlYgUxHqmagBojlRQV9Q/iAcYgicYL/Ha7lTYwyR4z732/93vv/d6bvmwCabqgyEvGBdlAGoAGvtGtvqyGukykG09kJGTwCjvS3BSODJuaMHsfbxiqXuZwAFUgFRXJQCChIjkStAPywHDg/6qI8jAjUYVNzYo9dgnpOogh3V62qccOFRxJNuxl9ggSRZuEbMDWqcTxT1QxDVsUAU23l9o1RUTYx9SRZn90c6ldUlgk4gcx1SAY0k0YphZVsJ9uaAhI9jIOiDp6NMsjwOKUThXdPMIrumFNLKa5D0CIMAKSocIKcszaG0sLaqmNRZwIDDSGyckoXwRrLI6QSgBRSKDM3ClrP1BVUYAgZ3d06oo8XkiGMFIqutw8lmNP4Mxlw5pswiSq6x3NKVxP2UaTHpqk9ycJ3QCCLOICESLAfDJq3n54oUEFMI5BiEKvrMzc4YmFPopujTYC2BReBAk0yFujQJM8rgMLn2umbAgSsrKB5svDFYyXwjEk7SR9ry0C1lMytEbzRX9z0WFkaCkCKhjDGqQyUFHiArJOLrmmowNyHVGpsjb9oDvQ3bRefyjlrVlvxkFAhrViMl0fqlFrYmRYpM0Etc5sp/hGgva6aKfX52PcBE1SJM6ZqDdDBt1c16lIzqivfmMqInVQzTBc19jA1DJiWCATDfUh3ddYw7W2mu4gUtvWyc1NnBxYX90AIw0BMaShQFuQbE9Wp9zORMuGuDvUBTtCAlPdqdVHNRALAX96Y7Mn2F5uw5TNhMBWJmtJ0q02JJy6DMNdUF3DC2l6bXojG6+JSeFgROl6MBzkw+2tvpYFnCnKQ1AF2h7K5aNy18S8YkQkxwzeGqYp324N6SqeF9SfwYU0TL1vBKsTHfs4WxicoaaGS8K+bSSIlWodifBmqY3y2BqBZnNSTreN9pQxTJnbbatrjIwHCmEiVxTmaxENyDqHxVkzPwhZyJtyHLFjgSuOwJHcCOD+5ujj0SRQUlV0RBRYWeNtROvcxiDqgwfm5o1QtBiQhXQ+rHUkPwvd6WQ3C02W5RPdEuVPuxghikzITRaOqJqSC4MJEZJuDTNO70TBMq/GMZwrRdAUQdGHkoSGSyEKkoDrmf8urC3dGsHlp6YudzDwpsELLuvKd4N6Z6GHhiQs41zsSzAuv9//9pWd5qEY7OL3eg4t9tLRQja0U9KnLncoQAxR+nhy3psQWGv2HnzTwXm8yIO8XhhlIeeNAhcHnSwLvBTnpwDyM2/h3SdAjJJrpqpoBqEjiHe0kbJmSyWQzG2eSoZ2Mx6cablNkKFosihsRoNKLge93KZqSFQAuw9yBASQR8Sc/qxssH19dWN94GAbsVBIRJM6937IyoouCxyXCSMNN8Yag6JisniFaigTqCVaq9utST8NGRfN0ZwPeZiojyXW4OU0j/a/7EZy+zcLRMw9Aa0DPFNpL3O5GHu5TQKVPg9uU/4t0pvJ5SrHppfsXbnt2qL8tQx/Ll58qvWo/BV189tnVh+s+KoXffTGmYbeO7ctvap4eeU9Wx/YHn+SmL53ak/F91scq1bv6Ms84Kg4dvYGbsffrUWrYjM3viBM7vT89edMokcnPt/y1Ndffvbtl+lzp3ue/Ca7r/fu/pfRVb9sGbQemtk66Ekvr1smFXe+d3ajTB49zR3evLdn6N5t9x/9edX5mek0ufnkF4+0kCf6Xy+JPTdzo3x90eMvXfjk9v7pujf6p8/eKljtJ3YnfigpevHjWFDgPhrq3z27dsV1fYPP/jucWnqmdG3//rqBd8u43/64lny3/7bQrl/f3zy1Zri8Vl1608slz5/46YPjzu8H4cDqrZ/+vaTqFSY9Bocqb/rn9qqp76ZeLe7MPHd+f3Oksqe06LHfD/946niYbrljtPivp0vu2LFn+8CKN6tKHNe0rp1cee409/szm1DL5PmK2VHnSRdfd3oq+U173S1DO40L5cd6TmVPkccvwqriDx9ePrkhxA+kNiXvbNipnO+AP9z1R++t8FD31au7dv5W1Vbx46e7JmrP3XL9wZn3tq4IhyaqBi7senZ6Zb4ny4oyzKFhP27Qf7FZXVw=

View File

@@ -0,0 +1 @@
eNqdVWtwE9cVtovbuCmQtKGY0OmgKAmQ4JV3tbJerpzIsuWY2JaxZPzg4V7tXkkr78t7d23JxpNCMm1TG6dbm2nKQDLBD4HiBxQSHFI8GVIaaGknkISODW1DJo/OZMiESaZJhhL36uFgD/yqfkjavd95fd855+5KdEAFcZKYO8aJKlQAo+IHpO9KKLBdg0h9alSAakRih+t8/sCQpnAzD0dUVUbOoiIgcyZJhiLgTIwkFHVQRUwEqEX4v8zDtJvhoMTGZ3Mf6TYKECEQhsjo3NJtZCQcSlSNTmMjNliHDGoEGjohwD+KgRMNfu8jxkKjIvEQQzQEFWPPtkKjILGQxy/CskpYJELgRA6jkKpAIBidIcAjWGhUoSDjKlRNwbakicRvJInPhFXjcsphSBPTRWLjb/46u40iEFKnYai2ZlPBABYiRuHkDMZYCdWFqZowQAYKtsPEoZQPWcF8KCoH00+8xIB579nYOFtODBt7enB5mF9OgSxO7SYSl5lFSsEoZFSM7NnWk4hAwOIQzwxHJKTqE4uJnwQMAzEnUGQkFnvXx8NdnFxoYGGIBypMYrZFmC5TT7ZBKBOA5zrgaMZKPwxkmecy4YuiSBLHsuoQqURuPU6m9CCwlKKqH/PhJNxVRXVx3CGigTJZKRN1OEYgFXAijxUneIDzGZXT568uPJAB04adENnu00czxhMLMRLSR2oA4/MvcgkUJqKPAEWwWo4ufK9oosoJUE946m4Nlz28GY42UWaT/cgixyguMvpIupGOLzKGqhInGAn70F8gJ+b54aEYViP6EE05DioQybjf4ZOj2EzV0K5hrAU8dyaR7fsDvsfnRfxnTsFwOdZFPxmIaIUG0mqoAYrBTJqLDZTVSdNOC22orAmMebJhAreV4UhAASIKYSkq5mVPMBFNbINs0nNbwU+mBMfVpNLHo0XAmCwhSGSz0seaiPrMxBNV5Ucz3UVIShiIXFc6rH4yrXxnV6yTZTSWjXR0CqSjy0JzQagxoWNZEzwCqTA4IUJAmBwrOZE9mec+iWslCYokSOpEjMCzCnlO4DCf6e/s2kH6cDFJklO3AlSpDeIFlbCQ6c/0QoQCBSxaKvZNNxaHw/GH24PmXdEY4rAVn1iMQnBhNpRZQFO3ArIuDpBoLDaPJjhWn3kAP7SabRaKtNtIyk4DlmSDdiugg5AGVruZCdqLLa+k9gGDvaTElCVFJRBk8I5V4/pMoQBiqTlz0VQxZpEkS/BqZHiNhX4tWC6lakAlBlmBvATYSSZEMICJQCLTf3qivLnWXVPlSfpxkh5JauPgb2ZzV7W2MqHWoOBSZU+V2xoHQnu8vbjGjTRLwL7JXS/5N1eihvYyjxm1uDvaGqjI5hqCwkWYbXa7mSYoE2nCU0qwgUoEo22g1WNTWpHVFmDZOMNuDDTEvU1Nzc1as6MOVfg2bWLKKnzmmKUOKqQnYAGo3RGtkarVliaOlWJ2WfXxUTcdEDyq6tjULlVX14r18VhluLjM29q5WeVsSK3AJeJl6yoqMeCGxfsSubJjQ+CxIVJDY3OS80NTYmDTxLhMi1dkieExfGf5RD5eYvCnGIb4F+9tP6dCV60kwplBTIzWwbEu2ttIosdpqoGOhsrsUm2jucsW0hyVDc0+r6XFvtlnCtdvjJV7y7xoATN2i40gs+RYSYs93Zo3U/8/s3q5iVi4BQifnLmcE6KERC4UGvVDBU+VnmR4SWPxtlfgqMdL1Lub9WMOiqEtFCgO2c1BOmiHRBneo/PevtkZw6mrIgF43HgdjH40QruMTouFNpYYBOCyW/GMpa/wnaOZi+v0t5as6c3PSX+W9PlrpEvk0pPXG/N/8oZn+u7u6ZfO3DkWOQ5c/OvmVcnqSxvZswP/Hhd+PVd6ppZf9/HPvvpp/o4dz7w/dOyuVUt+5z716OC+8oYXP3X2rxFLC0ufOHH9P9e+tnad/WT7J1xNdLzz4y/Ia0s/m6pwr7v4I5Bsua9qqKSp6f2ntmsrifBn6/s/6l15sPrnf37rV3tPjXx3wyFof/5vv71iuTN8Td+2+qE33aem7+ijXpbmpn37r9zzAO90VDys9hfkVz3/x1VNQzvyptCF7/89byB3hfee6L9OiIOrc9m+jd9ujK6/3HP9fJzs3bCn9MZ/lRvvbCU+atwy+8Hx0JaZL7e9Yh7q2v85+od10Nby+drdn5Y+t3UFs9Qy2SLuHPzaW/vg7PeczGvC+cDBs8eXX1+2/v6pgsuP/n7nHRMzuz9oeWzdue8sl/9Umje5/4m+viPP2vpfHGLe+PKF1p3mCuGvQ6/KK5RDIxv8vZdfGti9bHntO7PON/ddvTvaltxbby55e3vHvdoBcuvR3H3j/YZ734v+cnLlQOPbF5Y+/ZeO0zfGf7B18Nll5650XFs7EMrLGzo7J35YfFXsOtQ7p6/Jue/w2nMXZoNrT3/1mmdvxVz+3J6L3ZetS5dfLXqIfPDpi9L9r++zJbtXv/vcj8GeD+sLIm0bLn2RG99f23je+J5U8NYP837x7l1Y7bm5JTnanIMn8nJy/gcqPgKj

View File

@@ -1 +1 @@
eNptU29QFGUYx5wIUyY/ZMWHar0ZKp3bu13uuD9gxHHkpUgccBkmgu/tvtwtt7e77b6LdxglSM2U0swOqEzK9MfjLq9LITGaTPtQFs1kMjZDkijqJHxIbGrSGG2g987DZHQ/vfv8+T3P8/s9T1usCcoKJwoLEpyAoAwYhH8UrS0mw1dVqKD2aBAiv8hG3BXVnv2qzI3m+hGSlAKjEUicAQjIL4sSxxgYMWhsoo1BqCjAB5WIV2TDo7u26oIgVI/EABQUXQFBU3lmPaGbC8KWjVt1sshD/NKpCpR12MuIuBMBJU0eyPNEEBKAaMQQBPCKKiK8EMiKrmVTEkhkIZ8MZHigspA0kX7ABVQyD9ehTJQ1CYdgUMKDIVVOVqEMVEvMDwGLxz6fsTTiFxWk9d81yiHAMFBCJBQYkeUEn/apr5mT9AQLG3iAYBz3KMAUV1o8AKFEAp5rgtFbWVofkCSeY0DSb2xURCGRnolEYQne7Y4nRycxIwLSBh1zfRjdYcy8gFs2WQxUX4hUEOAEHnNH8gC3FJVS/qN3OiTABDAOmVZVi95KPnhnjKhoveWAqaieBwlkxq/1AjloMR++0y6rAuKCUIs53XeXSzv/L2cy0LTB2j8PWAkLjNbbAHgF9t8m+XZKHGtlIikLSdGD86AhksMkI+IK2ofUwTkCeSj4kF/bT5vtH8tQkfCywu1RnIZUpS2CxYI/DsXS+/VRRdmc1DsjpVg27ZjHr+oJ2kpUMIhILglBmwtMeQUmK+Eq9ySc6SKee6rU75GBoDRgpZ6f24oY41eFAGTjznvuQzx9RCTHal/hdz1F01V0CWdRnSG6zOYoAeH1DkuTy30kRDK8qLIkwhcIydSwIaSNEnZLvs1szwdsvt1M51tsXjND2/OslB2Txlqphv1NHNDitIEmfKLo4+Eh52rSCRg/JKtTlGix0g0vOsrXOBM1ZJXoFZFCeoBPiwiiAKPVUMYqaPFUabzXMozi9CrHBm3Axpqphgav18bCPAtgAFmC12WOntvjR5JHkbr0ViyBjE0nFmx5ckdWRupbuK6yPnC2eMnsynjdX5fazTlouneYHlpdWJB5bnPOxYnOlpffz/3ldG7+8vGWM+c2V7SN1y/qoZv1/T9P9B2fGWl5bebPy+oXN29M28fEolVTT5y3VtR80PFs+coqNRu0bo8skuKVJ/tBa09obY13V+31gTHD9h3BOmFk75bfp25kLk60d5zp+zc8s8f/d0F4svLSu5mvPGObLhe2+bhFjTmbu2qvhk+Nf5NVFwKmQ9J7npUvfTLYN1kYr11/32JzieRqLjvw9pVG+Y/fBtx9F648UtSara3qLs593HH14sPktoUdKzqGfW8NudZ8P3Ry49kHm05X9zzd3LXkCHt9afY+hy6WuWldUceFLxPa3mVjnQ7X8Gf7iJFO7uLyE7ujv1JTEzX6YepRPel2vX68+c099ffH17zx2NABavcLVx7a/cB3X7efLnvqYFbZTyuWdU+e6vLtHPMVdg98+87la58X/3D9OXT4WO0MZnh2dmHG0WulN/9ZkJHxH+xyZNQ=
eNptVH1QFGUYhxzDRi3I0rLUnQOyyXuP3dvz4BgtTzBkEA7hciATfG/3vbuVvX2X/bjghBoVZgJMW5qxKSYy77ijG0IuFC0/pkadMZWZJv8orBynj2HUP0qzHM2il09ldP/afZ/n+f2e5/d73t0eCyJFFbCU3CNIGlIgp5EP1dgeU1CtjlStKRpAmh/zkVJXuTusK8JQpl/TZDU3KwvKggVKml/BssBZOBzICjJZAaSq0IfUiAfz9UNtW00BWFet4RokqaZchrbazKbJFFPuxq0mBYvIlGvSVaSYzCYOkyYkjRy4kShSAURBagsppqAH6xrlQVBRTY2bCAbmkUjSOBHqPAIs8EOhRgdWQkCzdDaB0lBAJvNoukLwaQvdGPMjyJNhLyalRfxY1YzEfQPshxyHZA0gicO8IPmMT30hQTZTPPKKUENx0p6ExhQy4jUIyQCKQhD11wFVg4IkkrmAJgQQadX4pMTlri4o3LCmJDoOavRBWRYFDo6WZ21RsdQzMS3Q6mV0fzg+qgkgQkmaccg52WZWaT2xQ6Joi81hofvupRYh6Tgqj8WP3BuQIVdDcMCE1UZ0vLj33hysGl3FkHOVT4OECuc3uqASsNumTano0uigRiyv9H66ieBdOtbCWC05iWnAar3EGV1eKKooMeXBVEmcGMkC2g5o5tA0aKQp9YDDhMH4mO6dFFBEkk/zG2GGdXQrSJXJBqMdUVKm6er2CPESnTsdm1i7fa6iyU3YGcknrhrH3H7dTNF2qhgqFCFeTjH2XJbNteVQBcXunrwJEvcDXUq4FSipXuLUmsmliXF+XapBfDzvgesSn7hZQOCNo+S9mmaKvF5vbQUKWiX7entdTijEhoLlzs/v6oIVH5SE0BjtaN1QBuuws8t51gOQx8sDmyMnGzgcVgZ4rNYc3pbDZNt4ezgoQCPOWBjKh7FPRPs5L+Ag50dgXBojll9Z4iwuzOupAGXYgzUVuKHPiEhYQtFypBA3jDgnYp0n66+gaN7LoMxZaRxwMBxrYzia53mbjeOsYDVZm0mZpmSIjN6dsd/ANmKFQo5OJbctaZuVNPbM4A1nzY+r5jSPtMbPVh1t6y9Z+EfmO+cfLnseFT32kye6eIj9q0JZ8YE++NnIse8FqmvptuM3b2RoTpw4+eLAK2/82dXx7dlrhzF+aeX80PBx+5Lfd33E5H2Y8qr7ibDwCPtspnnuvPDuLRXMiSrzTHPi0ecqL8zf6Gt4b+Cfhjc7Ftxa6a2YvSzhufXvdfv6yzd/3lTUkrJQmPfVly5r+p32XXuNNG/hby2uO4ODm5uzv7g+6/y+1rLXryWX3Fl7sNBYtWjw0rn3q2K2/Nuwdgl/Ib/UdyO5c1WBa757755Oai5dr2+mbqd6n3p8x3B/U//w38sDKSlPdxfO7sx0preAupnh1NP5zRdTnvHSja2dPyxa8e5q1zeJdZc6Th1IpDYUFPVsDLLmK/7XZgxc6V56pOrYiau754TPlL7N76P3rBWXFaf/Up16OTPtu6bMWKp13VXfmdmhxe0bXlhUe/J69HSf/WA4vvjr7gW9/z35K27I4EZ87Tcah48frhohqo+MzEhKOed+i30oKel/M3dwnw==

View File

@@ -1 +1 @@
eNqdVWtsU1Uc3+SDssSoUUNUkMvEaHD39t7e29eWGreWsSmjZS2w8rCcnnva3vW+eh9dO/CD0xgRFS8xSqKMx0qrdQ4IExA3o6JEIxoFNRlGo4mPmPhAIxFExXO7TrbAJ++H3p7z/5//4/f7/84dKOeQpguKXD8syAbSADTwQrcGyhrKmkg3Hi5JyEgrfDEcikSHTE2YWJQ2DFVvdjiAKlCKimQgUFCRHDnGAdPAcOD/qoiqYYoJhS9MPLahUUK6DlJIb2wm1mxohApOJRt40agKMEMAQgMyr0iEbEoJpDU2EY2aIiLbbup4/cA6vCMpPBLtrZRqkCzlIg1TSyi2r4x3GfzWDQ0BCS+SQNQR3jCQpOKWsKMdi6boB8ppBHjc8JZiWtENa2RmC3sBhAhHRzJUeEFOWS+n+gW1ieBRUgQGquC6ZVQFyKpkEFJJIAo5VJo8Ze0DqioKENh2R6+uyMO1PkmjoKJLzRW7NxKjIhvWaAgX0drpCBcw1jLBUJyXovflSd0Agixi8EgR4HpKatX+2nSDCmAGByFrPFqlycMj030U3drTBWAoMiMk0GDa2gM0yc0dmL6vmbIhSMgqB8KXpqsZL6ZjKYahPPtnBNYLMrT2VGk4NOMwMrQCCRUcw9pFj0zhIyI5ZaStIYb1vqAhXcWTgx4q4WOGqQ8UMRfo+Lvl2gjtDt03ReKXdXOKQcyLNR5Nm00E4yFC0CCctJMjGK6ZdTazLLGkKzocqKWJXpaG/VE8fHoSU7F4ivYyTJtyBvGVwGUJH7cJx93Y5eM5JVFeVXRE1qqyhnvI7kntkJ3BA5PTRSpaCshCfzWtNV5lvq8/38dDk+fTuT6J9vVzrJBAJkyO1o6ommKnwQWRkm4NOX3OkZplCvsK7pUmGZqkmSN5Eg86EgVJwHhWf2sC1q2ii6bpw5c6GEoGYamXObr6vD7dQ0MSJs3OfTEM5/P5xi7vNBWKxS4+j+fITC8dTa+GcUr64UsdaiF20/pwfsqbFHhrYiFexHm3h3ZBt8uFmKTXyXsTTg4kII+c0OWBnkTyVSx+AeIoNpmqohmkjiC+rYyCNdEkgbytMz/LuFg37rSFEGQomjyKmImgYvegtxCqhkQF8HsD7WQAwDQiI9X5s8rB2LLWrs5AJYKLDChKRkBbT9XPisdhMp6Q/HRHpA/mvWpvFHa3i+YyJpFd7PH0rHAFdfeSju5YXI9qHUIiLrRDkvE4fYzH5XKxJEPRFEMxZNCk4tlsXvMtWZ7N0HGxk5MKS3v41tiSsHul2C73JpS8V2hzxnrFMMoloRhd1RvvRRSr9MhdIOsNLA93t6+iY7Ivn08AaXkgkAyJrbgbYKT9jhYCz6aA8fXXFEJihZC2PlzN9JQ+Wgi+ioGfmnkbthAd+KIPyWKhhYjYYCL8BhKKCAbyL1NkNPE0xsDMCbyfW8kqYQlkc21L+6MhxpmRcj1UXBJXOQOY1tXBpaxB9QfYjmhvbBoINMaBruHgpjlvdQovlv4/qzrYQ04XPBlSJ79oZVnRZSGZLEWQhgVkVaComDy+2DVUwpx3t8asUS/P0ckk4GiIEAehm2zDV+ZUtP+uh6L9VSgDEc9YDloH0qy/sZnj2MYWQgJ+rxvLqfrde7Bkz6Sceqc+PH/zVXXVZ5b41JvLjtLXBn851/D+98Xegd1t8kfXv3DVplva7tS2dyXUsZHx1qcrF558viSFN/wKBlO5XN91cO2KsfVrDw7wKVO7MHqqaccHv60ZEz1bzvz00rl15IlDN95tHPkuq3715/UL561jN23/9J6GD+rvu3KAKJY2/0it+6YwePXOow3GPtczW5+LDYZ/+Wzb8ZNN87eTi2YPzvnRn3h8/xenH3myNDD//n2FjdHm0/dzb2w+ecexw4F5N1BjqwYfXnDrjl3BwPpHTv2QaJj7yficXdt+eOXWM38LW48da1573trx++qbvv38xIOn022VO0Y37jz/6P4tYOifeKc8L/fMvW856AV01zXZvzYenT33pgWz/daBs/f2xOpn7/n7G2Jtw0Sbmftw6L3ks7ftXTz3YKT0asemYwt/PjS4864Muln97MWv3z56+x8fz8fQXbgwq27pE9edBVfU1f0LdzdoqQ==
eNqdVXlsVNUaLzQIUZOnL/G5gV5G8OVp78zdZm0mtp12Smk7085MKYVoOXPumc5l7ta7TGeK+CLU5UUN3qYxkryEpe2M1AI2rYKFGtG4gGiCC6EaMS4Ed1/Ce7iDZ6ZTaQN/vfvHzD3323+/7/vOlnwaabqgyAtGBdlAGoAGPujWlryGuk2kG305CRlJhR9qCUdjg6YmTN+VNAxV9zkcQBXsiopkINihIjnStAMmgeHA76qIim6G4gqfnc5ssklI10EX0m2+9ZtsUMGRZMPms6kCTBGA0IDMKxIhm1IcabYKm6aICEtNHZ8231thkxQeifhDl2qQrN1JGqYWV7CebmgISDZfAog6qrAZSFJxBViKrSk7tTmfRIDH5W0bSiq6Ye2bn/B+ACHCHpEMFV6Qu6y9Xb2CWkHwKCECA43gNGVUhMMaSSGkkkAU0ig3Y2U9B1RVFCAoyB0bdUUeLZVFGlkVXS4eKVRDYgxkw5oI4ySqGxwtWYysTNB2F22nn8uQugEEWcRQkSLA+eTUovzQXIEKYAo7IUusWbkZ431zdRTdGm4GMByd5xJoMGkNA01yceNzv2umbAgSsvKBlsvDlYSXwrF2mrF7xuY51rMytIaLJByYZ4wMLUtCBfuwdlH7ZvERkdxlJK1BmqGe0ZCu4j5BW3PYzDD1LUOYC3T8zXypYXaHG2dJPF1241At5sWaiiXNCoJyEc1AIxiKcRK0y8eyPs5J1DfHRgOlMLEr0jAWw82mJzAVdbO052HSlFOIHwlckfCpAuG4mkL6uDFJlFEVHZGlrKzRtWRkZlLIhtrxme4iFa0LyEJvMaw1VWS+pzfTw0OT55PpHony9nKsEEcmTEyUTFRNKYTBCZGSjsHxuPeVJLPYj+BaKZKmSIqezJC4z5EoSALGs/hbGlfdGnJSFHXwcgVDSSE82HmOKj4vzdXQkIRJK8S+5Ibzer2Hr6w064rFKl63e3K+lo7mZkMzkn7wcoWSi92UPpqZ1SYF3ppegQ+dLpZ1cpDj4jDOMF4WxIsviEpAlmHdHs+LePIFiL0UyFQVzSB1BPFuMrLWdIUEMoU587O0k3XhSisJQYaiyaOoGa9VCjXolYSqIVEB/H6YICGASUTO9J+Vr+0IVTc3BEaiOMmAoqQE1P/hgps6O2GiMy75s/VaeyYRDMqNQTWaFnpqBb7V093U4wm2tm2sFzrjmmK2dToTTrWDpN0czeBkGSdJ2yk7nlIykIRM1mzqcbbQNQ0ybBfYVLfQ3hRKhRqYWF0rC72U2KyC1WGvN9YekutWoTrV69WgnA5G9HgrQJH6SMToMatbYyiaashsVJyt7cy6auhpCTe2grRKi71rA5ybZwolAiPpd1QSuGEFDLq/NDYkHhuyMDRuHzU7NJUEXwTGb5+/IiuJVXjXh2UxW0lECwgj/A8kFBUM5A8pMpoewMCYaYH3h1J07cZYPO10rWpLsiE+HKrrCNXx3XxNZF04jbhwd2e0c218DdMA5yDjoV0kVQLHRXGeYmteSv3/zOqFteTcLUCG1ZlLLS8ruiwkErko0vBUWSNQVEweb3sN5QJBMlLdYU14achydJx1JRKUh6dpsgbv0Vlvf+6MocJVkQcibrw0tMaTrN/m4zjWVklIwO9x4RkrXn0P5gqNKne9tuDB2x9bUlZ8yh/vp5VXqOse+vG3q9/Sq1YM/BWlFr+/q+m7qrY2a8zxrz1w/flFNyy/sOlMX//OyJ7Gh3/9furwoXO0rW91tV6TfftJ95rm28a/PSMcuG/42QPfgnOTnxw9++rmQ/cN3Lqt6vOd1EfL1N9am18URhcOBH98unLDkiPOjtMrv2Leerdu8S13VC3qQN3MDvudpyb3bj/eb3SvORnU/sP9/Yfrl/ctPaS8+cyi+0/8+/hnO1aXnx6/OvmAYOvzDdY8xPwwXJ+zXi//InjHl/b05NLyxeibjiW59cPv/O+9M+FjsZPb995zrnFqYN0vk+rLh08c+WBwbGIw/+iT2+Td5+9a8fw73N+u4Xbs3Ar6/5s2Tn1a9h4b2Nr0BHfup4c7lpdtjz0wcd2xm/dXX3sMbBq94d3Hq7hbfu55av3p309F2i+O3V52dsP1tWDZkqP7zy77cGnf4vPtH6HX7t3w9ZHvVj6ycOVB9aoFK1Nt/+z9+i+37frH6j13qyePfn/h4+3hE26M9cWL5WXjb4CnLiwsK/sDSUeFGQ==

View File

@@ -1 +1 @@
eNptU3tsE3Uc3wRFGLKpOIcQuTTDsLhre2vXbSUIpYPNwVjdinW8xq93v/Zuvd7d7q6jHS66Bw8zEjglEubGa31R92oQpkNChCwBo5g4CA4nM8BgoCQoII9g8NeyIQTur99935/P5/utD1VDUWJ4LrGd4WQoAlJGP5JSHxJhlQdKcmPQDWWap/yWkjJrm0dkBmbRsixIRo0GCIwacDIt8gJDqknerakmNG4oScAJJb+dp3wDO9ap3MBbIfMuyEkqI0Zos/SZmGosCFlWrFOJPAvRS+WRoKhCXpJHk3ByzGQTGRliAJNoXpQxgYduDNh5j4zZIRAlVe2qWDGegmwsmGSBh4K4DqcB4/LgWaiXVqfNiZWUoVtA4GSPGOukVWtrQzQEFIJ+LiHFT/OSrESfgtMFSBIKMg45kqcYzql0OGsYIROjoIMFMoygOTkY50uJuCAUcMAy1TD4MEvpBoLAMiSI+TWVEs+1j+LCZZ8An3ZHYvBxxAonKz2msTk0Fh9in0Mj6wxqbbcXl2TAcCziD2cBGikoxP2HHncIgHShOvioskrwYXLn4zG8pASKAVlS9kRJIJK0EgCi26Df/7hd9HAy44ZKyGx5ut2o8/92OjVBqHOiTxSWfBypBByAlWD0EcmPUiJIKx2uNeBaoueJ0lAWfTjJow7KHm3nGIEs5JwyrbQR2bqwCCUBLSxsCKI02SPV+5FY8IfjodEd21uyeEzqzf58JJty2Ep7MjEiByshZSy2JBihN+qyjHodVlBsbTePNrE+U6WoVQSc5EBKLRzbihBJezgXpCLmZ+5DZPSQcIZSvkXvCi2xoDTHvIQp9HIOU3nVkvwam6m0Sqo84MVJlvdQuIyuEOJxsF5ZGcD0kIR2u94BCSo3R59DGewIuJ7IgiBP58jNJtuqGaBECDWBOXneycIu8yLcDEga4mVxSpRQfvlSU/G75vYP8FLezssSbgVOxc/xHAyWQRGpoETirdFeizCI0ktN5cpXuZRe63BQlCFXn62zG/LwBWhdxuh5BN8fO4r4tdchCURk6ku8PrPpxYT4Ny5fKeF/1U7uOzJpz2J10VbDsUl3vgYf1T23SjPOmFzH9h9MbbpZWLna8ueh5KLBa+lba2ruz7060HdymfqKMDg02NJ//87N4VsTh2rfPHvst89Gzgxvv3HypC71tciGxNa1R8uY8afDqcdb1n4yp2Xl8NmMi1N6Nl5LO5/B87d7uv7Z+G/rjU6YlS6d7rn8esGxoeFrw+HBgqmN4ZT5BakNOwgTnTtdn5b/ZXNlb/I8C37wzOcTssEbmGXCW9Nqi7fMKPUnsvkj5ZbiuZlVV9V7V6eszZ57vf7nyxMvdYc3vpM0+3a1oj+adLf5nFhU13t+Vlh38FLTlWnpJd7ozEad59DRWztnTV54dv/8BtXLTfypqP70eHNpRX9DN3C9vWZqrhT8rmth+oHCjoyWFMbWXztj0ZwjjS/tDrxA0sLFfTeya4+betMK69btXlPeza5PSqCydnZ+vGukfmVq3YJw74aMmb90XOjyfT+F6HP9/eFfrfYt09eX2ye9OtJ878d5EanSey+5+73C509tjxoLMpYC0ybbJiI0f/rh4MA384YOl124Oztt24llU/bNWZ4q/rH+/E+bb9pO7Ct63xfIyzMWBX5vpJd1bNthsyx/peLTXa3NtnYK0u1t2yqW16sou3VFEpL2wYNxCaGA84vLzyUk/AexLZt7
eNptVGtMFFcUxtCSKlqosRHtD4bVPmKZ3ZndZdndii0siiiPhd2UajV4d+ayM7A7M8zcQYFaK7aagFoHbNWaGpF1wQ0iBEWRWpVq1dZXGm3VWLRNg9EIplKD9VF7QVCJzq+Ze875vnO+79ypqC+BssKLwqhGXkBQBgzCH4pWUS/DYhUq6POgHyJOZAPObJe7TpX5i29yCEmK3WAAEq8HAuJkUeIZPSP6DSW0wQ8VBXihEvCIbOnFL8t1frAkH4lFUFB0dpoymuN1wyk6+8flOln0QZ1dpypQ1sXrGBE3ISB8kCfzCBKAUDhRRoQkQj8BPKKKCA8EsqJbuhDjiCz04VTGB1QWkiaSA3yRShoxCWWiEjEcgn4Jz4RUGXNQemppPQcBiwfuCnstwIkK0lqeG2IXYBgoIRIKjMjyglfb6S3jpXiChQU+gGAItyjAQZW0UBGEEgl8fAlsXUIqCPCCD89GIt4Pcavajqxsd35a+oczs4KPQbVmIEk+ngED5YZCRRQahyYmUakEnw+HBnQhsVgC0vYmD7dpcJZiSwSC0ptteqr5WWofwB0HpcF4x7MBCTBFGIccslsLPi5uejZHVLTtmYDJdo2ABDLDaduB7LeYR0wpq8LAoFq9w/k83VDwKZ1JTxv11pYRwEqpwGjbC4BPgS1PPHhSEsJGmkjKQlL03hHQEMmlJCNiBq2WahoW0AcFL+K0OtpsapChIuEthiuCuAypSkUAewlPHq8fWr1t2XOHN2F1IBW7qh1wc2o8QVmITCATmDiBoC12k8meYCLSMt2NjiES9wtdanHLQFAKsFMzh5emnuFUoQiyIccL1yU0dLtIntW+w+/5FO12leQIljkSci2WzB7RmZPqRIUZ7U91EWUvEPiyQdqBuotTTTaLKYE1eUjoKWBJs82aSNpsRpr0GI1W1mylE82spa6EB1qI1tOEVxS9PrjLMYt0AIaDpGtQGq0+dV5Wcma6o/EjMlf0iEgh3cCrBQRRgEEXlLEbWojxiSqL11+GQVyemzxP222jGZOZZm3QVkCbPFYjmYLXZlimJzIEBu7O4K9gObZCxkdHR/XGVr0SNviEs9XZ4jkq+ujBn45P1Hedb+uYsP4N4u/sadOkDyq2flWTMb71RMbGSa3jzn+6+K/dkSHrvC39fZv7p15pv7Cvc82lnsSJefce9N03xAp38vqLux72HuxtkqZbb/m2UK6GyXbUNTqqNNpdN59QutZcvnp1TF5UXMK+S033oqN4tb39sr9ts/d0xzcn1p18/Urv6bUr9/7bI0/oJcekuF/ZmlR1+Ig1pcE6yR2TWlU9Z/+r7zvJX/unRBV2nkveGpEx/l71jOKYKeqipJSfKwpv/BGzqmD5/OQFG9APaest46gv6sq4ms/kosjc0bEnNvxTeYx21hx4Se28NN3ZPPVuau3ps/a3y06N3z92T7yzdvaRmzF7zjgSDlMhw/VZF5rbVnE5i7pTF76VU7jAEhvdek29Wbb5zLhw7vsVtPtC39m71V0BInNs4FBPrrnfEee+s+74Eu5r3y0uJy2Ljasua+iG22I+STf0RRxq+bPgv7zMa8U3NnWr6Ya1+yoDrvUb61aWvzP27NV1jm+THngqQ7MTujOWnfpx9c7fr3csLpze8/IKY3tnXEL53JTeZOeDTbeL8pyTI279cvu3yvcm8nPaEmcQO97Vt3Wdr+0tzo+MqMq4f+xQd2Pkyu5la2qS2srL7A9HhYU9ehQepj6sCEaEh4X9D/5zr+U=

View File

@@ -1 +1 @@
eNqdVX1QFOcZP9EkpU1rRDM41Oh6UTDIHrt3e19cz3ocCIzCAXfgSWKve7vv3a3sF7t7B4ehMWiNtmPsoqnVdho/4I4wfEigiDE4dqqpydBMJtEOpB+2aWNorNJqM62Jgb53HBVG/+r+cXvv+z7v73me3+95nm2NR4AkMwK/oJvhFSCRlAIXstoal0BDGMjKnhgHlJBAt1e63J5TYYkZzw0piigX5OeTIqMTRMCTjI4SuPwInk+FSCUf/hdZkIRp9wt0dPzwTi0HZJkMAllbgDy7U0sJ0BWvwIV2q8QoACEROSRICiIKgENIvxBWED8gJVmbh2glgQUJy7AMJG3LdrjDCTRgE1tBUUENOiOqhCW/kLDl4S4O37IiAZKDiwDJygBuKIATYXLQMIGF6cwt8RAgaZj6wfaQICtq7/xk+kiKAhAd8JRAM3xQ7Qk2M2IeQoMASyqgC2bAgyRValc9ACJKskwExGZuqadJUWQZikyc5++QBb47lTGqREXw4HFXIjcU8sMr6qALBuEoy6+MQtZ5BNcRFh12ugmVFZLhWUgjypIwnpiYPD8390AkqXoIgqYUVWMzl3vn2giy2lFOUi73PEhSokJqBylxJmJg7r4U5hWGA2rcWfmgu9ThfXcGHY7rzP3zgOUoT6kdSRnOzLsMFCmKUgLEUE9gvbP8sIAPKiH1FE5YOiUgi7CGwO4YvKaE5dZ2qAUYvRxPFdNJ1+ZZEf+oyWwvgrqoI55QOA/BzYiLUhA9picQnCgw6AsIK1JS7ul2ptx4HipDv0cieTkApSielT1OhcJ8PaC7nA8VfCQhOMwmET6sUxQ0iYIM0FRUarcXrZ7pIrSsaGCmulBBCpI805x0q44klW9sbmqkqTBNhyKNHGZtJgyMH4SpwGDqiigJCTcwIJST1Xaj0WLoTR3Nkt8Fk8VQHEMx/I0mFFY6YBmOgYQmf1O9nLiLYdjwgwaKUA9g18cJLPmcn2shAQ6qlnB+H4awWq1vPtxoFsoATaxm4xvzrWQwNxpcz8nDDxqkIE5icnfTrDXK0Or4Grjw4Sag12MGvQGzGHHaBEgDbcIIU4DETAHcSBjOwu5nKIiSUFOEYwWVAQUHlxJVx/M4sinRaHYDbjSYYKY2hOEpNkwDd9hfJCRykG2IKAFWIOk+5ybUSVIhgLqTBajGi7ZVOMrLnF1uGKRTEOoZ0PbhgoU+HxXw+Tl7hc9fy9T5S32F1RhR66T1wAvcmLs+Uu4p21Fo2rqNrm6y8GwEgqC4WW/FzUajyYriOkyH63AUc7DeUl2DvnSrt4hqCoU8ir6q3IeZiRqm1gkUhfYKdbUyXlYbDpbiUV0jHthSEyqudkTDleEStqS42FVY1iDpCjlno4Eobja5OaJGCMJsSCVkz7chsDgZyK891SIobBE00SDGAmy2QWwIneTArps/Dm1IKZz5Lp6N2hB3gkwA3yQH3HB42ysEHowfhhyEIwxtr6PqTMXVjUa+wRgtD9Nuk6NeEvVNIu4rM4UcZEUVtQVgeKPZUDSXBINRj2IpHqCYlmQV3g/9/4xqyIvO7XjUJc583OK8IPNMIBBzAwk2kNpFsUKYhpNdAjGoebVjmzpooQksEAAWykpSlgCg0UI4M2fR/jcf2hOfhTjJwhqLUOpAyGDXFhCEQWtDONJuMcF2Sn4CX4wlapIPXkpzrfrhVzTJZyHb5hJ+hz1+afLusj35IxbH+wh67Nl+cuMZ8saUxj7qvHB5sHtT7Dm1N3PZl7d2r4j/7fR36F2uyWBbs2vprtFXxrP07lPHYp99cOPgq6vOD37x2NkD525tn75564aTO3IzJ2jbqX789pKhf9P/mjh29L3Bn195ZMPl7IzctZ9+dPvejoazddvzHjuRvvYuPdBTcWTgqufc8Nc/uPvR88aCG396vGxor2+pZlf8z1vsUtkLrxtH71Adq5cMHu7x6RbQ+H88b1de3K+t//GrRUu9+18aEpYPZncWrqlat79uo+5E4/nRr02tPL1k4/LrE1rHx+uysnqJd48MtE7+7LeV4SWVxy8Ob7721it/Z3sQW0lb5r2B7LGMTRPq5h+ktUifdcZ6CxYP5hx98rl/LD7w4s19Z35xSx471Lq21nV89V8fufbWcfpK//dbJsZWejd8K6NUc+Cp33z1PDnx1PpLhgVu2+Qd4vrKHWc6Lncezry3IcOUvjT3R6aCoWDa2F9ettSs9xQQT144VJjz3nefX7O4M/26dmxfttp7dk/p+23W3Nbs6vSekkPIqsJFR0w3F33+RIs1svugQlxZzvx+2Us98RVZb+ZUBW83ZP7q0WDfNbz6anpsRcMnX3pf+F763sVDB8j1p6aq+1bYXjOrlWnxJ/btNX3zyifpFyZX7/rDt4++8+HyY7XmrMjF9GHvSnH8mds5//zUv8y47vOtsbs1U4/y7raTd27n/vKow+2Ovv5uTl/gi+m4sPobd6xD01XZTz9zaGSd8afv9Hf8RHv819s3uKbSNJrp6YWakrrXrmYs0mj+C91Z35g=
eNqdVXtQE3cexwd3Fp8z0mp91DTYuaGyYTebhBBEB0JA5CmJAvYc3Oz+kqzJPtxHgOATtVpRe+uJelrbUZA4FLCeUK1o1XpqK56jrcphxbNnLa3a4mOQ01O8X2I4YfSv25k8dn/fx+f7+Xy/3y33e4Eg0hw7oI5mJSAQpARvRKXcL4CFMhCllTUMkFwcVZ2bY7VVyQLd9q5LknjRFBtL8LSG4wFL0BqSY2K9WCzpIqRY+J/3gGCYajtHlbbxZWoGiCLhBKLa9F6ZmuRgJlZSm9T5Ai0BFaESXZwgqXgOMCrCzsmSyg4IQVTHqAXOA6CdLAJBvXhejJrhKOCBD5y8hOAaPSLJgp2DdqIkAIJRmxyERwSL/S5AULCsD6tdnCgpDf2B7iVIEkB/wJIcRbNOpd7po/kYFQUcHkICtRAeC4I0KLVuAHiE8NBeUPPcS/mM4HkPTRKB89gFIsfWhcpBpFIevHxcG8COwNpZSWnMgSCS0mNzSyGjrArTGDAN9lkJIkoEzXogRYiHgHhq+OB5c98DniDdMAgSUkupee7c0NeGE5XdWQSZY+0XkhBIl7KbEBiDbn/f54LMSjQDFL859+V0ocMX6XANptUY9/ULLJaypLI7SPmBfs5AEkoRkoMxlJ1oQy8/HsA6JZdShWHaPQIQedgfYEUNdJNksbwaagHOfu0PNcqunIxeEa+FjalOgbooR2wuOUaFGlRZhKDSolq9CjOYcNyk16vSsmx15lAa2ytl2GcTCFZ0QCksvbL7SZfMugFVa36l4EcCgsNqAvBhGyKghOdEgIRQKXUFSN7zCUHSU/Y/7y6EE5wES/uCaZUjQeWLfSXFFClTlMtbzKDxPh1O24FMOhpDLrzABdJAQAgjKtWYNl7bEDrqJb8WFosiGIqg2KESRIBceGiGhoQGv0NzCn31KIoefNlA4twATrRfhwavL/taCICBqgWSvwiji4+PP/xqo95QODSJj9Mf6m8lgr5oMC0jHnzZIBRiFyrWlfRaIzSltE2GN0V2vcGoi3Po4wzxFEUSqENH2YFdq9WSlFZnB9QXcNBpEkYJqMnDpYGIgIRLSSpV2mIYoiQwaIk4pscNsNIEFc2SHpkCVtmewgVqEBNUvAA8HEHtNaciZoJ0AcQabEDFn1KYnZSVbq61QpBmjnPTYOOVAWOLikhHkZ1JTPXN0ZuLc7LF/NI4S7bsJswsmeop8aVnWniLU2P1YLIXnSkXoq4sBIvTYdo4oxHXI5gG1cAxRdLlTAnLTVvAMVq7MX1uqY0pQnNJa1pWBp6Ke6y0xpuRnikasyyOvDxZnwL4gplsbo6DNWcnZZC2DLMnUwDmghRNYUlSqV7rnTXbrc9cSBZl0njSAiHdLhDOTCLeNzfXkFIISyQkV2Jsggp2LA1JTwzNDQLnBglMTZwJ7Z2aBBUVJCZR039HJqhmwCWfw3pKE1TWAMMA/hIMsMJ9nZjNsaBtEyRG9tJUYkmqRqPnM7xakSWtC0k+2UX7sBm+uZTb4mSsKTZu4RxristamGec1YcZFOJBQ+QYUJ0x2JovoP+fqD4vQPquASSHf/4287OcyNIOR40VCHCqlFrSw8kUXPcCqIGNkJdUqDTGYySuw4ABcziMuN1IIclwkfZG+9/SqA68K/yEBzael1T2u/BEtUmnw9UJKoZINBrgjAXfectrAo3KOk8OvDOpYkhY8BoEP8+erbN9v3Z82qjFrfmRD6ZOefivk7cXt4xb9Jph8N7jr80aVf/buKzT7bPNs/+efcrx9lHlny1bVrLfbNxs/NOYebeExx9Z5z2VB5z/2/mryTfKnNNG/nLnt59+6vF3c3fbbz79w69LlnTd4HvugGc/nGiPfXhf/m7ziGtPNlz7UkYSO97eaamZd2Wo6US5796TtpM3H2zf/umn276egFZ+/ut8Z37LbXw8OOHsGLOl9T/nPq5KY9gfWsPDjrU+LjTmHZu+7T0jd9SirT2EGA6sVi9LNVbOuGGr2hblvkJ+i7+Zt/vp5mO+lctWdEVcvbR04tDapvmad/jI+VM2JDfvmqQ6zXC4jahSDWmua4qOj2h5ffHay+a4CGP4zXNnDgyaKKYuX69acnxgyoCLg2l3YkHnsMNR6Fvvr5xfvqfl9fqfR+ywzdn+45OOmSBq6qyn+q4LEZfLlqxpDN9hd9LvH8yPxCnHzFV3BcuksnUDjVVf/Xi2u33rlbyx4drp81O14VlTm6b4Ms4nfJOJZ1pGzK5f+uzRluQtJbUXpz005nd1NZ6N+ktFdWXhB5Gyxt28q3jf/RsXHnZkYavemGk+FbbzMlLxhaljuH9X9/Ws5aMvblVvPD0x7vejMyJ2n+veIHdmZEab7hf80o5P1o1tkkb+fLtt7erj0Wv+XN69FnCFkWMHN308aHTymOhlK5KursubQM24NyCja3TxmXeio+p6bEzzowjH2fGbFpz+XbtL3LnIXtNJRrR8tz36H+tvnXGc3/rttJ6Sg7c+OLDibmr3I/PhisimoqEm8G7r3PWraxyTxoQdvVQ57hNPOj6k4/jlU58oEddPXhrREK+5R98/Oc3t0eyY3BD7fef1+kHbKtJzNv/13rnusRvqTw/3kmsWbSz4aE9P3I5OonNT2pWeSf8mvyqbMOyEb9SwC9MrF8WMjvnw6tLKiX9sWDneUrrmrdY3Zp2ZPNDLZjZ3+C8UDn9wqKLR8ubky3F7sh8vvDcy2NKDwoatcuwtDw8L+y+SEjoQ

View File

@@ -1 +1 @@
eNrtVUFrJEUUZvGPFIUgyPRMz0zPTNKSQwhhV9wQxSjIbmgq1a+7a9Nd1VtVnclsmINxrx7aX6AmJEvYVQ/iRRc8evAPxLv/w1fdMxuNkV32IAgODNPzXr33vvfqe18fnx+ANkLJW0+FtKAZt/jHfHF8ruFhBcY+PivAZio+ub25c1JpcflmZm1pwl6PlaJrCmGzbs5kyjMmZJeroidkok73VDz7+TwDFmP6xxcfGdDeegrS1t+7002cV856frff7Q9Wvl3nHErrbUquYiHT+ln6SJQdEkOSMwtnrbv+jpVlLjhzGHsPjJIXG0pKaDDXF/sApcdycQBPNJgS24DPzoxltjLHp5gXfv3lvABjWApfb7+3BPf5yceC1RcIg6RKpTk85QonIa1nZyX8veQZtoIzq8+rA8GVlj84bMZ4iMRqlXtb7NB1Wp+Mff+na771PFdTb6sZqKm/evubjUWpuyBTm9UnwcroxxtjtrVIhay/vCQ3ujc0xJhHsNzUp1ZXcBrj2OrnO1nVIf0J2eaWDPxBQPpBOByEwxG5vbXzjDOegcfbTPUTqbzGcr6eW+/DA15fdrPhGg2DYEjfIQVbG4xWB77vd7KhN1i9wfH8GrbNw1IZ8O60g8Z+b57Hlb+hzad4Zxo58Nut34/ogp00pH530h0HtEPxNgCvNoLDUujmXiIrCqChrPK8Q/eY5VmE8UjeCHtLRErDI1phRFHlVpRM2whkXCokPA3dsDrUcJZDVJXRQyMeQYT10xQ0Dfuu3SuvtJlGsCbKBRIY3eOlM1ZTGUkoSju7ig7Q69ItTze5XhiivZkFQ8OBvzrpjwb+vEOFRLpKDhGyPjUONq4MLqWFiIkI91HPEDrbyyFeIlc6jTiCauYQC7NwJsgE11emppG1eVSJZYDFHccOBegorhbzi9msqZYrmbr9wQRBAzZT2i4M/QABGmAap3sNw1TpfVO6tIarEiKHScgD0bS3RDKMjFUad++v0fP5P0vN2sukBr9oNT2dFz1M7ZVa4Q30nGQY+78GvZ4GrQTBv6JBwX9Bg964e0RblkUZMxnq0MgPggFL+sMhjPujyRgmwWg45uPRaMwhgX7CeD+YcBb7w0ESDCeTvbHPg+EYxjzme2NABSuYFAky1K2cwD24R1/QGr0tiQ0+ocXizwb+vN8Yd1BgHBfpLsogx53EdUaCICocICKuOK4YRuxPmW71Y8E1fL73SrXuVAhuqw163Zpt0pc1tzjVoa9bxi4jQvqJqgjTQJgkzBjhRNSSRGnS6ApujcekmYK7UWKZ2TddgmpAbAZ4yt2/c5QCkBhEJUQDXj6g6pFmDQ8tsYq0GZqYZdYueTchM6wdK/mWJftSTRt/e7RDHlTGEsNmaGT22sElAg1ADLgNdMULdiiKqsAMMXFS8qd0DgsXBrr35QeL+iE5WkKZk/tyowWL1gVsZ1xvgkPqXi5lZaMDpoWTX8cIuox299+GuPEvBxvhBAtkRUgTr10HOsfP7iunmuMbAw4ZZmvO7M7/ACbG6BE=
eNrVVk1vG0UY5uPGkV8wWiEhIa+9ttfr2iiHKFRtoVGLaqqitlqNZ1/vDtmd2c7MxnEjHyg9Iy2/oCVRUkUtRQVxgUocOfAHwoHfwjtrO6VO2lThhGXL9vs1z/v1zN7b3wSluRRvP+bCgKLM4B/93b19BXcK0Ob+XgYmkdHOhfODnULxww8SY3LdbzRozus64yapp1TELKFc1JnMGlyM5O5QRpPf9xOgEYa/f/CFBuWuxiBM+bO1rvzcfNLw6s160+8+XWUMcuOeF0xGXMTlk/guz2skglFKDezN1OWPNM9TzqjF2PhKS3GwJoWACnN5sAGQuzTlm/BIgc4xDfhmTxtqCn1vF+PCn3/sZ6A1jeH7K58twP391vu/2fBauxjMKJm6q2kqx+56lbcuH360+wliKJ8PkqJGvICsU0VaXqtDmkG/3e77XXJhffDriTGuKB5zUT74acAzTGtJ+mSNsgQWLuWzvBhidjWS0S0XQa4E3v5qatxrm6w8rCftFafv+23nY9SvtDq9lud5taTttnonKJ4vwTm/lUsN7sVZzpjTyTm/0O9ep2pS7s2g/mCtsHnuZRCxScqdoNv6ZSnAOoLGDqPO83auc1oeYGdJLGWcwtMbrrWuYHDsTfnQ27uqaJzR8pGQLrNleHbDxTLTSMbuAMcQ3EtReUg6TW/YG3UpbfqtJgPai+g51qI96nm9YdRrHZIT81hTECFeTlNd7hpVwONFBoNJDsfnaH8BbN7kJvmUCtLsdT3ief3qbZtcjfXXOFMKm/nXOw+2nfn2OH3Hq/fqncCpOVzgzAkGIY5urJ3+tjNM5TDURmLGEIKgwxQip29h1ZZ1WGzAYNfaGCjCcmgwIWzRLE9Bh1mRGp5TZZaDnG6Bq4fLbSCkPMS9VpNlA6nikCmoShJGXM+VI6wganM6ybB6y045Zi8FTUP01se9dPtVWWugiiXHpIkch8akYcEXImNHITQcVBgVao6OTqqyplLEdtvR38dVsO7KzAVNf1pzxlJt6NwG0EzmYFGGXGxyA/oI411tohBpK8fu206+jAmDDKlBpNhvJEM0FCMe28MLDS9VO8olEuhRJoymEBZ5eEfzu4gftygGhbC8CuhCK0yCJY90mCI9oHMzWCgjORahgCw3kxfePmptuIV1FetIEA4nVWItr9dtdlredPreq1l85TQWxw9KdUOlWQM76OYKa2Qalo21+R/R+7enkftZiHs3eu2FEFiuODN1P2ZzqjInUtV/ZvZjZH7O959UFOyy+U10RMpvSq+vuwz2cDiQJsv9YpMzqcTy5fAyqb57eduZzV6YUJ0gF3Y832/RUbPdhqDZ6QbQ9TvtgAWdTsBgBM0RZThijEZeuzXy293uMPCY3w4gYBEbBoBMmlHBRzi3dnE5rvZN52jYUTsbbY2/UGLwaw2/rlbCAW6gnVDnds1JGa4cMhJ2BVFhqRBxwZDf0GNjTNWM6+cTiL9vvtFZFwsEtz5zOuuZs6CnJTe3qjlnPcYsPPrOl7IgVAHBS5IibdoLz5CRVKRiGxxVlwo9BttRgpfYhq4T5AhiEkArO0JWkXPAoSFyRBRg8wGJm1Szv2WIkWQWofJZRK2TSyMywbMjKT40ZEPIcaWfmdbIV4U2RNMJCqlZMlwgUABEg90Aezg+avGsyDBCRCzB/CucxcK4hvot8fn8/D7ZXkCZkltibQYWpXPYVrhaOferB4G8MOEmVdzeKHYinIW37f/MxZZ/UdgQK5jhVPSdkTtbB2eKr9tvHGo6ffEsgDa3p/8A34xtTA==

View File

@@ -1 +1 @@
eNptU31QFGUYP7TMCRoZnWkmamA7hj8y9thlz+Ug+8BDTAyOuAMVMXlv973b5fZ21913CVJHjtLGUpu1j5mmsJLjLk8SL3EcE5uIGG1qJE0swhz+0JE0p8bJyGaQ3juBYHT/evf5+D3P8/s9T2u0EWq6qMgpnaKMoAY4hH90szWqwY0G1NFrkSBEgsKHK11uT7uhiUM5AkKqXpSXB1TRBmQkaIoqcjZOCeY10nlBqOvAD/WwV+Gbh97dZA2Cpg1ICUBZtxYRNJVvzyWsU0HYsm6TVVMkiF9WQ4eaFXs5BXcio4TJAyWJCEICEA0YggBexUCEFwJNt25ZnwBSeCglAjkJGDwkGVIAYsAg83EdiqEKEnAIBlU8GDK0RBXKRm2JChDweOyLlvSwoOjIjN81ShfgOKgiEsqcwouy3/zM/4qo5hI89EkAwRjuUYZJrsxYAEKVBJLYCCN3ssxDQFUlkQMJf16DrsidkzORqFmFd7tjidFJzIiMzKPFU33kVTZj5mXcMsPaqENNpI6AKEuYO1ICuKWImvQfn+lQARfAOOSkqmbkTvLBmTGKbnaUA87lngUJNE4wO4AWZO2HZ9o1Q0ZiEJpRZ+Xd5Sad/5djbDRtK4jPAtabZc7s8AFJh/FpkqdTYlgrhqRYkqKPzoKGSGsmOQVXMD+hDk4RKEHZjwSznbYXfqpBXcXLCl+N4DRk6K1hLBb8/lR0cr/2uVZNSb0zXIJlM094BCOXoAsIF4eIxJIQtL2IyS+y08SKck+nc7KI554qxT0akHUfVmr51FZEOcGQA5CPOe+5D7HJIyJF3uzB7w0UXRqQ/QXLlWoBVi8p4dwVqwur1wjBI00kJykGTyJ8gZBMDtuEzCGCc3gdPtpBF7KQKXSwNM+ydi/DUpwdMA6Gge2NIjBjtI0m/Iril2CXs5R0Ak6ApDtJiRktWVtRXL7S2bmGrFK8CtJJD/CbYVmRYcQNNayCGUuWxnutwQhOrypea3Y7eDvl8/FLHLCwgPGyheQyvC5T9EyPH04cRfLSQ1gCDZv6U4ysN+dbkt/cF15cFRh+Lv32k28f+8f66LMPzlu/rMxdd6q3+/XHrnrZX8yX2wd37nv/yIGMiTFrR2jvvC+vpOn9o/FvalwnLzdMtGX9PbCh5vpP1984fy06/tburIn6EIvssecH2+an/p6zq8W9q2H3Q6lSv/vrhQcufR695cusk25sPlG3tVZ+b13uoYWst+/GlZvZZ0bHRttKdzx1VYyMjGyrW9B1Nv3c2T7i2NI/Qx/1WV5q8i3qUh+pXlzdPqaODHiOdd73sNAycvqvvVWLhn4dvfgtjNT3Dj29ee617GYqVFbbeqE3p9byQIY74+Pv0syBPadN1x/B1J9XrjAyfyvOP7dnyGlX5+34MKObvmVvCRwufYYc2ZFW27r98px4Zjmz4AnWfn77xoKT3tC/++vlUO2NHwdTe8YXV5Z/wbkqtsVLvFu7LzC3s9+5mL9x+RqtYv9XS7dEeryXbMNnwo/X5K7+YE7O8MQPjfrx8fstlomJuZabLY6ysRSL5T/v1GTL
eNptVG1MFFcUXZUooj/aWpM2mjputZbK7M7sLAtLo5UuYlBZiCxksa307czbnZHZeePMGwTFJkLVVtRkaGPSxqrIsttuQaTYGENpsZZW0baoxEg3MTZ+tRqjqfUj2kgfCCrR+TXz7r3n3HvOfVMTq4CaLiFlTLOkYKgBHpMP3ayJaXCVAXX8YTQMsYiESGFBka/R0KT+2SLGqp5ltwNVsgEFixpSJd7Go7C9grWHoa6DENQjASRU9dettYZBZRlG5VDRrVks43CmWUdSrFnvrLVqSIbWLKuhQ82aZuURaULB5MAHZZkKQwpQK0kxBQLIwFQAAk23rnuPYCAByiSNl4EhQJqjRSCVG7SDEDAck0GgMAyrZB5saASfsTHrYiIEAhn2rOX5iIh0bLY9NUAr4HmoYhoqPBIkJWS2hNZIaholwKAMMIyT9hQ4pJAZL4dQpYEsVcD2SlrHQFJkMheNpTAkrZpfeQt8ZYvyShZ6ow9BzX1AVWWJB4Pl9pU6UpqHp6VxlQqfDscHNaGJUAo2D2SPtGkvrCJ2KBRjc7ptzL4nqWVAOo6qQ/GOJwMq4MsJDj1stRl9WLz3yRykm035gC8oGgUJNF40m4AWdjlHTakZyuCgZsxT+DTdcPAxHWdjHbbMtlHAepXCm01BIOuw7ZEHj0rixEiOZlw0wx4YBQ2xVkXziDCYDczeEQFlqISwaDaynPtLDeoq2WBYGyVl2NBrIsRLePxIbHjt9hQsGdmELZEc4qrZ6RONNIpxUflAowhxOsW6sjguK52lFuX7mj3DJL5nutTm04CiB4lTC0eWJsaLhlIOhbjnmesSH75ZtCSY35H3MoZd5k33ZLCOVeWoeKVRIvBVyFfqCR18rAvSQkCR1gzRDtb1z+LcLi5d4AI0DAQF2unOzKDdbgdLBxyOTMGZyWY4BVdjhQTMOGtjqRBCIRm28kGaB7wI6YfSmLGcUm92fp6n2U8vQwGEddoHQmZEQQqMFkGNuGHGeRkZAll/DUY9ufSy7FJzv5vlOScrpLNAcHOBTAf9NlmbEZkeyRAZvDtDv4H1xAqNHHWP2TyjLtky9IwTzOy6xILJGwY2x5fYvDeUlNTv0/79JvXTefv3NTm7l4YuHO/jj35xYhY7c6DzTGLD9p1J986e+qR3au3h8Zuq5asdqwsO35yy6djfV28+OHU+NLejbp3T3+B99dc3evqnT5y79IXLG4+6hfRm/5/UNqtLbEzddjrxbUaD/er9ew/CnS2r/Q1TSwLdlxK3jD0n4W1b06F5s5dD5kbt0ppdJ6fg+vfrT06/21X2X5K3smnS7+rO4lJ514xzXdePfl4/se9U0tkdf8HFiyIoeGfFb19PvrJ1fteE3hePrE/xv5zck3x59yuTLzx3PokHydX1lSnV1XmJrhW5V3q2zEntzQMTsg+9nuMoBcknuiceL9TOnR8/v/7HBb3tO2bidukitXHnJf9b14P+vqLlY09fkagD73YeStxJae8585qj9KVpsypatv90d4Jwr/gjGhye/XPKsX8+u7R5/S97WhYX7Gg7mCgRt97q+yMnt1a937r7g4sfb7m/K+/ED292XLt2e5rFMjAwzpJ7fZKTG2ux/A/oUIK0

View File

@@ -1 +1 @@
eNptVWtsFFUULgKRRxQwBDBRGDZFA3S2Mzuzr9aGlC2Fhpbdbtc+VFzv3rnTne68Oo/ttgQTSklDJMqg6A8CEbrsmlJasKQ8Skl4Ki9REyGVBCT8wMRQiCJqYoJ3t1tpA/Njd+45537n9Z0z7ek40nRBkSf1CLKBNAANfNCt9rSGmk2kGx0pCRlRhUsG/DWhLlMThpdFDUPViwoLgSrYFRXJQLBDRSqM04UwCoxC/K6KKAuTjChc6/CODTYJ6TpoRLqtiHh3gw0q2JVs4IMtIMAYAQgNyJwiEbIpRZBGgIgSRwRlKyBsmiKijJ2pI822cT2WSAqHxIyoUTVIxu4kDVOLKBlbGUtp/K8bGgISPvBA1BEWGEhScWrYMINF2b0b01EEOJz4rbzZyaiiG1bvxGT6AIQI4yMZKpwgN1oHG9sEtYDgEC8CA3XjDGSULZXVHUNIJYEoxFFq9JZ1CKiqKECQ0Rc26Yrck8uYNFpV9Ky6O5MdiesjG9YRPw6itKIw0IqrLhO0nfXYqUMJUjeAIIu4jKQIcDwpNasfHK9QAYxhEDLXUSs1erl3vI2iW/urAPTXTIAEGoxa+4Emudj+8XLNlA1BQlbaF3jWXU751B1jp2m7+/AEYL1Vhtb+bCOOTriMDK2VhArGsPZSKagoMQFZw7+Hw5APR6QSak1NC0x41KYQDJaL5jo60rzK7a5/21mmu1avCTaE9ZC2RoiEhXJI0m6Hl3Y7nU6GpO2UnbbTZJlpDzc3JzTv6urmGBUWK1iptbKeK21YHXDViuVyU0RJeISVjoYmMYDiPBRDdU3hJmRnlHq5CjR7fNWBYHkd1SB7E4kIkKp9Pt4vlhYTODozLnAlbC2jBCTQHF9Z2Rby046YFK+3hyWxzuEzlNg7ZZWMYW/zMWtCTQ3jwqNwhFQuQhfFeqjM0zvGDRHJjUbU6qJZ11ca0lU8P2hzCpfMMPX2JOYhuvJtOjdI+/xrn1J4XrIMc9IaCkXNAoJ2E35oEA7KwRI0W8Q4ihgnsboq1OPLuQk9l4KHQ3gEdR7TcNUY5dMwasoxxHX7nkv2oQzZcScz4eMpJVFCVXRE5qKyeurJ4OgGISvK+kcni1S0RiALbVm31lCW9S1tiRYOmhwXjbdIlLeNZYQIMiF/JHdF1ZSMGxwQKelWF+Nw9OY0Y7zrxrlSJE2RFH0iQeIxR6IgCbie2d/cGtOtpBMX+9izBrhfCC+8NJvtBnVqvIWGJEzYjO+nMKzX6z35fKMxKAabeN2uExOtdDQ+Gtoh6ceeNchB7KP0nsSYNSlw1nA+PoQB74q4OMR4XQxDcR7ew7p5wHo5ikFOnoGe43j1CRCjZJqpKppB6gjinW20WsMFEkhkdkwJQzsZF860mBBkKJocqjEjZUomB72YUDUkKoDr85WTPgCjiKzJ8s9KlzWsK62q8A3Uk+OJRPrV0e9FWlZ0WeD5VA3ScGOsbigqJoeXpYZSGCtY2mAd8XAsxfMR2gsgz0LoIlfiNTSG9j/tkplNmwYijj0Orf4oU2IrYlnGVkxIoMTjwm3KflU2pTK5yo3nJ4UWfTQtL/tMFrdflc9QszsetM64fG/KHGLV1YOd7T9rv+4Nbule5vhuxzHz5IVb3y86N1g3fVLw9PnOf97q7x+cueDusntL7349N3/5ga7Bb87zA6/evvZF38xPZo38JhzfffXA7XsXH6NzZy9sOjBru2G8lF9bO3kp+2LB69Pa33Du+Vi+K+zIr5i57Eqpe+BMkF2YP+cnecsP1MIlu/vLlpz+47OhqdvogdjtVNfFO7vS4oKFO2ecnD/14aNHW+mRgHvG/FvL/Wdmz7oW2LeYku7vEhZ56m+8z56vPGimH/g/Zeddv3Op5d9k387qU1dem/ZJquLPjs+PGouox7fk0JPlFw+evRQs/3t6avq2Nzs9H54Cs+LbtneMXKxILCm4vKnjx1Mts+1bChavKO4sVfovWzdsL99cWvNecsr6V/5asefLS9cvzf2lp+b+1ptrRzavwOV78mRynvbBw17wQl7ef6XuMgw=
eNptVWtsG1UWdgiPwv6gCIraCi2uQQiBrzMPj2OnhG5qO2lIY8eJQxJKa13fuc5MPK/Mw45TWtqEh4BdYKBQIfEojWO3aUiBPiClRS2IR0XDQ4BEgIVKkP2xoouAVVeobLvXjrNN1M4Py3fOud/5zjnfOTNUzGDdEFWlalxUTKxDZJKDYQ8VddxvYcN8oCBjU1D5fFu0Iz5i6eL0bYJpakZdTQ3URI+qYQWKHqTKNRm6BgnQrCH/NQmXYfJJlc9Nb9nokrFhwF5suOrWbXQhlURSTFedq01EaSd06lDhVdmpWHIS606YVDPYSbncLl2VMPGyDKy7Nq13u2SVxxJ50auZgPVwwLT0pEr8DFPHUHbVpaBkYLfLxLJGMiFWcpvyBDYVBQx5kuZ3jsV5QTVMe2Ih9b0QIUwwsYJUXlR67Vd6B0XN7eRxSoImHiOEFVwujD2WxlgDUBIzuDB7y34VapokIliy1/QZqjJeSRCYOQ1faB4r5QNINRTT3h8lJBqaa9pypMaKk/b4aA/96gAwTCgqEikakCDhU9DK9rfmGzSI0gQEVPpnF2YvT8z3UQ17tBWiaMcCSKgjwR6Fuuzz7pv/XrcUU5SxXQy2XRiuYjwfjvXQjMf/2gJgI6cge7TchjcWXMamngNIJRj2y1QBqWpaxPbXVVckEiiVSMr1uSa9ayDV2Ki0NGodGTEbEvmYv39t1t8Y6+xrEhNJXbU6E1yK03oAXeulmVq/n+EA7aE8JGcQFBCTs9ZmuTZ6dbOCukQ23S92rY2kI81MPBxjUYCSWjV4VzQQiHdFlPAaHNYCAR0pmcZ2IxmDuL2pvd3MWg2xOO5INw/0qVysi7mnAfnboi0xmNFoabA76K3lGWOlk1C2MiJfH0nTob54MsP51nQKbISPRsI9kTDfz69uvyeawd5of6Ij0Z28m2lG8zj7aR+gKrR9lNdPlZ6JOcVIWOk1BXuEZvy7dGxoZIbwcIEU0rSMoTxRJz7xYbEyTDujLeeFfX0+RJRqH4kLlttJ+ZytUHcyFMM5aV8dy9Z5fc6m1vh4sBImflFhvhYng2ikiDjDc4NQRIKlpDE/FrzoCBwpjQDpb4k+GVaABzTVwKDCyh7vBu2zWwQ0h/bNzhtQ9V6oiIPlsPaR8ixkBweyPLJ4XshkZSow6GXFJLZQan/liqarpTCEEJANe4ShvBMVy5wax0iuFKApQNGHBgCZfSyJskjqWf6trDLDznOk2G9e6GCqaUyWXtFb7gb19nwPHctExqXY52G8gUDg8MWd5qBY4hKo5Q4t9DLwfDY0IxtvXuhQgdhJGeMDc95A5O3pm8khkfKxqJZJJiFLQ4ZPMhxXy9DkmPLXYl+AqZ0k21BEBKXUTE3VTWBgRPa2mbOn3TIcKG2eepbmWB/JdKVTVJBk8bjDSobUUg5E4JqOJRXye1EKIIgEDGb1ZxdDPZGG1ubgwW4wX0ggqs1+M4qKaihiKlXowDppjD2GJNXiyQrVcSHYCNobeuz9ARqxXjoJ/QHM+XmaBqvJcppD+7/s8qX9W4QS4Z5B9j6BrXfVeb2sa6VThvV+H2lT+cuytVDKVel9r2rLjY8tcpSf6r8+9ZEyTC0O/+e+B2e4Ry6ZqsbTu8ZeP4VinUv0n6q+XfrNgYfvvG7m+9uvvSrUOqp0n3hu4yT7QzC06JmhiStnlgQ3rOMmP379zKkhfPUTv4LJv292nz4arspG90xNPfR+A8zsPfpLEzdTOHhV2/JN73xbFVw0cbzvxfwed8t2sOtvjiX86HvHvzTpY++fOsE/m38s0rPs+LHPudzji1b8+OlmUoSTT08cd5/d/cXJ+tu3Tpy5ZfUu9rb4rb9tGOSXi3f8RfEOKeiTLnRyh3DHn46ZI/GuVafvby/s//in+z44M7jiwL87/7UlNzK8pwmJLU9cs6LmVGiqZ03fC7vBP4TTO4Yhs2r6M/hw9dqVD13+3HLx2V8u23evg9n9h7BiG7th69Jt757uvim/bKvg/fVn9/bDUdSy/uwHmx3PD4cXT1rOm1e9lBWMxX/+KiH/99Fvjn4xvHH7Dfq2k+8su/S68Vjx845zM9f88/CThy7tfUH6+fsXf1+6qdrhOHeu2jE0tXPH2Uscjv8BZV1LvA==

View File

@@ -1 +1 @@
eNptVX1sE2UYHwMTBFRCAP9Q4Kya6dzb3bXXdt1YYOsGm6PrtnZjQ7Be7972br2v3Uc/BhjcFAXGx4EkSvwD99GOOcYWFj4FEghiAE1MJDhQZoSEqCAqJigxwbddJ1vg/uj1fZ7n/T1fv+e5tmQEKioniVP6OVGDCkVr6KAabUkFtuhQ1d5NCFBjJaa7xuP1dekKN5LLapqsFubnUzJnlmQoUpyZloT8CJFPs5SWj/7LPEzDdAckJj6yY61JgKpKhaBqKsTeWGuiJeRK1NDB5IM8jwkQo7BmKYxeAUnXsACkFNWUh5kUiYcpK12Fimn9GiQRJAbyKVFI1oDVbAOargSklK2IpAR6q5oCKQEdghSvQiTQoCCjxJBhCgs3O9YnWUgxKO1rWbO7WUnVjIHJqRygaBoifCjSEsOJIWN/qJWT8zAGBnlKg30ofhGmC2X0hSGUAcVzEZgYu2UMUrLMczSV0uc3q5LYn8kXaHEZPqruS2UHUHVEzRj2oCBKKvNr4qjmIkaYyQIzPhgDqkZxIo+KCHgKxZOQ0/rjExUyRYcRCMj000iMXR6YaCOpRo+boj3eSZCUQrNGD6UIdvLgRLmiixonQCPpqnnUXUb50J3VTBBmx9AkYDUu0kZPuhGHJ12GmhIHtIQwjE/xBC1JYQ4aI3/6/XTQHxCKq/2BBm5VoMJfWoeTDS7GAhuhF/eGI25fZXOpfWUTUxcrEPlIU3UJIBwWJ+Gw2exOQJhxM2EmAF7CN1aYWywVKxvL6BjL+jRLrduPO8h6rsEFNY1plFY1qERlgx6qIOLmKBFcUc+W15XE9Rp9Ob+8vNxTWtmimEsFV9RKlrfavQJZL4WKMBSdHuGY4lX0Knt5XdQmttjibp3x2kvCimyJyYS/0s6WUNW19AqIE1GHtWxieFabBeCZCO04WYCnnoFxbvBQDGms0UWQZK8CVRlND2xPoJJputrWjXgIL36ZzIxRp6fqIYXnd5chThonfKyehxEOzENrmAW3kBhBFlothTYcW+729bsybnyPpeCQT6FENYhoWD5O+STN6mIYMn2ux5L9RIrsqJOp8NGUAhiTJRWCTFRGfyOoG9sfoLLs4NhkAUkJUSLXmnZrnEizPtoaizK0zjBsJCrgzlbSygWgTgeHM1dkRUq5QQEBQTW6SCs5kNGM864P5YoDAgc4cSwG0JhDnhM4VM/0b2aJqUa3DRX7yKMGGto6aN0lyXQ38JMTLRQoIMKmfD+EIZ1O5+ePNxqHsiITp8N+bLKVCidGQ1gE9cijBhmITlztj41bA44xRl5CB78tYLMU4EwAWi023A6DpA1xK2CF0BZwWgnoPIpWH0cjlFQzZUnRgApptLG1uDGSJ1Cx1I4pthI2qx1lWoRxIs3rDPTqgTIplYNahMkK5CWKOeBaBlwUzULgTfPPSJahSXNXug41golEAh557GuRFCVV5ILBhBcqqDFGH81LOoOWpQITCKuupMkYLmBIAvkNEiiFgiBkQClaQ+No/9OuO7VpkxSPYo/QxkHWWmwqJEmrqQgTqOICO2pT+pvyTiKVqxg6O2Xtoi3Ts9LPVL52qPoMPvf7u/Oeyc8Dzdk9nTNzS2dN3zRlWumG3Mvxre4Fp368l+3rfbBt58ZDH5gX/rbu5vG70ey9qwe+2NtxKXD+/L2cy44793YlO0avn17zw3u3Ni9smWNeJAqtna1b24ISe2OlzvT2L97cNZhwl4HvjiS6buUteOr9ZAjU1m07aczb/df27RvX77vecb+dfC7yetXX2GfE4pysrNHbI8ws59v7iFmtpyo2XTy21HZtdXaNsa79la9uNWqzL13dubHtLZZecuXlG3OffiLY++KZnF+m80/+8/ybp6uOLO0KL5u9pWrw/tSra/HlH7avufDB3T2699Vvr0X/bt71jbe2fE/LOen3ozO33SY7gzPcC+f3+S4MzNlwObLlvqe4vjz3tY9nAJzc+PMfV6qFy6MveArrKoZHfxrpPXxO2NPF3oyHquDw0EfS7nYmeeWB415T/69Ha/5VKwtzzlbNz0uqz87oqNdu3xn2DuxchKr84MHUrP07pi35JDsr6z8PST+u
eNptVQtsU+cVDgLRVlPaTBQQQms8066Q5rfv9fUzWZSmdpI6iZM0NiGhm9Lf//3te+P7yn04thmtgE6jQCXuKgErj6rE2DRNwrOUhoQ92CrYoy+1SEm39N2qXdVpQ5Xo6Ep/O85IBFfy495z/u9855zvnLs1n8SqxsvSohFe0rEKkU5uNHNrXsUDBtb0J3Mi1jmZzXZ2hCNDhspPVXG6rmg1djtUeJusYAnyNiSL9iRtRxzU7eS/IuAiTDYqs+kpYZNVxJoG41iz1jy6yYpkEknSrTXWCBYEi4gt0NIvJ8hPVDZ0SxRDVbNWW1VZwMTH0LBq3fzzaqsos1ggD+KKDhibC+iGGpWJn6arGIrWmhgUNLw5z2HIkpRmyiqynKzp5thCmscgQpggYAnJLC/FzdF4hleqLSyOCVDHw4SchItFMIcTGCsACnwS52ZPmcehogg8ggW7vV+TpZFSMkBPK/hm83CBPSCZS7p5uoOQaAjaO9OknpKFtrlpG308BTQd8pJACgQESPjklKL93HyDAlGCgIBSr8zc7OGx+T6yZh4JQdQRXgAJVcSZR6Aqup2n5j9XDUnnRWzm/Z03hysZb4RjbLTD5j2xAFhLS8g8Uiz6ywsOY11NAyQTDPN5KodkOcFjc3rRbX19KNYXFeuaMt0u/2BHu7Yh7WlsNxLQL6EmIZUJtjUqjXFbWKCNJNVi9FJcCNAeJ+3weL2MC9A2ykZyBkGjTac7m/tl0RH1BjemI2If1YnCzaFWpokRwrwt2Rps07yhxlhXl+EKYKWnRersiEn+9oZWFGn1C20q9vcEbL2phrTLkXxkfcLVNoD62nimoV8NRlUYb4O+zMZOd6C31kIoG0merUs12WwupTXp0CQUHkDKQxyfoR/ObGQTjXExHIjIA93hABfu7fI+Mo8zRbkBVaLtppxeqnCNzSlGwFJc58whmvIeVbGmkHnB23KkkLqhbc0SdeK/XsyXBudwR+sNYa/IBohSzckIZ1RbKLclBFWLg3K4LLS7hmFqXC5Lcygy4i+FidxSmCciKpS0GBFn49wg5BFnSAnMDvtvOQKThREg/S3QJ6MJcEqRNQxKrMyRHtA1uzFAMHBqdt6ArMahxGeKYc3J4iwMZlKDLDJYlksOipQv42T4KDZQ7HTpiKLKhTCEEBA1c8jh842VLHNqHCa5UoCmAEWPp4BKSiHwIk/qWfwurS3NzJLyU2dvdtDJpiELLu8sdoM6P99DxSKRcSH2DRinz+ebuLXTHBRDXHwe1/hCLw3PZ0M7RO3szQ4liMOUNpKa8wY8a07dS276fJjC0EVHfYybdUVdvhhiPZQbuukoZijswK+Q3ccjglJopiKrOtAwIjtaT5tT1SJMFTZPHUO7GDfJtNbCS0gwWBw2ogG5kINWa1FULMiQPYZiAEHEYTCrPzMf6G1vCAX9Z3rAfCGBDmX2/ZCXZE3iY7FcGKukMeYwEmSDJStUxTl/E+hq6DVP+2jEOGkcY1inj4l6WfAQWU5zaP+XXbawf/NQINyTyDzFMXXWGqeTsdZaRFjndZM2Fd8iW3KFXKX4nxaNV+68vax4LSaf69d3dYUSq+mKyX8d2/eFcNu7q5d91Mq3b9nzxuUjtidPXnzry5aXutdkp8Z/fPW3K0frL+fKPz58YdOVQzO/mLi77A/9J5Y8v4t9Z6nnfF91d1/lXy69bH+xo/Lcu5V3TTx+deK7787uF+pH/rj2Ae7KD5Y/F/HsmH5/N/hm8ai15dUvaybPHXztqxU71ZllgNd7Ly+5Z6/nCj044P/oku6uryrv/WWw+oMXyspSn7934M3Et2v2UKsO7l4R/nX5jk9euf3BgLrqh4779vdkVgxtiXy8anPlJvFQw6PlQoV77RrB+megL0ruCd11f7Mwc6Guauq+JduOP3PHvrVM7dF1W39/9av/bNtu7GWlNwZf+9G//+c7MfSTwHR2zROvJv4pOtYHfnPxM/2pddsP/WNlWf21S/qGv+2oeKH8Z7T4ZvwCnz6+7IJtqZo88Oxj0c+XTvz0qbG/Vz1dPdq68pnlLVXbMp13JnYfPKM33L2y7vrqT5+Y+dX5UXkmWl/RAh97e/32o6PjUmrtzn32r186eWXXtd9Z4bfi8m6B/rSOG8Gf3em59y0tKrw4/d+laPuZ+uzjr1/70F5s0OIyZnDa3Uy69T3LDll/

View File

@@ -1 +1 @@
eNptVX1sE2UYHyAIxOAiqAn+sUvBRHDX3vWuXbs5ktExQDa6jwIrhpS3773XXntfu4+uHRKyOUgIDDgJMVFAga7FOccmhI8pKhINGBMSwI8ZIcAfEgNBEREVyXzbdbIF7o/23ud53t/z9Xue68gmkKYLijyhV5ANpAFo4INudWQ11GIi3ejMSMiIKly63t8UOGBqwtD8qGGoernDAVTBrqhIBoIdKpIjQTtgFBgO/K6KKA+TDitcaujNdTYJ6TqIIN1WTry2zgYV7Eo28MFWL8A4AQgNyJwiEbIphZFGgLCSQARlKyVsmiKinJ2pI822fg2WSAqHxJwoohokY3eRhqmFlZytjKU0/tcNDQEJH3gg6ggLDCSpODVsmMOi7NT6bBQBDid+uag4HVV0w+obn8whACHC+EiGCifIEevDSJuglhIc4kVgoB6cgYzypbJ64gipJBCFBMqM3LL6gaqKAgQ5vSOmK3JvIWPSSKnoUXVPLjsS10c2rCN+HETVUkd9ClddJmg767FT/UlSN4Agi7iMpAhwPBk1r/94rEIFMI5ByEJHrczI5b6xNopuddcB6G8aBwk0GLW6gSa52cNj5ZopG4KErKyv/lF3BeVDd4ydpu1lA+OA9ZQMre58I46Nu4wMLUVCBWNY+6gMVJS4gKyh30MhyIfCUiW1pKkVJj1qLAAba0RzOR1uWVRW1rzCVa27Fy9pDIb0gLZECIeEGkjSZU4vXeZyuRiStlN22k6T1aY91NKS1LyLG1riVEhcykqp2mauKri43r1SrJFjYSXpERY6gzGxHiV4KAZWxUIxZGeUZrkOtHh8DfWNNauooOxNJsNAavD5eL9YVUHg6MyEwFWyKxmlXgItiYW1bQE/7YxLiWZ7SBJXOX2GEl9dXcsY9jYfsyQQC44Jj8IRUoUI3RTroXJP3yg3RCRHjKh1gGbdBzWkq3h+0BsZXDLD1DvSmIfomzPZwiDt9y97SOHn0tWYk9bJQNQsJegywg8Nwkk5WYJmyxlnOcMSi+sCvb6Cm8BjKTgQwCOo85iGi0Ypn4VRU44jrsf3WLKfzJEddzIXPp5SEiVVRUdkISqrt5lsHNkg5NLqwyOTRSpaBMhCW96tdTLP+ta2ZCsHTY6LJlolytvGMkIYmZA/UriiakrODQ6IlHTrgMdL9xU0o7zrwblSJE2RFD2YJPGYI1GQBFzP/G9hjelW2oWLffxRA9wvhBdels13g/p0rIWGJEzYnO+HMKzX6/3k8UajUAw28Za5B8db6WhsNLRT0o8/alCA2E/pvclRa1LgrKG5+BDyeHBbEcWHUZgHTJhy8U6Ogy7K5QE8T1HMCbz6BIhRcs1UFc0gdQTxzjZS1lCpBJK5HVPJ0C7GjTOtIAQZiiaHmsxwtZLLQa8gVA2JCuAO+WpIH4BRRDbl+Wdlq4PLq+qW+o42k2OJRPrVke9FVlZ0WeD5TBPScGOsHigqJoeXpYYyGKuxKmgd8XAsxfMgzEIXw0LoJhfiNTSK9j/t0rlNmwUijj0BrcNRptJWzrKMrYKQQKXHjduU/6q0Z3K5ypEvJzSUbJlalH8miTto/2mqeOO9f6ff+DkdW+hbdnTG1M7ptU9N3XfG7Nt8aqBrvTV9xXCFNvvH9xK3f1vwWeuv8vP0lI86Otqjns3f9XB/9/8z+ODqpUtiy7vkMceuttWV117mh+7emXjtCnXj6Vl/XH976oXVG+d2LVh7dcfMTReNvZfSd145F6mM1m2c/0Ns2dlv5/oP6vGdtRdKS3Z3WRMrZ90kw1sH1gxv2jb4BTlLnD08j2w7F++89tdmWq2ZPPn4vfMzv3/i9rTpB6aZoYp3trefrrueeilT0T1l/Ybd6VppT/B+e8ndrovny7p9t7pmvL9368a9t+Z8dfl+1+CGF0+VX1g9/+zr+2e0/3Kz+MnsC9uufH6meNUc73ZH565nn3nr6+0zJnZsmfLBTn12l7uiJDXvWv+ei3eDr7Jz24f/7JROWIM1kT03/T+FtJIHE4uKhocnFT0oPrUW4Pf/ANGuMu4=
eNptVXtsU2UUL6AZQWVoFBON8VogGbjb3UfbtcUZt3Udc6zd1sKYaJqv3/3a3vW+dh/tHszgUOKDMa4CYkRA2FqzjDkdkzeKBsWIkaB/OCCQaGKEGaPxFcPLr10nW+D+cXPvPef7nd8553fO7c4kkarxsjRjkJd0pAKo4xfN7M6oqNVAmv5iWkR6XOb66gPB0B5D5ceWxHVd0TwlJUDhbbKCJMDboCyWJOkSGAd6CX5WBJSD6YvIXPvY2k6riDQNxJBm9azutEIZR5J0q8daz8MEAQgVSJwsEpIhRpBKgIicRARlLbaqsoCwl6Eh1dr1XLFVlDkk4A8xRSdZm4PUDTUiYz9NVxEQrZ4oEDRUbNWRqOBMsBWfpmxUVyaOAIfTvGCZ1xeXNd0cmk79fQAhwphIgjLHSzFzb6yDV4oJDkUFoKMBTFhCucKYAwmEFBIIfBKlJ06Zw0BRBB6CrL2kRZOlwXyCpN6uoFvNA9l8SFwNSTf3BTCJ8pqS+nZcY4mgbU7aRg+3kZoOeEnARSMFgPmklZz98FSDAmACg5D5/pnpicNDU31kzeyvAzAQnAYJVBg3+4EqOu0jU7+rhqTzIjIzlfW3hssbb4ZjbTRjc30wDVhrl6DZn2vD/mmHka62k1DGGOa7VBrKcoJH5tkZBeEwjIYjYll7tdrUFvX5pFqfEkzyKS/PNbhal6dcvoYVLdV8OKLKxoqwI+pQmkm61E4zpS4X4yBpG2XDOZOVcci0G8tTjnq6okaCTTybaOWblvsT/homVNXAQjcl1Cng6YDbHWryS1XLUJXidqtQSvoatUgDQI3VjY16yihvCKFgoqatRXY0NDHPlENXfaC2ASQVWuhYVWkv5RhtKYEpG0meK/MnaG9LKJJ0OJetiLN+LuCvavZXca1cReMzgSSyB1rDwfCqyEqmBk7h7KKdJJWn7aTsLip7DU0qRkBSTI+be2jG9Z6KNAXPEFqXxoXUDa27D6sTnTqZyQ/T7kDtTWHP7/NipZpHQ3GjmKCcRB1QCYZiHATt9LCsx+4kqutCg5X5MKHbCvODEB5ELYrFWTU5CBkYN6QE4gYqbzsCR7MjgPubpY+HlURtiqwhMs/KHFxFNk5sEbLGOzIxb6SsxoDEd+TCmkdzs5DqaEtx0OC4eDIlUu4OO8tHkAGj+/JHFFXOhsGESFHDxXExQ3nLpBoHcK4USVMkRR9qI/HsI4EXeVzP3D2/yjSzz4GLfeBWB11OILz0MvZcN6hjUz1UJGIZZ2PfhLG73e4jt3eahGKxi7vUeWi6l4amsqEZUTtwq0MeYjelDbZNepM8Z44txC9hdykHWQcCkYg9iqIMa3c5KQo5aQTcTpaNMgfxNuQhRsk2U5FVndQQxHtbbzfHikXQlt08ZSztYPExainBS1AwOBQ0Il45mwMWuKIiQQbc+zBKQgDjiJzQn5nxNvvL62oqP1pFThUSGVAm/hkZSdYkPhpNB5GKG2MOQEE2OLxCVZSu9JGN5c3mPjcNWTsdKYWcq9TF0TRZgZfTJNr/suvL7t8MEDD3JDRH4myZ1WO3s9alhAjKXE7cptyf5YV0NlcpdmJG96OvzbbkrlkbXg/Xfkbdf+LilcUVOw6D8eubjy0uerWQ8Xq9T289b7zF3/f66nc2dKVqzu19xP9bf+E/qW8uHfwn+uC9Ff273O+u+XpT08qeoZFfLl1fYfvwzROf9ZwtenIk3PvN/tR58a8HetZ9Wlz6ye+7HtKCzXcW9Uh06pN53Qlm58JLT3x1pnnWkgVP3dnMt+ruHRuHk7XsoWVn+MyzD395/NvtR6penPdh4thje+YP97sPvly4Zc6R2KLTV7/3zjZ8L81BF2p7ly/o6TzZ4ztd94v+5B1zV47GHts2tOXy5Ss/7ty8eC9be/F32LVo9M/xT2f84Sto6v38zPpfF1RcePuVy29s+s4TbC4+vWbt3PVf9o3uuTa3c1vy7nsYx+lTP7DRlyxc47/HK4pia+7Z3fT3OTi6KWrZffzkxyPb/5C+2Hroam9XoavgNWLO+FPLnvfIF38inhj++OzGx6WZwcJfg2pRffddsXnDkdGWQODa/ur3Xt3y3SLXOKfd+OnUlZ8LLJYbN2ZZNj9EJK7PtFj+A1XHTc0=

View File

@@ -1 +1 @@
eNrtVVFv3EQQVsUfWa2QkND5zufz3SUmrRRFUVuVEFACUhUia8+es7exd93ddS7X6B4IfeXB/AIgUVJFLfCAeIFKPPLAHwjv/A9m7bsEQlCrPtfSyXczOzPfzH7z3dHZPijNpbj1nAsDikUGf+ivj84UPC5Bm6enOZhUxsd317ePS8Uv3k2NKXTQ6bCCt3XOTdrOmEiilHHRjmTe4WIsT0Yynv52lgKLMf3T8081KGc1AWGqn+zpOs4pph233W13vaUfVqMICuOsi0jGXCTVi+QJL1okhnHGDJw27upHVhQZj5jF2HmkpThfk0JAjbk63wMoHJbxfXimQBfYBnx5qg0zpT46wbzwx+9nOWjNEvhu88EC3FfHn3FWnSMMkkiZZPA8kjgJYRwzLeC/JU+xFZxZdVbu80gq8bPFprWDSIySmbPBDmyn1fHAdX+95lvNMjlxNuqB6urb979fm5f6EERi0urYX+r/cmPMpuIJF9U3F+RG95qCGPNwlunqxKgSTmIcW/VyOy1bpDskm5Ehnuv5pOsHPS/o9cndje0XEYtScKImU/VMSKe2nK1mxtnaj6qLdtq7TQPf79EPSM5ue/1lz3XdVtpzvOUbHC+vYVs/KKQG514zaOz35nlc+WvafIF3ppADf97665DO2UkD6raH7YFPWxRvA/BqQzgouKrvJTQ8BxqIMstadMRMlIYYj+QNsbcxT2hwSEuMyMvM8IIpE4KIC4mEp4EdVovqiGUQlkX4WPMnEGL9JAFFg65t98orTKoQrA4zjgRG92DhjOVEhALywkyvon302nSL03WuS0M4mhrQNPDc5WG377mzFuUC6SoiCJH1ibawcWVwKQ2EjIe4j2qK0Nkog3iBXKokjBBUPYeY67lzjEywfaVyEhqThSVfBBjcceyQgwrjcj6/mE3rapkUid0fTODXYFOpzNzQ9RGgBqZwutcwTKTa04VNqyNZQGgxcbHP6/YWSHqhNlLh7v07ejb7f6lZe5XU4AetuqOyvIOpnUJJvAEny1jOOlY4tHmrRG+mREvDt0p0qUTv7B3ShmthynSKajRY7jEvHvV9fwi9UX8pcgcuG7DI9Yb9pTgajrvMPj7AwOv6cdcbjsZDd4Q4u/2lQc9FHcuZ4GNkqF08jtuwQy/Jjd6Gyhq/ocXgaw1fH9fGbZQZy0W6i2IY4WbiUiNBEBUOEBGXES4aRuxNmGpUZM41/L7zWrXulQhuowl605pN0lc1Nz/Vom9axiwiArpz/6Ot7d2Vla2HW3fukIeyJEwBYYIwrbkVVkPGUpFaa3CHHCb0BOz9EsP0nm4T1AZiUsBTlg3WUXBAmhA5JgqQCoBKSOqlPDDESNJkqGMWWdvk/phMsXYsxXuG7Ak5qf3N0RZ5VGpDNJuikZlrBxcIFADRYPfRFs/ZAc/LHDPExArLP9JZLBHX0F5Z6TRdfy4+mQMJyOEC0wzNaw1qtM7xW+NqnSUgO516dNT+AxWlCfeZ4lajLWHoIoulRxNqb2cx9xBHmiNpAjp2mm2hM3x2XzvVDP9W4IBhtvrM7uxvcr/ytw==
eNrVVk1vG0UY5uPGkV8wWiEhIa+99vojNilSFKq2QGhRTNUqRKvZ3de70+zObGdm47iRD5SekZZf0JIorqKWooK4QCWOHPgD4cBv4Z21nVInbapwwrJl+/2a5/16Zu9OtkEqJvibjxjXIGmg8Y/67u5Ewu0clL53kIKORbh36WJ/L5fs6L1Y60z1ajWasapKmY6rCeVREFPGq4FIa4wPxL4vwtHvkxhoiOHvHX6pQNorEXBd/GysSz87G9Wcar1ab3aerAQBZNq+yAMRMh4Vj6M7LKuQEAYJ1XAwVRc/0ixLWEANxtotJfjhquAcSszF4RZAZtOEbcNDCSrDNOCbA6WpztXdfYwLf/4xSUEpGsH3Vz+dg/v7jXd/M+GVsjGYliKxV5JEDO21Mm9VPPhg/2PEUDzrx3mFOG2yRiVpOI0Wqbd7rttrdsiltf6vp8a4KlnEeHH/pz5LMa0F6eNVGsQwdymeZrmP2VVISndsBHmh7UxWEm2vbwfFUTV2L1i9ZtO1PkT9hUar23AcpxK7dqN7iuLZApyLO5lQYF+e5ow5nZ7zc/3+dSpHxcEU6g/GCptnfwY80nGx1+40flkIsIagscOoc5y964wWh9hZEgkRJfDkhm2sSxgMe1M8cA6uSRqltHjIhR2YMjy9YWOZaSgiu49jCPaVsDgi9S64IQ0dGviNptsMO92u67ZD120N/G7Q8Y/IqXmsSggRL6OJKva1zOHRPIP+KIOTczSZA5s1uU4+oRwP7zjEcXrl2zS5HOuvcaYkNvOvt+7vWrPtsXqWU+1WW22rYjGOM8cD8HB0I2X1di0/Eb6ntMCMwQNO/QRCq2dgVRZ1WGzAYOsuBgqxHAq0Bzs0zRJQXponmmVU6sUgZ1vg6uFya/Ao83Cv5WjRQMjICySUJfFCpmbKAVYQtRkdpVi9RacMsxecJh56q5Neyn1Z1gqoDOIT0lgMPa0TL2dzkTaj4GkG0gtzOUNHR2VZE8Ejs+3o38RVMO5SzwT15rhiDYXcUpkJoAKRgUHpMb7NNKhjjHeUDj2krQy7bzr5IiYM4lONSLHfSIZoyAcsMofnCl6odpgJJNDjTAKagJdn3m3F7iB+3KIIJMJySqBzLdcxljxUXoL0gM719lwZiiH3OKSZHj33bqLWhJtbl7GOBZ4/KhNrON1OvdVwxuN3Xs7iq2exOH5QqmoySWvYQTuTWCNtJwlNac1wstL/I5L/9iyKPw9974evvBbahjHOTeCPghlh6VMJ6z/z+wlKX+q0HpdEbAez++iYml+XZF91JRzgcCBZFpN8mwVC8sUr4kVqfXtr15pOoBdTFSMjtrsubYR+q9nsgOu3lgKn7dA2DZxGp7UUBp1BnZpXE6DdqDfDeqPjDzqOj2Wtt5baroN8mlLOBji3Zn0ZLviGdTzyqJ0OuMJfKNH4tYpf10phH/fQTKi1WbGSABcPeQm7gqiwVIg4D5Dl0GNrSOWU8WcTiL83XuusyzmCW5s6nffMadCzkptZVazzHqPnHj1r48rn6/3N5eX1m+sffURuipxQCQQvTopUai5BTQZCkpKBcHBtytUQTH8JXmxbqkqQMYiOAa3MQBlFxgBHiIgBkYCjAEjmpNyEHU20INMIpc88apVcGZARnh0K/r4mW1wMS/3UtEJu5UoTRUcopHrBcI5AAhAFZh/M4fj4xdI8xQghMXTzr3AGS8AUVJeXa9Osv+JfzID0yO4c0xjFq1PUKJ3hN8KVMkqPbNTK0pVPC1muvW0qmbl2zMBY8yhmPKaupjvzuntY0hSHpmcN7Om2WGN8bb52qPH4+QMD2myO/wFK03fT

View File

@@ -30,7 +30,7 @@ At a high-level, the basic ways to generate examples are:
- User feedback: users (or labelers) leave feedback on interactions with the application and examples are generated based on that feedback (for example, all interactions with positive feedback could be turned into examples).
- LLM feedback: same as user feedback but the process is automated by having models evaluate themselves.
Which approach is best depends on your task. For tasks where a small number core principles need to be understood really well, it can be valuable hand-craft a few really good examples.
Which approach is best depends on your task. For tasks where a small number of core principles need to be understood really well, it can be valuable to hand-craft a few really good examples.
For tasks where the space of correct behaviors is broader and more nuanced, it can be useful to generate many examples in a more automated fashion so that there's a higher likelihood of there being some highly relevant examples for any runtime input.
**Single-turn vs. multi-turn examples**
@@ -39,8 +39,8 @@ Another dimension to think about when generating examples is what the example is
The simplest types of examples just have a user input and an expected model output. These are single-turn examples.
One more complex type if example is where the example is an entire conversation, usually in which a model initially responds incorrectly and a user then tells the model how to correct its answer.
This is called a multi-turn example. Multi-turn examples can be useful for more nuanced tasks where its useful to show common errors and spell out exactly why they're wrong and what should be done instead.
One more complex type of example is where the example is an entire conversation, usually in which a model initially responds incorrectly and a user then tells the model how to correct its answer.
This is called a multi-turn example. Multi-turn examples can be useful for more nuanced tasks where it's useful to show common errors and spell out exactly why they're wrong and what should be done instead.
## 2. Number of examples
@@ -77,7 +77,7 @@ If we insert our examples as messages, where each example is represented as a se
One area where formatting examples as messages can be tricky is when our example outputs have tool calls. This is because different models have different constraints on what types of message sequences are allowed when any tool calls are generated.
- Some models require that any AIMessage with tool calls be immediately followed by ToolMessages for every tool call,
- Some models additionally require that any ToolMessages be immediately followed by an AIMessage before the next HumanMessage,
- Some models require that tools are passed in to the model if there are any tool calls / ToolMessages in the chat history.
- Some models require that tools are passed into the model if there are any tool calls / ToolMessages in the chat history.
These requirements are model-specific and should be checked for the model you are using. If your model requires ToolMessages after tool calls and/or AIMessages after ToolMessages and your examples only include expected tool calls and not the actual tool outputs, you can try adding dummy ToolMessages / AIMessages to the end of each example with generic contents to satisfy the API constraints.
In these cases it's especially worth experimenting with inserting your examples as strings versus messages, as having dummy messages can adversely affect certain models.
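
As a rough sketch of the padding trick (the `multiply` tool and the message contents here are illustrative, not taken from the text above), one padded example might look like:

```python
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

# One few-shot example whose expected output is a tool call. A dummy
# ToolMessage and a closing AIMessage are appended so that providers which
# require ToolMessages after tool calls (and AIMessages after ToolMessages)
# accept the sequence.
examples = [
    HumanMessage("What is 3 * 12?", name="example_user"),
    AIMessage(
        "",
        name="example_assistant",
        tool_calls=[{"name": "multiply", "args": {"a": 3, "b": 12}, "id": "1"}],
    ),
    ToolMessage("36", tool_call_id="1"),  # dummy tool output
    AIMessage("3 * 12 is 36.", name="example_assistant"),
]
```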


@@ -74,6 +74,8 @@ As an example, query decomposition can simply be accomplished using prompting an
These can then be run sequentially or in parallel on a downstream retrieval system.
```python
from typing import List
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage
```
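
The hunk above ends at the imports; a minimal sketch of how the decomposition example might continue, assuming a hypothetical `DecomposedQuestions` schema and prompt wording:

```python
from typing import List

from pydantic import BaseModel, Field
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

class DecomposedQuestions(BaseModel):
    """Standalone sub-questions derived from the user's question."""
    questions: List[str] = Field(description="Standalone sub-questions")

# Bind the schema so the model returns a parsed DecomposedQuestions object.
decomposer = ChatOpenAI(model="gpt-4o").with_structured_output(DecomposedQuestions)

result = decomposer.invoke([
    SystemMessage("Decompose the user question into standalone sub-questions."),
    HumanMessage("How do LLM agents combine tool use with long-term memory?"),
])
print(result.questions)  # each sub-question can be sent to the retriever
```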


@@ -91,7 +91,7 @@ For more information, please see:
#### Usage with LCEL
If you compose multiple Runnables using [LangChains Expression Language (LCEL)](/docs/concepts/lcel), the `stream()` and `astream()` methods will, by convention, stream the output of the last step in the chain. This allows the final processed result to be streamed incrementally. **LCEL** tries to optimize streaming latency in pipelines such that the streaming results from the last step are available as soon as possible.
If you compose multiple Runnables using [LangChain Expression Language (LCEL)](/docs/concepts/lcel), the `stream()` and `astream()` methods will, by convention, stream the output of the last step in the chain. This allows the final processed result to be streamed incrementally. **LCEL** tries to optimize streaming latency in pipelines so that the streaming results from the last step are available as soon as possible.
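
For illustration, a minimal composed chain (the model name is an assumption) whose parser output streams chunk by chunk:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

chain = (
    ChatPromptTemplate.from_template("tell me a joke about {topic}")
    | ChatOpenAI(model="gpt-4o-mini")
    | StrOutputParser()
)

# stream() yields the output of the last step (the parser) as it is produced
for chunk in chain.stream({"topic": "parrots"}):
    print(chunk, end="", flush=True)
```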
@@ -104,7 +104,7 @@ Use the `astream_events` API to access custom data and intermediate outputs from
While this API is available for use with [LangGraph](/docs/concepts/architecture#langgraph) as well, it is usually not necessary when working with LangGraph, as the `stream` and `astream` methods provide comprehensive streaming capabilities for LangGraph graphs.
:::
For chains constructed using **LCEL**, the `.stream()` method only streams the output of the final step from te chain. This might be sufficient for some applications, but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of the chain alongside the final output. For example, you may want to return sources alongside the final generation when building a chat-over-documents app.
For chains constructed using **LCEL**, the `.stream()` method only streams the output of the final step from the chain. This might be sufficient for some applications, but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of the chain alongside the final output. For example, you may want to return sources alongside the final generation when building a chat-over-documents app.
There are ways to do this [using callbacks](/docs/concepts/callbacks), or by constructing your chain in such a way that it passes intermediate
values to the end with something like chained [`.assign()`](/docs/how_to/passthrough/) calls, but LangChain also includes an
@@ -125,7 +125,7 @@ prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
parser = StrOutputParser()
chain = prompt | model | parser
async for event in chain.astream_events({"topic": "parrot"}, version="v2"):
async for event in chain.astream_events({"topic": "parrot"}):
kind = event["event"]
if kind == "on_chat_model_stream":
print(event, end="|", flush=True)
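
A self-contained sketch of the snippet above (model name assumed); each `on_chat_model_stream` event carries a message chunk in `event["data"]["chunk"]`:

```python
import asyncio

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

async def main() -> None:
    prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
    chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()
    async for event in chain.astream_events({"topic": "parrot"}):
        if event["event"] == "on_chat_model_stream":
            # Intermediate chat-model tokens, available before the final output
            print(event["data"]["chunk"].content, end="|", flush=True)

asyncio.run(main())
```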


@@ -16,7 +16,7 @@ This need motivates the concept of structured output, where models can be instru
## Recommended usage
This pseudo-code illustrates the recommended workflow when using structured output.
This pseudocode illustrates the recommended workflow when using structured output.
LangChain provides a method, [`with_structured_output()`](/docs/how_to/structured_output/#the-with_structured_output-method), that automates the process of binding the schema to the [model](/docs/concepts/chat_models/) and parsing the output.
This helper function is available for all model providers that support structured output.
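
A minimal sketch of that workflow (the `Joke` schema and prompt are illustrative):

```python
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")

# Binds the schema to the model and parses the response into a Joke instance.
model = ChatOpenAI(model="gpt-4o").with_structured_output(Joke)
joke = model.invoke("Tell me a joke about cats")
print(joke.setup, joke.punchline)
```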
@@ -99,20 +99,10 @@ Here is an example of how to use JSON mode with OpenAI:
```python
from langchain_openai import ChatOpenAI
model = ChatOpenAI(model="gpt-4o", model_kwargs={ "response_format": { "type": "json_object" } })
model = ChatOpenAI(model="gpt-4o").with_structured_output(method="json_mode")
ai_msg = model.invoke("Return a JSON object with key 'random_ints' and a value of 10 random ints in [0-99]")
ai_msg.content
'\n{\n "random_ints": [23, 47, 89, 15, 34, 76, 58, 3, 62, 91]\n}'
```
One important point to flag: the model *still* returns a string, which needs to be parsed into a JSON object.
This can, of course, simply use the `json` library or a JSON output parser if you need more advanced functionality.
See this [how-to guide on the JSON output parser](/docs/how_to/output_parser_json) for more details.
```python
import json
json_object = json.loads(ai_msg.content)
{'random_ints': [23, 47, 89, 15, 34, 76, 58, 3, 62, 91]}
ai_msg
{'random_ints': [45, 67, 12, 34, 89, 23, 78, 56, 90, 11]}
```
## Structured output method


@@ -30,7 +30,7 @@ You will sometimes hear the term `function calling`. We use this term interchang
## Recommended usage
This pseudo-code illustrates the recommended workflow for using tool calling.
This pseudocode illustrates the recommended workflow for using tool calling.
Created tools are passed to the `.bind_tools()` method as a list.
The model can then be called as usual. If a tool call is made, the model's response will contain the tool call arguments.
The tool call arguments can be passed directly to the tool.
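A minimal sketch of that workflow (the `add` tool and the model name are illustrative assumptions):
```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


llm_with_tools = ChatOpenAI(model="gpt-4o-mini").bind_tools([add])
ai_msg = llm_with_tools.invoke("What is 2 + 3?")
for tool_call in ai_msg.tool_calls:
    # Passing the ToolCall directly to the tool returns a ToolMessage
    print(add.invoke(tool_call))
```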

View File

@@ -50,11 +50,6 @@ locally to ensure that it looks good and is free of errors.
If you're unable to build it locally that's okay as well, as you will be able to
see a preview of the documentation on the pull request page.
From the **monorepo root**, run the following command to install the dependencies:
```bash
poetry install --with lint,docs --no-root
```
### Building
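Then, still from the monorepo root, you can build the docs — assuming the root Makefile exposes a `docs_build` target (check `make help` if yours differs):
```bash
make docs_build
```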
@@ -158,14 +153,6 @@ the working directory to the `langchain-community` directory:
cd [root]/libs/langchain-community
```
Set up a virtual environment for the package if you haven't done so already.
Install the dependencies for the package.
```bash
poetry install --with lint
```
Then you can run the following commands to lint and format the in-code documentation:
```bash

View File

@@ -73,17 +73,18 @@ In order to contribute an integration, you should follow these steps:
## Co-Marketing
With over 20 million monthly downloads, LangChain has a large audience of developers building LLM applications.
Besides just adding integrations, we also like to show them examples of cool tools or APIs they can use.
With over 20 million monthly downloads, LangChain has a large audience of developers
building LLM applications. Beyond just listing integrations, we aim to highlight
high-quality, educational examples that inspire developers and advance the ecosystem.
While traditionally called "co-marketing", we like to think of this more as "co-education".
For that reason, while we are happy to highlight your integration through our social media channels, we prefer to highlight examples that also serve some educational purpose.
Our main social media channels are Twitter and LinkedIn.
While we occasionally share integrations, we prioritize content that provides
meaningful insights and best practices. Our main social channels are Twitter and
LinkedIn, where we highlight the best examples.
Here are some heuristics for types of content we are excited to promote:
- **Integration announcement:** If you announce the integration with a link to the LangChain documentation page, we are happy to re-tweet/re-share on Twitter/LinkedIn.
- **Educational content:** We highlight good educational content on the weekends - if you write a good blog or make a good YouTube video, we are happy to share there! Note that we prefer content that is NOT framed as "here's how to use integration XYZ", but rather "here's how to do ABC", as we find that is more educational and helpful for developers.
- **Integration announcement:** Announcements of new integrations with a link to the LangChain documentation page.
- **Educational content:** Blogs, YouTube videos and other media showcasing educational content. Note that we prefer content that is NOT framed as "here's how to use integration XYZ", but rather "here's how to do ABC", as we find that is more educational and helpful for developers.
- **End-to-end applications:** End-to-end applications are great resources for developers looking to build. We prefer to highlight applications that are more complex/agentic in nature, and that use [LangGraph](https://github.com/langchain-ai/langgraph) as the orchestration framework. We get particularly excited about anything involving long-term memory, human-in-the-loop interaction patterns, or multi-agent architectures.
- **Research:** We love highlighting novel research! Whether it is research built on top of LangChain or that integrates with it.

View File

@@ -35,7 +35,7 @@ from a template, which can be edited to implement your LangChain components.
- [GitHub](https://github.com) account
- [PyPi](https://pypi.org/) account
### Boostrapping a new Python package with langchain-cli
### Bootstrapping a new Python package with langchain-cli
First, install `langchain-cli` and `poetry`:
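For example (exact version pins left to you; this assumes both are installed from PyPI):
```bash
pip install --upgrade langchain-cli poetry
```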
@@ -104,7 +104,7 @@ dependency management and packaging, and you're welcome to use any other tools y
- [GitHub](https://github.com) account
- [PyPi](https://pypi.org/) account
### Boostrapping a new Python package with Poetry
### Bootstrapping a new Python package with Poetry
First, install Poetry:

View File

@@ -35,5 +35,5 @@ Please reference our [Review Process](review_process.mdx).
### I think my PR was closed in a way that didn't follow the review process. What should I do?
Tag `@efriis` in the PR comments referencing the portion of the review
Tag `@ccurme` in the PR comments referencing the portion of the review
process that you believe was not followed. We'll take a look!

View File

@@ -127,20 +127,18 @@
},
{
"cell_type": "code",
"execution_count": 11,
"id": "27bd1dfd-8ae2-49d6-b526-97180c81b5f4",
"metadata": {
"tags": []
},
"execution_count": 3,
"id": "5a03086e-2813-4cb1-b12b-d00e7eeba122",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_start', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {}, 'data': {'input': 'Write me a 1 verse song about goldfish on the moon'}}\n",
"{'event': 'on_chat_model_stream', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'tags': [], 'metadata': {}, 'name': 'ChatAnthropic', 'data': {'chunk': AIMessageChunk(content='Here', id='run-08da631a-12a0-4f07-baee-fc9a175ad4ba')}}\n",
"{'event': 'on_chat_model_stream', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'tags': [], 'metadata': {}, 'name': 'ChatAnthropic', 'data': {'chunk': AIMessageChunk(content=\"'s\", id='run-08da631a-12a0-4f07-baee-fc9a175ad4ba')}}\n",
"{'event': 'on_chat_model_stream', 'run_id': '08da631a-12a0-4f07-baee-fc9a175ad4ba', 'tags': [], 'metadata': {}, 'name': 'ChatAnthropic', 'data': {'chunk': AIMessageChunk(content=' a', id='run-08da631a-12a0-4f07-baee-fc9a175ad4ba')}}\n",
"{'event': 'on_chat_model_start', 'data': {'input': 'Write me a 1 verse song about goldfish on the moon'}, 'name': 'ChatAnthropic', 'tags': [], 'run_id': '1d430164-52b1-4d00-8c00-b16460f7737e', 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-haiku-20240307', 'ls_model_type': 'chat', 'ls_temperature': None, 'ls_max_tokens': 1024}, 'parent_ids': []}\n",
"{'event': 'on_chat_model_stream', 'run_id': '1d430164-52b1-4d00-8c00-b16460f7737e', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-haiku-20240307', 'ls_model_type': 'chat', 'ls_temperature': None, 'ls_max_tokens': 1024}, 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={}, id='run-1d430164-52b1-4d00-8c00-b16460f7737e', usage_metadata={'input_tokens': 21, 'output_tokens': 2, 'total_tokens': 23, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})}, 'parent_ids': []}\n",
"{'event': 'on_chat_model_stream', 'run_id': '1d430164-52b1-4d00-8c00-b16460f7737e', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-haiku-20240307', 'ls_model_type': 'chat', 'ls_temperature': None, 'ls_max_tokens': 1024}, 'data': {'chunk': AIMessageChunk(content=\"Here's\", additional_kwargs={}, response_metadata={}, id='run-1d430164-52b1-4d00-8c00-b16460f7737e')}, 'parent_ids': []}\n",
"{'event': 'on_chat_model_stream', 'run_id': '1d430164-52b1-4d00-8c00-b16460f7737e', 'name': 'ChatAnthropic', 'tags': [], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-haiku-20240307', 'ls_model_type': 'chat', 'ls_temperature': None, 'ls_max_tokens': 1024}, 'data': {'chunk': AIMessageChunk(content=' a short one-verse song', additional_kwargs={}, response_metadata={}, id='run-1d430164-52b1-4d00-8c00-b16460f7737e')}, 'parent_ids': []}\n",
"...Truncated\n"
]
}
@@ -152,7 +150,7 @@
"idx = 0\n",
"\n",
"async for event in chat.astream_events(\n",
" \"Write me a 1 verse song about goldfish on the moon\", version=\"v1\"\n",
" \"Write me a 1 verse song about goldfish on the moon\"\n",
"):\n",
" idx += 1\n",
" if idx >= 5: # Truncate the output\n",
@@ -178,7 +176,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -21,7 +21,7 @@
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/lcel)\n",
"- [The Runnable interface](/docs/concepts/runnables/)\n",
"- [Chaining runnables](/docs/how_to/sequence/)\n",
"- [Binding runtime arguments](/docs/how_to/binding/)\n",
"\n",
@@ -62,6 +62,163 @@
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "9d25f63f-a048-42f3-ac2f-e20ba99cff16",
"metadata": {},
"source": [
"### Configuring fields on a chat model\n",
"\n",
"If using [init_chat_model](/docs/how_to/chat_models_universal_init/) to create a chat model, you can specify configurable fields in the constructor:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "92ba5e49-b2b4-432b-b8bc-b03de46dc2bb",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chat_models import init_chat_model\n",
"\n",
"llm = init_chat_model(\n",
" \"openai:gpt-4o-mini\",\n",
" # highlight-next-line\n",
" configurable_fields=(\"temperature\",),\n",
")"
]
},
{
"cell_type": "markdown",
"id": "61ef4976-9943-492b-9554-0d10e3d3ba76",
"metadata": {},
"source": [
"You can then set the parameter at runtime using `.with_config`:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "277e3232-9b77-4828-8082-b62f4d97127f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello! How can I assist you today?\n"
]
}
],
"source": [
"response = llm.with_config({\"temperature\": 0}).invoke(\"Hello\")\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "44c5fe89-f0a6-4ff0-b419-b927e51cc9fa",
"metadata": {},
"source": [
":::tip\n",
"\n",
"In addition to invocation parameters like temperature, configuring fields this way extends to clients and other attributes.\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "fed7e600-4d5e-4875-8d37-082ec926e66f",
"metadata": {},
"source": [
"#### Use with tools\n",
"\n",
"This method is applicable when [binding tools](/docs/concepts/tool_calling/) as well:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "61a67769-4a15-49e2-a945-1f4e7ef19d8c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'get_weather',\n",
" 'args': {'location': 'San Francisco'},\n",
" 'id': 'call_B93EttzlGyYUhzbIIiMcl3bE',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_weather(location: str):\n",
" \"\"\"Get the weather.\"\"\"\n",
" return \"It's sunny.\"\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([get_weather])\n",
"response = llm_with_tools.with_config({\"temperature\": 0}).invoke(\n",
" \"What's the weather in SF?\"\n",
")\n",
"response.tool_calls"
]
},
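To close the loop, the returned tool calls can be executed directly — a hedged addition, not part of the original cell:
```python
# Each ToolCall can be passed straight to the tool; the result is a ToolMessage
for tool_call in response.tool_calls:
    print(get_weather.invoke(tool_call))
```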
{
"cell_type": "markdown",
"id": "b71c7bf5-f351-4b90-ae86-1100d2dcdfaa",
"metadata": {},
"source": [
"In addition to `.with_config`, we can now include the parameter when passing a configuration directly. See example below, where we allow the underlying model temperature to be configurable inside of a [langgraph agent](/docs/tutorials/agents/):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9bb36a46-7b67-4f11-b043-771f3005f493",
"metadata": {},
"outputs": [],
"source": [
"! pip install --upgrade langgraph"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "093d1c7d-1a64-4e6a-849f-075526b9b3ca",
"metadata": {},
"outputs": [],
"source": [
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"agent = create_react_agent(llm, [get_weather])\n",
"\n",
"response = agent.invoke(\n",
" {\"messages\": \"What's the weather in Boston?\"},\n",
" {\"configurable\": {\"temperature\": 0}},\n",
")"
]
},
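The agent returns a state dict whose `"messages"` list ends with the final answer — a minimal way to inspect it, assuming the usual `create_react_agent` output shape:
```python
# The last message holds the agent's final answer
print(response["messages"][-1].content)
```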
{
"cell_type": "markdown",
"id": "9dc5be03-528f-4532-8cb0-1f149dddedc9",
"metadata": {},
"source": [
"### Configuring fields on arbitrary Runnables\n",
"\n",
"You can also use the `.configurable_fields` method on arbitrary [Runnables](/docs/concepts/runnables/), as shown below:"
]
},
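As a preview of the pattern, a sketch along these lines (the field id and prompt are illustrative):
```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="The temperature of the LLM",
    )
)
# Override the field at runtime via the "configurable" key
model.with_config(configurable={"llm_temperature": 0.9}).invoke("pick a random number")
```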
{
"cell_type": "code",
"execution_count": 2,
@@ -604,7 +761,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -22,7 +22,7 @@
"2. LangChain [Runnables](/docs/concepts/runnables);\n",
"3. By sub-classing from [BaseTool](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.base.BaseTool.html) -- This is the most flexible method, it provides the largest degree of control, at the expense of more effort and code.\n",
"\n",
"Creating tools from functions may be sufficient for most use cases, and can be done via a simple [@tool decorator](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.tool.html#langchain_core.tools.tool). If more configuration is needed-- e.g., specification of both sync and async implementations-- one can also use the [StructuredTool.from_function](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html#langchain_core.tools.structured.StructuredTool.from_function) class method.\n",
"Creating tools from functions may be sufficient for most use cases, and can be done via a simple [@tool decorator](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.convert.tool.html). If more configuration is needed-- e.g., specification of both sync and async implementations-- one can also use the [StructuredTool.from_function](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.structured.StructuredTool.html#langchain_core.tools.structured.StructuredTool.from_function) class method.\n",
"\n",
"In this guide we provide an overview of these methods.\n",
"\n",
@@ -492,7 +492,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": null,
"id": "1dad8f8e",
"metadata": {
"execution": {
@@ -504,13 +504,14 @@
},
"outputs": [],
"source": [
"from typing import Optional, Type\n",
"from typing import Optional\n",
"\n",
"from langchain_core.callbacks import (\n",
" AsyncCallbackManagerForToolRun,\n",
" CallbackManagerForToolRun,\n",
")\n",
"from langchain_core.tools import BaseTool\n",
"from langchain_core.tools.base import ArgsSchema\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
@@ -524,7 +525,7 @@
"class CustomCalculatorTool(BaseTool):\n",
" name: str = \"Calculator\"\n",
" description: str = \"useful for when you need to answer questions about math\"\n",
" args_schema: Type[BaseModel] = CalculatorInput\n",
" args_schema: Optional[ArgsSchema] = CalculatorInput\n",
" return_direct: bool = True\n",
"\n",
" def _run(\n",

View File

@@ -551,7 +551,7 @@
"\n",
"While a parser encapsulates the logic needed to parse binary data into documents, *blob loaders* encapsulate the logic that's necessary to load blobs from a given storage location.\n",
"\n",
"A the moment, `LangChain` only supports `FileSystemBlobLoader`.\n",
"At the moment, `LangChain` only supports `FileSystemBlobLoader`.\n",
"\n",
"You can use the `FileSystemBlobLoader` to load blobs and then use the parser to parse them."
]
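A minimal sketch (the path and glob are illustrative):
```python
from langchain_community.document_loaders.blob_loaders import FileSystemBlobLoader

loader = FileSystemBlobLoader(path=".", glob="*.html")
for blob in loader.yield_blobs():
    print(blob.path)  # each blob can then be handed to a parser
```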

View File

@@ -11,7 +11,7 @@
"\n",
"This covers how to load `HTML` documents into a LangChain [Document](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html#langchain_core.documents.base.Document) objects that we can use downstream.\n",
"\n",
"Parsing HTML files often requires specialized tools. Here we demonstrate parsing via [Unstructured](https://unstructured-io.github.io/unstructured/) and [BeautifulSoup4](https://beautiful-soup-4.readthedocs.io/en/latest/), which can be installed via pip. Head over to the integrations page to find integrations with additional services, such as [Azure AI Document Intelligence](/docs/integrations/document_loaders/azure_document_intelligence) or [FireCrawl](/docs/integrations/document_loaders/firecrawl).\n",
"Parsing HTML files often requires specialized tools. Here we demonstrate parsing via [Unstructured](https://docs.unstructured.io) and [BeautifulSoup4](https://beautiful-soup-4.readthedocs.io/en/latest/), which can be installed via pip. Head over to the integrations page to find integrations with additional services, such as [Azure AI Document Intelligence](/docs/integrations/document_loaders/azure_document_intelligence) or [FireCrawl](/docs/integrations/document_loaders/firecrawl).\n",
"\n",
"## Loading HTML with Unstructured"
]
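For example — a minimal sketch, assuming a local `example.html` file and the `unstructured` package installed:
```python
from langchain_community.document_loaders import UnstructuredHTMLLoader

loader = UnstructuredHTMLLoader("example.html")  # hypothetical file path
docs = loader.load()
print(docs[0].page_content[:100])
```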

View File

@@ -68,7 +68,7 @@
"\n",
"### Formatting prompts\n",
"\n",
"Some providers have [chat model](/docs/concepts/chat_models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailed for your specific model.\n",
"Some providers have [chat model](/docs/concepts/chat_models) wrappers that takes care of formatting your input prompt for the specific local model you're using. However, if you are prompting local models with a [text-in/text-out LLM](/docs/concepts/text_llms) wrapper, you may need to use a prompt tailored for your specific model.\n",
"\n",
"This can [require the inclusion of special tokens](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). [Here's an example for LLaMA 2](https://smith.langchain.com/hub/rlm/rag-prompt-llama).\n",
"\n",

View File

@@ -329,7 +329,7 @@
"id": "fc6059fd-0df7-4b6f-a84c-b5874e983638",
"metadata": {},
"source": [
"We can also pass in an arbitrary function or a runnable. This function/runnable should take in a the graph state and output a list of messages.\n",
"We can also pass in an arbitrary function or a runnable. This function/runnable should take in a graph state and output a list of messages.\n",
"We can do all types of arbitrary formatting of messages here. In this case, let's add a SystemMessage to the start of the list of messages and append another user message at the end."
]
},
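A minimal sketch of such a function (the state shape with a `"messages"` key follows the usual LangGraph convention; the message contents are illustrative):
```python
from langchain_core.messages import HumanMessage, SystemMessage


def format_messages(state):
    # Prepend a system message and append an extra user message
    return (
        [SystemMessage(content="You are a helpful assistant.")]
        + state["messages"]
        + [HumanMessage(content="Answer concisely.")]
    )
```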

View File

@@ -599,7 +599,7 @@
"\n",
"The `HTMLSemanticPreservingSplitter` is designed to split HTML content into manageable chunks while preserving the semantic structure of important elements like tables, lists, and other HTML components. This ensures that such elements are not split across chunks, causing loss of contextual relevancy such as table headers, list headers etc.\n",
"\n",
"This splitter is designed at its heart, to create contextually relevant chunks. General Recursive splitting with `HTMLHeaderTextSplitter` can cause tables, lists and other structered elements to be split in the middle, losing signifcant context and creating bad chunks.\n",
"This splitter is designed at its heart, to create contextually relevant chunks. General Recursive splitting with `HTMLHeaderTextSplitter` can cause tables, lists and other structured elements to be split in the middle, losing significant context and creating bad chunks.\n",
"\n",
"The `HTMLSemanticPreservingSplitter` is essential for splitting HTML content that includes structured elements like tables and lists, especially when it's critical to preserve these elements intact. Additionally, its ability to define custom handlers for specific HTML tags makes it a versatile tool for processing complex HTML documents.\n",
"\n",
@@ -832,7 +832,7 @@
"#### Explanation\n",
"In this example, we defined a custom handler for `iframe` tags that converts them into Markdown-like links. When the splitter processes the HTML content, it uses this custom handler to transform the `iframe` tags while preserving other elements like tables and lists. The resulting `Document` objects show how the iframe is handled according to the custom logic you provided.\n",
"\n",
"**Important**: When presvering items such as links, you should be mindful not to include `.` in your seperators, or leave seperators blank. `RecursiveCharacterTextSplitter` splits on full stop, which will cut links in half. Ensure you provide a seperator list with `. ` instead."
"**Important**: When presvering items such as links, you should be mindful not to include `.` in your separators, or leave separators blank. `RecursiveCharacterTextSplitter` splits on full stop, which will cut links in half. Ensure you provide a separator list with `. ` instead."
]
},
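For instance, a hedged sketch of passing safe separators — the constructor arguments shown are assumptions, so verify them against the `HTMLSemanticPreservingSplitter` API reference:
```python
from langchain_text_splitters import HTMLSemanticPreservingSplitter

splitter = HTMLSemanticPreservingSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")],
    max_chunk_size=500,
    # Use ". " (with a space) rather than "." so links are not cut in half
    separators=["\n\n", "\n", ". "],
)
```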
{

View File

@@ -512,44 +512,6 @@
"db.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "fcdd8432-07a4-4609-8214-b1591dd94950",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"SELECT DISTINCT Genre.Name\n",
"FROM Genre\n",
"JOIN Track ON Genre.GenreId = Track.GenreId\n",
"JOIN Album ON Track.AlbumId = Album.AlbumId\n",
"JOIN Artist ON Album.ArtistId = Artist.ArtistId\n",
"WHERE Artist.Name = 'Elenis Moriset'\n"
]
},
{
"data": {
"text/plain": [
"''"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Without retrieval\n",
"query = query_chain.invoke(\n",
" {\"question\": \"What are all the genres of elenis moriset songs\", \"proper_nouns\": \"\"}\n",
")\n",
"print(query)\n",
"db.run(query)"
]
},
{
"cell_type": "code",
"execution_count": 14,

View File

@@ -720,22 +720,13 @@
},
{
"cell_type": "code",
"execution_count": 14,
"id": "c00df46e-7f6b-4e06-8abf-801898c8d57f",
"execution_count": 13,
"id": "bab5f910-fee0-4a94-9f05-b469006333b8",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future.\n",
" warn_beta(\n"
]
}
],
"outputs": [],
"source": [
"events = []\n",
"async for event in model.astream_events(\"hello\", version=\"v2\"):\n",
"async for event in model.astream_events(\"hello\"):\n",
" events.append(event)"
]
},
@@ -746,15 +737,7 @@
"source": [
":::note\n",
"\n",
"Hey what's that funny version=\"v2\" parameter in the API?! 😾\n",
"\n",
"This is a **beta API**, and we're almost certainly going to make some changes to it (in fact, we already have!)\n",
"\n",
"This version parameter will allow us to minimize such breaking changes to your code. \n",
"\n",
"In short, we are annoying you now, so we don't have to annoy you later.\n",
"\n",
"`v2` is only available for langchain-core>=0.2.0.\n",
"For `langchain-core<0.3.37`, set the `version` kwarg explicitly (e.g., `model.astream_events(\"hello\", version=\"v2\")`).\n",
"\n",
":::"
]
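Concretely, on older cores the call from the previous cell becomes:
```python
# Only needed for langchain-core < 0.3.37
events = [event async for event in model.astream_events("hello", version="v2")]
```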
@@ -769,8 +752,8 @@
},
{
"cell_type": "code",
"execution_count": 15,
"id": "ce31b525-f47d-4828-85a7-912ce9f2e79b",
"execution_count": 14,
"id": "c4a2f5dc-2c75-4be4-a8ca-b5b84a3cdbef",
"metadata": {},
"outputs": [
{
@@ -780,23 +763,38 @@
" 'data': {'input': 'hello'},\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3',\n",
" 'metadata': {}},\n",
" 'run_id': 'b18d016d-8b9b-49e7-a555-44db498fcf66',\n",
" 'metadata': {'ls_provider': 'anthropic',\n",
" 'ls_model_name': 'claude-3-sonnet-20240229',\n",
" 'ls_model_type': 'chat',\n",
" 'ls_temperature': 0.0,\n",
" 'ls_max_tokens': 1024},\n",
" 'parent_ids': []},\n",
" {'event': 'on_chat_model_stream',\n",
" 'data': {'chunk': AIMessageChunk(content='Hello', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')},\n",
" 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3',\n",
" 'run_id': 'b18d016d-8b9b-49e7-a555-44db498fcf66',\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'metadata': {}},\n",
" 'metadata': {'ls_provider': 'anthropic',\n",
" 'ls_model_name': 'claude-3-sonnet-20240229',\n",
" 'ls_model_type': 'chat',\n",
" 'ls_temperature': 0.0,\n",
" 'ls_max_tokens': 1024},\n",
" 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={}, id='run-b18d016d-8b9b-49e7-a555-44db498fcf66', usage_metadata={'input_tokens': 8, 'output_tokens': 4, 'total_tokens': 12, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})},\n",
" 'parent_ids': []},\n",
" {'event': 'on_chat_model_stream',\n",
" 'data': {'chunk': AIMessageChunk(content='!', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')},\n",
" 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3',\n",
" 'run_id': 'b18d016d-8b9b-49e7-a555-44db498fcf66',\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'metadata': {}}]"
" 'metadata': {'ls_provider': 'anthropic',\n",
" 'ls_model_name': 'claude-3-sonnet-20240229',\n",
" 'ls_model_type': 'chat',\n",
" 'ls_temperature': 0.0,\n",
" 'ls_max_tokens': 1024},\n",
" 'data': {'chunk': AIMessageChunk(content='Hello! How can', additional_kwargs={}, response_metadata={}, id='run-b18d016d-8b9b-49e7-a555-44db498fcf66')},\n",
" 'parent_ids': []}]"
]
},
"execution_count": 15,
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
@@ -807,7 +805,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 15,
"id": "76cfe826-ee63-4310-ad48-55a95eb3b9d6",
"metadata": {},
"outputs": [
@@ -815,20 +813,30 @@
"data": {
"text/plain": [
"[{'event': 'on_chat_model_stream',\n",
" 'data': {'chunk': AIMessageChunk(content='?', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')},\n",
" 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3',\n",
" 'run_id': 'b18d016d-8b9b-49e7-a555-44db498fcf66',\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'metadata': {}},\n",
" 'metadata': {'ls_provider': 'anthropic',\n",
" 'ls_model_name': 'claude-3-sonnet-20240229',\n",
" 'ls_model_type': 'chat',\n",
" 'ls_temperature': 0.0,\n",
" 'ls_max_tokens': 1024},\n",
" 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-b18d016d-8b9b-49e7-a555-44db498fcf66', usage_metadata={'input_tokens': 0, 'output_tokens': 12, 'total_tokens': 12, 'input_token_details': {}})},\n",
" 'parent_ids': []},\n",
" {'event': 'on_chat_model_end',\n",
" 'data': {'output': AIMessageChunk(content='Hello! How can I assist you today?', id='run-a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3')},\n",
" 'run_id': 'a81e4c0f-fc36-4d33-93bc-1ac25b9bb2c3',\n",
" 'data': {'output': AIMessageChunk(content='Hello! How can I assist you today?', additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-b18d016d-8b9b-49e7-a555-44db498fcf66', usage_metadata={'input_tokens': 8, 'output_tokens': 16, 'total_tokens': 24, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})},\n",
" 'run_id': 'b18d016d-8b9b-49e7-a555-44db498fcf66',\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'metadata': {}}]"
" 'metadata': {'ls_provider': 'anthropic',\n",
" 'ls_model_name': 'claude-3-sonnet-20240229',\n",
" 'ls_model_type': 'chat',\n",
" 'ls_temperature': 0.0,\n",
" 'ls_max_tokens': 1024},\n",
" 'parent_ids': []}]"
]
},
"execution_count": 16,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -849,7 +857,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 16,
"id": "4328c56c-a303-427b-b1f2-f354e9af555c",
"metadata": {},
"outputs": [],
@@ -864,7 +872,6 @@
" \"output a list of the countries france, spain and japan and their populations in JSON format. \"\n",
" 'Use a dict with an outer key of \"countries\" which contains a list of countries. '\n",
" \"Each country should have the key `name` and `population`\",\n",
" version=\"v2\",\n",
" )\n",
"]"
]
@@ -947,29 +954,26 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Chat model chunk: ''\n",
"Chat model chunk: '{'\n",
"Parser chunk: {}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'countries'\n",
"Chat model chunk: '\":'\n",
"Chat model chunk: ' ['\n",
"Chat model chunk: '\\n \"countries'\n",
"Chat model chunk: '\": [\\n '\n",
"Parser chunk: {'countries': []}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '{'\n",
"Chat model chunk: '{\\n \"'\n",
"Parser chunk: {'countries': [{}]}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'name'\n",
"Chat model chunk: '\":'\n",
"Chat model chunk: ' \"'\n",
"Parser chunk: {'countries': [{'name': ''}]}\n",
"Chat model chunk: 'France'\n",
"Chat model chunk: 'name\": \"France'\n",
"Parser chunk: {'countries': [{'name': 'France'}]}\n",
"Chat model chunk: '\",'\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'population'\n",
"Chat model chunk: '\",\\n \"'\n",
"Chat model chunk: 'population\": 67'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67}]}\n",
"Chat model chunk: '413'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413}]}\n",
"Chat model chunk: '000\\n },'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}]}\n",
"Chat model chunk: '\\n {'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}, {}]}\n",
"Chat model chunk: '\\n \"name\":'\n",
"...\n"
]
}
@@ -981,7 +985,6 @@
" \"output a list of the countries france, spain and japan and their populations in JSON format. \"\n",
" 'Use a dict with an outer key of \"countries\" which contains a list of countries. '\n",
" \"Each country should have the key `name` and `population`\",\n",
" version=\"v2\",\n",
"):\n",
" kind = event[\"event\"]\n",
" if kind == \"on_chat_model_stream\":\n",
@@ -1023,24 +1026,24 @@
{
"cell_type": "code",
"execution_count": 20,
"id": "4f0b581b-be63-4663-baba-c6d2b625cdf9",
"id": "42145735-25e8-4e67-b081-b0c15ea45dd1",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_parser_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'my_parser', 'tags': ['seq:step:2'], 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': []}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': ''}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France'}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {'name': ''}]}}, 'run_id': 'e058d750-f2c2-40f6-aa61-10f84cd671a9', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}}\n",
"{'event': 'on_parser_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'my_parser', 'tags': ['seq:step:2'], 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'metadata': {}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': []}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France'}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain'}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"{'event': 'on_parser_stream', 'run_id': '37ee9e85-481c-415e-863b-c9e132d24948', 'name': 'my_parser', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47}]}}, 'parent_ids': ['5a0bc625-09fd-4bdf-9932-54909a9a8c29']}\n",
"...\n"
]
}
@@ -1055,7 +1058,6 @@
" \"output a list of the countries france, spain and japan and their populations in JSON format. \"\n",
" 'Use a dict with an outer key of \"countries\" which contains a list of countries. '\n",
" \"Each country should have the key `name` and `population`\",\n",
" version=\"v2\",\n",
" include_names=[\"my_parser\"],\n",
"):\n",
" print(event)\n",
@@ -1077,24 +1079,24 @@
{
"cell_type": "code",
"execution_count": 21,
"id": "096cd904-72f0-4ebe-a8b7-d0e730faea7f",
"id": "2a7d8fe0-47ca-4ab4-9c10-b34e3f6106ee",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'model', 'tags': ['seq:step:1'], 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\"', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='countries', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\":', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' [', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n ', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\"', id='run-db246792-2a91-4eb3-a14b-29658947065d')}, 'run_id': 'db246792-2a91-4eb3-a14b-29658947065d', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {}}\n",
"{'event': 'on_chat_model_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'model', 'tags': ['seq:step:1'], 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c', usage_metadata={'input_tokens': 56, 'output_tokens': 1, 'total_tokens': 57, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n \"countries', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\": [\\n ', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{\\n \"', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='name\": \"France', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\",\\n \"', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='population\": 67', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='413', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='000\\n },', additional_kwargs={}, response_metadata={}, id='run-156c3e40-82fb-49ff-8e41-9e998061be8c')}, 'run_id': '156c3e40-82fb-49ff-8e41-9e998061be8c', 'name': 'model', 'tags': ['seq:step:1'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['7b927055-bc1b-4b50-a34c-10d3cfcb3899']}\n",
"...\n"
]
}
@@ -1107,7 +1109,6 @@
"max_events = 0\n",
"async for event in chain.astream_events(\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`',\n",
" version=\"v2\",\n",
" include_types=[\"chat_model\"],\n",
"):\n",
" print(event)\n",
@@ -1136,24 +1137,24 @@
{
"cell_type": "code",
"execution_count": 22,
"id": "26bac0d2-76d9-446e-b346-82790236b88d",
"id": "c237c218-5fd6-4146-ac68-020a038cf582",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'RunnableSequence', 'tags': ['my_chain'], 'run_id': 'fd68dd64-7a4d-4bdb-a0c2-ee592db0d024', 'metadata': {}}\n",
"{'event': 'on_chat_model_start', 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}, 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_parser_start', 'data': {}, 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'run_id': 'afde30b9-beac-4b36-b4c7-dbbe423ddcdb', 'metadata': {}}\n",
"{'event': 'on_parser_stream', 'data': {'chunk': {}}, 'run_id': 'afde30b9-beac-4b36-b4c7-dbbe423ddcdb', 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chain_stream', 'data': {'chunk': {}}, 'run_id': 'fd68dd64-7a4d-4bdb-a0c2-ee592db0d024', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n ', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\"', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='countries', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\":', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' [', id='run-efd3c8af-4be5-4f6c-9327-e3f9865dd1cd')}, 'run_id': 'efd3c8af-4be5-4f6c-9327-e3f9865dd1cd', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}}\n",
"{'event': 'on_chain_start', 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}, 'name': 'RunnableSequence', 'tags': ['my_chain'], 'run_id': '58d1302e-36ce-4df7-a3cb-47cb73d57e44', 'metadata': {}, 'parent_ids': []}\n",
"{'event': 'on_chat_model_start', 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`', additional_kwargs={}, response_metadata={})]]}}, 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'run_id': '8222e8a1-d978-4f30-87fc-b2dba838774b', 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={}, id='run-8222e8a1-d978-4f30-87fc-b2dba838774b', usage_metadata={'input_tokens': 56, 'output_tokens': 1, 'total_tokens': 57, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})}, 'run_id': '8222e8a1-d978-4f30-87fc-b2dba838774b', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_parser_start', 'data': {}, 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'run_id': '75604c84-e1e6-494a-8b2a-950f45d932e8', 'metadata': {}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='{', additional_kwargs={}, response_metadata={}, id='run-8222e8a1-d978-4f30-87fc-b2dba838774b')}, 'run_id': '8222e8a1-d978-4f30-87fc-b2dba838774b', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_parser_stream', 'run_id': '75604c84-e1e6-494a-8b2a-950f45d932e8', 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {'chunk': {}}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_chain_stream', 'run_id': '58d1302e-36ce-4df7-a3cb-47cb73d57e44', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}, 'data': {'chunk': {}}, 'parent_ids': []}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\\n \"countries', additional_kwargs={}, response_metadata={}, id='run-8222e8a1-d978-4f30-87fc-b2dba838774b')}, 'run_id': '8222e8a1-d978-4f30-87fc-b2dba838774b', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='\": [\\n ', additional_kwargs={}, response_metadata={}, id='run-8222e8a1-d978-4f30-87fc-b2dba838774b')}, 'run_id': '8222e8a1-d978-4f30-87fc-b2dba838774b', 'name': 'ChatAnthropic', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-sonnet-20240229', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_parser_stream', 'run_id': '75604c84-e1e6-494a-8b2a-950f45d932e8', 'name': 'JsonOutputParser', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {'chunk': {'countries': []}}, 'parent_ids': ['58d1302e-36ce-4df7-a3cb-47cb73d57e44']}\n",
"{'event': 'on_chain_stream', 'run_id': '58d1302e-36ce-4df7-a3cb-47cb73d57e44', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}, 'data': {'chunk': {'countries': []}}, 'parent_ids': []}\n",
"...\n"
]
}
@@ -1164,7 +1165,6 @@
"max_events = 0\n",
"async for event in chain.astream_events(\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`',\n",
" version=\"v2\",\n",
" include_tags=[\"my_chain\"],\n",
"):\n",
" print(event)\n",
@@ -1263,40 +1263,40 @@
{
"cell_type": "code",
"execution_count": 25,
"id": "b08215cd-bffa-4e76-aaf3-c52ee34f152c",
"id": "2c83701e-b801-429f-b2ac-47ed44d2d11a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Chat model chunk: ''\n",
"Chat model chunk: '{'\n",
"Parser chunk: {}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'countries'\n",
"Chat model chunk: '\":'\n",
"Chat model chunk: ' ['\n",
"Chat model chunk: '\\n \"countries'\n",
"Chat model chunk: '\": [\\n '\n",
"Parser chunk: {'countries': []}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '{'\n",
"Chat model chunk: '{\\n \"'\n",
"Parser chunk: {'countries': [{}]}\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'name'\n",
"Chat model chunk: '\":'\n",
"Chat model chunk: ' \"'\n",
"Parser chunk: {'countries': [{'name': ''}]}\n",
"Chat model chunk: 'France'\n",
"Chat model chunk: 'name\": \"France'\n",
"Parser chunk: {'countries': [{'name': 'France'}]}\n",
"Chat model chunk: '\",'\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: '\"'\n",
"Chat model chunk: 'population'\n",
"Chat model chunk: '\":'\n",
"Chat model chunk: ' '\n",
"Chat model chunk: '67'\n",
"Chat model chunk: '\",\\n \"'\n",
"Chat model chunk: 'population\": 67'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67}]}\n",
"Chat model chunk: '413'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413}]}\n",
"Chat model chunk: '000\\n },'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}]}\n",
"Chat model chunk: '\\n {'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}, {}]}\n",
"Chat model chunk: '\\n \"name\":'\n",
"Chat model chunk: ' \"Spain\",'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain'}]}\n",
"Chat model chunk: '\\n \"population\":'\n",
"Chat model chunk: ' 47'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47}]}\n",
"Chat model chunk: '351'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': 67413000}, {'name': 'Spain', 'population': 47351}]}\n",
"...\n"
]
}
@@ -1308,7 +1308,6 @@
" \"output a list of the countries france, spain and japan and their populations in JSON format. \"\n",
" 'Use a dict with an outer key of \"countries\" which contains a list of countries. '\n",
" \"Each country should have the key `name` and `population`\",\n",
" version=\"v2\",\n",
"):\n",
" kind = event[\"event\"]\n",
" if kind == \"on_chat_model_stream\":\n",
@@ -1376,7 +1375,7 @@
" return reverse_word.invoke(word)\n",
"\n",
"\n",
"async for event in bad_tool.astream_events(\"hello\", version=\"v2\"):\n",
"async for event in bad_tool.astream_events(\"hello\"):\n",
" print(event)"
]
},
@@ -1412,7 +1411,7 @@
" return reverse_word.invoke(word, {\"callbacks\": callbacks})\n",
"\n",
"\n",
"async for event in correct_tool.astream_events(\"hello\", version=\"v2\"):\n",
"async for event in correct_tool.astream_events(\"hello\"):\n",
" print(event)"
]
},
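Assembled as a standalone sketch, mirroring the cell above (`reverse_word` is assumed to be the `RunnableLambda` defined earlier in the guide):
```python
from langchain_core.runnables import RunnableLambda
from langchain_core.tools import tool

reverse_word = RunnableLambda(lambda word: word[::-1])


@tool
def correct_tool(word: str, callbacks):
    """A tool that correctly propagates callbacks to its child runnable."""
    return reverse_word.invoke(word, {"callbacks": callbacks})
```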
@@ -1454,7 +1453,7 @@
"\n",
"await reverse_and_double.ainvoke(\"1234\")\n",
"\n",
"async for event in reverse_and_double.astream_events(\"1234\", version=\"v2\"):\n",
"async for event in reverse_and_double.astream_events(\"1234\"):\n",
" print(event)"
]
},
@@ -1495,7 +1494,7 @@
"\n",
"await reverse_and_double.ainvoke(\"1234\")\n",
"\n",
"async for event in reverse_and_double.astream_events(\"1234\", version=\"v2\"):\n",
"async for event in reverse_and_double.astream_events(\"1234\"):\n",
" print(event)"
]
},
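For async code on Python 3.11+, callbacks propagate automatically through `ainvoke`, so no explicit plumbing is needed — a sketch under that assumption:
```python
from langchain_core.runnables import RunnableLambda

reverse_word = RunnableLambda(lambda word: word[::-1])


async def reverse_and_double(word: str):
    # On Python >= 3.11, the parent run's callbacks reach this child call automatically
    return await reverse_word.ainvoke(word) * 2


reverse_and_double = RunnableLambda(reverse_and_double)
```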
@@ -1528,7 +1527,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -87,13 +87,6 @@
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"Failed to batch ingest runs: LangSmithRateLimitError('Rate limit exceeded for https://api.smith.langchain.com/runs/batch. HTTPError(\\'429 Client Error: Too Many Requests for url: https://api.smith.langchain.com/runs/batch\\', \\'{\"detail\":\"Monthly unique traces usage limit exceeded\"}\\')')\n"
]
}
],
"source": [

View File

@@ -131,13 +131,11 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"stream = special_summarization_tool.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"stream = special_summarization_tool.astream_events({\"long_text\": LONG_TEXT})\n",
"\n",
"async for event in stream:\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
@@ -156,7 +154,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
@@ -190,21 +188,19 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_end', 'data': {'output': AIMessage(content='Bee defies physics; Barry chooses outfit for graduation day.', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-d23abc80-0dce-4f74-9d7b-fb98ca4f2a9e', usage_metadata={'input_tokens': 182, 'output_tokens': 16, 'total_tokens': 198}), 'input': {'messages': [[HumanMessage(content=\"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n\\nNARRATOR:\\n(Black screen with text; The sound of buzzing bees can be heard)\\nAccording to all known laws of aviation, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.\\nBARRY BENSON:\\n(Barry is picking out a shirt)\\nYellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's shake it up a little.\\nJANET BENSON:\\nBarry! Breakfast is ready!\\nBARRY:\\nComing! Hang on a second.\\n\")]]}}, 'run_id': 'd23abc80-0dce-4f74-9d7b-fb98ca4f2a9e', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['f25c41fe-8972-4893-bc40-cecf3922c1fa']}\n"
"{'event': 'on_chat_model_end', 'data': {'output': AIMessage(content='Bee defies physics; Barry chooses outfit for graduation day.', additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-337ac14e-8da8-4c6d-a69f-1573f93b651e', usage_metadata={'input_tokens': 182, 'output_tokens': 19, 'total_tokens': 201, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}), 'input': {'messages': [[HumanMessage(content=\"You are an expert writer. Summarize the following text in 10 words or less:\\n\\n\\nNARRATOR:\\n(Black screen with text; The sound of buzzing bees can be heard)\\nAccording to all known laws of aviation, there is no way a bee should be able to fly. Its wings are too small to get its fat little body off the ground. The bee, of course, flies anyway because bees don't care what humans think is impossible.\\nBARRY BENSON:\\n(Barry is picking out a shirt)\\nYellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's shake it up a little.\\nJANET BENSON:\\nBarry! Breakfast is ready!\\nBARRY:\\nComing! Hang on a second.\\n\", additional_kwargs={}, response_metadata={})]]}}, 'run_id': '337ac14e-8da8-4c6d-a69f-1573f93b651e', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['225beaa6-af73-4c91-b2d3-1afbbb88d53e']}\n"
]
}
],
"source": [
"stream = special_summarization_tool_with_config.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"stream = special_summarization_tool_with_config.astream_events({\"long_text\": LONG_TEXT})\n",
"\n",
"async for event in stream:\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
@@ -222,33 +218,24 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42', usage_metadata={'input_tokens': 182, 'output_tokens': 0, 'total_tokens': 182})}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Bee', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' def', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='ies physics', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=';', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Barry', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' cho', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='oses outfit', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' for', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' graduation', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' day', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='.', id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42')}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-f24ab147-0b82-4e63-810a-b12bd8d1fb42', usage_metadata={'input_tokens': 0, 'output_tokens': 16, 'total_tokens': 16})}, 'run_id': 'f24ab147-0b82-4e63-810a-b12bd8d1fb42', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['385f3612-417c-4a70-aae0-cce3a5ba6fb6']}\n"
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', usage_metadata={'input_tokens': 182, 'output_tokens': 2, 'total_tokens': 184, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='Bee', additional_kwargs={}, response_metadata={}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5')}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' defies physics;', additional_kwargs={}, response_metadata={}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5')}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' Barry chooses outfit for', additional_kwargs={}, response_metadata={}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5')}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content=' graduation day.', additional_kwargs={}, response_metadata={}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5')}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n",
"{'event': 'on_chat_model_stream', 'data': {'chunk': AIMessageChunk(content='', additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None}, id='run-f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', usage_metadata={'input_tokens': 0, 'output_tokens': 17, 'total_tokens': 17, 'input_token_details': {}})}, 'run_id': 'f5e049f7-4e98-4236-87ab-8cd1ce85a2d5', 'name': 'ChatAnthropic', 'tags': ['seq:step:2'], 'metadata': {'ls_provider': 'anthropic', 'ls_model_name': 'claude-3-5-sonnet-20240620', 'ls_model_type': 'chat', 'ls_temperature': 0.0, 'ls_max_tokens': 1024}, 'parent_ids': ['51858043-b301-4b76-8abb-56218e405283']}\n"
]
}
],
"source": [
"stream = special_summarization_tool_with_config.astream_events(\n",
" {\"long_text\": LONG_TEXT}, version=\"v2\"\n",
")\n",
"stream = special_summarization_tool_with_config.astream_events({\"long_text\": LONG_TEXT})\n",
"\n",
"async for event in stream:\n",
" if event[\"event\"] == \"on_chat_model_stream\":\n",
@@ -290,7 +277,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -161,7 +161,7 @@
"source": [
"## Adding human approval\n",
"\n",
"Let's add a step in the chain that will ask a person to approve or reject the tall call request.\n",
"Let's add a step in the chain that will ask a person to approve or reject the tool call request.\n",
"\n",
"On rejection, the step will raise an exception which will stop execution of the rest of the chain."
]
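For reference, a minimal sketch of such an approval step (the `NotApproved` exception and `human_approval` helper are illustrative names; `llm_with_tools` and `call_tools` are assumed from earlier cells in the guide):

```python
import json

class NotApproved(Exception):
    """Raised when a human rejects the proposed tool calls."""

def human_approval(msg):
    """Pass the message through only if a human approves its tool calls."""
    tool_strs = "\n\n".join(
        json.dumps(tool_call, indent=2) for tool_call in msg.tool_calls
    )
    resp = input(
        f"Do you approve of the following tool invocations?\n\n{tool_strs}\n\n(yes/no): "
    )
    if resp.lower() not in ("yes", "y"):
        # Raising here halts execution of the rest of the chain, per the note above.
        raise NotApproved(f"Tool invocations not approved:\n\n{tool_strs}")
    return msg

# chain = llm_with_tools | human_approval | call_tools
```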

View File

@@ -30,7 +30,8 @@
"1. The resulting chat history should be **valid**. Usually this means that the following properties should be satisfied:\n",
" - The chat history **starts** with either (1) a `HumanMessage` or (2) a [SystemMessage](/docs/concepts/messages/#systemmessage) followed by a `HumanMessage`.\n",
" - The chat history **ends** with either a `HumanMessage` or a `ToolMessage`.\n",
" - A `ToolMessage` can only appear after an `AIMessage` that involved a tool call. \n",
" - A `ToolMessage` can only appear after an `AIMessage` that involved a tool call.\n",
"\n",
" This can be achieved by setting `start_on=\"human\"` and `ends_on=(\"human\", \"tool\")`.\n",
"3. It includes recent messages and drops old messages in the chat history.\n",
" This can be achieved by setting `strategy=\"last\"`.\n",

View File

@@ -0,0 +1,206 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Abso\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatAbso\n",
"\n",
"This will help you getting started with ChatAbso [chat models](https://python.langchain.com/docs/concepts/chat_models/). For detailed documentation of all ChatAbso features and configurations head to the [API reference](https://python.langchain.com/api_reference/en/latest/chat_models/langchain_abso.chat_models.ChatAbso.html).\n",
"\n",
"- You can find the full documentation for the Abso router [here] (https://abso.ai)\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/abso) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatAbso](https://python.langchain.com/api_reference/en/latest/chat_models/langchain_abso.chat_models.ChatAbso.html) | [langchain-abso](https://python.langchain.com/api_reference/en/latest/abso_api_reference.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-abso?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-abso?style=flat-square&label=%20) |\n",
"\n",
"## Setup\n",
"To access ChatAbso models you'll need to create an OpenAI account, get an API key, and install the `langchain-abso` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"- TODO: Update with relevant info.\n",
"\n",
"Head to (TODO: link) to sign up to ChatAbso and generate an API key. Once you've done this set the ABSO_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"OPENAI_API_KEY\"):\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"Enter your OpenAI API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain ChatAbso integration lives in the `langchain-abso` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-abso"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_abso import ChatAbso\n",
"\n",
"llm = ChatAbso(fast_model=\"gpt-4o\", slow_model=\"o3-mini\")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatAbso features and configurations head to the API reference: https://python.langchain.com/api_reference/en/latest/chat_models/langchain_abso.chat_models.ChatAbso.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -102,6 +102,16 @@
"%pip install -qU langchain-anthropic"
]
},
{
"cell_type": "markdown",
"id": "fe4993ad-4a9b-4021-8ebd-f0fbbc739f49",
"metadata": {},
"source": [
":::info This guide requires ``langchain-anthropic>=0.3.10``\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
@@ -245,7 +255,7 @@
"source": [
"## Content blocks\n",
"\n",
"One key difference to note between Anthropic models and most others is that the contents of a single Anthropic AI message can either be a single string or a **list of content blocks**. For example when an Anthropic model invokes a tool, the tool invocation is part of the message content (as well as being exposed in the standardized `AIMessage.tool_calls`):"
"Content from a single Anthropic AI message can either be a single string or a **list of content blocks**. For example when an Anthropic model invokes a tool, the tool invocation is part of the message content (as well as being exposed in the standardized `AIMessage.tool_calls`):"
]
},
{
@@ -315,6 +325,430 @@
"ai_msg.tool_calls"
]
},
{
"cell_type": "markdown",
"id": "6e36d25c-f358-49e5-aefa-b99fbd3fec6b",
"metadata": {},
"source": [
"## Extended thinking\n",
"\n",
"Claude 3.7 Sonnet supports an [extended thinking](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking) feature, which will output the step-by-step reasoning process that led to its final answer.\n",
"\n",
"To use it, specify the `thinking` parameter when initializing `ChatAnthropic`. It can also be passed in as a kwarg during invocation.\n",
"\n",
"You will need to specify a token budget to use this feature. See usage example below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a34cf93b-8522-43a6-a3f3-8a189ddf54a7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[\n",
" {\n",
" \"signature\": \"ErUBCkYIARgCIkCx7bIPj35jGPHpoVOB2y5hvPF8MN4lVK75CYGftmVNlI4axz2+bBbSexofWsN1O/prwNv8yPXnIXQmwT6zrJsKEgwJzvks0yVRZtaGBScaDOm9xcpOxbuhku1zViIw9WDgil/KZL8DsqWrhVpC6TzM0RQNCcsHcmgmyxbgG9g8PR0eJGLxCcGoEw8zMQu1Kh1hQ1/03hZ2JCOgigpByR9aNPTwwpl64fQUe6WwIw==\",\n",
" \"thinking\": \"To find the cube root of 50.653, I need to find the value of $x$ such that $x^3 = 50.653$.\\n\\nI can try to estimate this first. \\n$3^3 = 27$\\n$4^3 = 64$\\n\\nSo the cube root of 50.653 will be somewhere between 3 and 4, but closer to 4.\\n\\nLet me try to compute this more precisely. I can use the cube root function:\\n\\ncube root of 50.653 = 50.653^(1/3)\\n\\nLet me calculate this:\\n50.653^(1/3) \\u2248 3.6998\\n\\nLet me verify:\\n3.6998^3 \\u2248 50.6533\\n\\nThat's very close to 50.653, so I'm confident that the cube root of 50.653 is approximately 3.6998.\\n\\nActually, let me compute this more precisely:\\n50.653^(1/3) \\u2248 3.69981\\n\\nLet me verify once more:\\n3.69981^3 \\u2248 50.652998\\n\\nThat's extremely close to 50.653, so I'll say that the cube root of 50.653 is approximately 3.69981.\",\n",
" \"type\": \"thinking\"\n",
" },\n",
" {\n",
" \"text\": \"The cube root of 50.653 is approximately 3.6998.\\n\\nTo verify: 3.6998\\u00b3 = 50.6530, which is very close to our original number.\",\n",
" \"type\": \"text\"\n",
" }\n",
"]\n"
]
}
],
"source": [
"import json\n",
"\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-3-7-sonnet-latest\",\n",
" max_tokens=5000,\n",
" thinking={\"type\": \"enabled\", \"budget_tokens\": 2000},\n",
")\n",
"\n",
"response = llm.invoke(\"What is the cube root of 50.653?\")\n",
"print(json.dumps(response.content, indent=2))"
]
},
{
"cell_type": "markdown",
"id": "34349dfe-5d81-4887-a4f4-cd01e9587cdc",
"metadata": {},
"source": [
"## Prompt caching\n",
"\n",
"Anthropic supports [caching](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching) of [elements of your prompts](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#what-can-be-cached), including messages, tool definitions, tool results, images and documents. This allows you to re-use large documents, instructions, [few-shot documents](/docs/concepts/few_shot_prompting/), and other data to reduce latency and costs.\n",
"\n",
"To enable caching on an element of a prompt, mark its associated content block using the `cache_control` key. See examples below:\n",
"\n",
"### Messages"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "babb44a5-33f7-4200-9dfc-be867cf2c217",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"First invocation:\n",
"{'cache_read': 0, 'cache_creation': 1458}\n",
"\n",
"Second:\n",
"{'cache_read': 1458, 'cache_creation': 0}\n"
]
}
],
"source": [
"import requests\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n",
"\n",
"# Pull LangChain readme\n",
"get_response = requests.get(\n",
" \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n",
")\n",
"readme = get_response.text\n",
"\n",
"messages = [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"You are a technology expert.\",\n",
" },\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": f\"{readme}\",\n",
" # highlight-next-line\n",
" \"cache_control\": {\"type\": \"ephemeral\"},\n",
" },\n",
" ],\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"What's LangChain, according to its README?\",\n",
" },\n",
"]\n",
"\n",
"response_1 = llm.invoke(messages)\n",
"response_2 = llm.invoke(messages)\n",
"\n",
"usage_1 = response_1.usage_metadata[\"input_token_details\"]\n",
"usage_2 = response_2.usage_metadata[\"input_token_details\"]\n",
"\n",
"print(f\"First invocation:\\n{usage_1}\")\n",
"print(f\"\\nSecond:\\n{usage_2}\")"
]
},
{
"cell_type": "markdown",
"id": "141ce9c5-012d-4502-9d61-4a413b5d959a",
"metadata": {},
"source": [
"### Tools"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1de82015-810f-4ed4-a08b-9866ea8746ce",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"First invocation:\n",
"{'cache_read': 0, 'cache_creation': 1809}\n",
"\n",
"Second:\n",
"{'cache_read': 1809, 'cache_creation': 0}\n"
]
}
],
"source": [
"from langchain_anthropic import convert_to_anthropic_tool\n",
"from langchain_core.tools import tool\n",
"\n",
"# For demonstration purposes, we artificially expand the\n",
"# tool description.\n",
"description = (\n",
" f\"Get the weather at a location. By the way, check out this readme: {readme}\"\n",
")\n",
"\n",
"\n",
"@tool(description=description)\n",
"def get_weather(location: str) -> str:\n",
" return \"It's sunny.\"\n",
"\n",
"\n",
"# Enable caching on the tool\n",
"# highlight-start\n",
"weather_tool = convert_to_anthropic_tool(get_weather)\n",
"weather_tool[\"cache_control\"] = {\"type\": \"ephemeral\"}\n",
"# highlight-end\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n",
"llm_with_tools = llm.bind_tools([weather_tool])\n",
"query = \"What's the weather in San Francisco?\"\n",
"\n",
"response_1 = llm_with_tools.invoke(query)\n",
"response_2 = llm_with_tools.invoke(query)\n",
"\n",
"usage_1 = response_1.usage_metadata[\"input_token_details\"]\n",
"usage_2 = response_2.usage_metadata[\"input_token_details\"]\n",
"\n",
"print(f\"First invocation:\\n{usage_1}\")\n",
"print(f\"\\nSecond:\\n{usage_2}\")"
]
},
{
"cell_type": "markdown",
"id": "a763830d-82cb-448a-ab30-f561522791b9",
"metadata": {},
"source": [
"### Incremental caching in conversational applications\n",
"\n",
"Prompt caching can be used in [multi-turn conversations](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#continuing-a-multi-turn-conversation) to maintain context from earlier messages without redundant processing.\n",
"\n",
"We can enable incremental caching by marking the final message with `cache_control`. Claude will automatically use the longest previously-cached prefix for follow-up messages.\n",
"\n",
"Below, we implement a simple chatbot that incorporates this feature. We follow the LangChain [chatbot tutorial](/docs/tutorials/chatbot/), but add a custom [reducer](https://langchain-ai.github.io/langgraph/concepts/low_level/#reducers) that automatically marks the last content block in each user message with `cache_control`. See below:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "07fde4db-344c-49bc-a5b4-99e2d20fb394",
"metadata": {},
"outputs": [],
"source": [
"import requests\n",
"from langchain_anthropic import ChatAnthropic\n",
"from langgraph.checkpoint.memory import MemorySaver\n",
"from langgraph.graph import START, StateGraph, add_messages\n",
"from typing_extensions import Annotated, TypedDict\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n",
"\n",
"# Pull LangChain readme\n",
"get_response = requests.get(\n",
" \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n",
")\n",
"readme = get_response.text\n",
"\n",
"\n",
"def messages_reducer(left: list, right: list) -> list:\n",
" # Update last user message\n",
" for i in range(len(right) - 1, -1, -1):\n",
" if right[i].type == \"human\":\n",
" right[i].content[-1][\"cache_control\"] = {\"type\": \"ephemeral\"}\n",
" break\n",
"\n",
" return add_messages(left, right)\n",
"\n",
"\n",
"class State(TypedDict):\n",
" messages: Annotated[list, messages_reducer]\n",
"\n",
"\n",
"workflow = StateGraph(state_schema=State)\n",
"\n",
"\n",
"# Define the function that calls the model\n",
"def call_model(state: State):\n",
" response = llm.invoke(state[\"messages\"])\n",
" return {\"messages\": [response]}\n",
"\n",
"\n",
"# Define the (single) node in the graph\n",
"workflow.add_edge(START, \"model\")\n",
"workflow.add_node(\"model\", call_model)\n",
"\n",
"# Add memory\n",
"memory = MemorySaver()\n",
"app = workflow.compile(checkpointer=memory)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "40013035-eb22-4327-8aaf-1ee974d9ff46",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Hello, Bob! It's nice to meet you. How are you doing today? Is there something I can help you with?\n",
"\n",
"{'cache_read': 0, 'cache_creation': 0}\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"config = {\"configurable\": {\"thread_id\": \"abc123\"}}\n",
"\n",
"query = \"Hi! I'm Bob.\"\n",
"\n",
"input_message = HumanMessage([{\"type\": \"text\", \"text\": query}])\n",
"output = app.invoke({\"messages\": [input_message]}, config)\n",
"output[\"messages\"][-1].pretty_print()\n",
"print(f'\\n{output[\"messages\"][-1].usage_metadata[\"input_token_details\"]}')"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "22371f68-7913-4c4f-ab4a-2b4265095469",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"I can see you've shared the README from the LangChain GitHub repository. This is the documentation for LangChain, which is a popular framework for building applications powered by Large Language Models (LLMs). Here's a summary of what the README contains:\n",
"\n",
"LangChain is:\n",
"- A framework for developing LLM-powered applications\n",
"- Helps chain together components and integrations to simplify AI application development\n",
"- Provides a standard interface for models, embeddings, vector stores, etc.\n",
"\n",
"Key features/benefits:\n",
"- Real-time data augmentation (connect LLMs to diverse data sources)\n",
"- Model interoperability (swap models easily as needed)\n",
"- Large ecosystem of integrations\n",
"\n",
"The LangChain ecosystem includes:\n",
"- LangSmith - For evaluations and observability\n",
"- LangGraph - For building complex agents with customizable architecture\n",
"- LangGraph Platform - For deployment and scaling of agents\n",
"\n",
"The README also mentions installation instructions (`pip install -U langchain`) and links to various resources including tutorials, how-to guides, conceptual guides, and API references.\n",
"\n",
"Is there anything specific about LangChain you'd like to know more about, Bob?\n",
"\n",
"{'cache_read': 0, 'cache_creation': 1498}\n"
]
}
],
"source": [
"query = f\"Check out this readme: {readme}\"\n",
"\n",
"input_message = HumanMessage([{\"type\": \"text\", \"text\": query}])\n",
"output = app.invoke({\"messages\": [input_message]}, config)\n",
"output[\"messages\"][-1].pretty_print()\n",
"print(f'\\n{output[\"messages\"][-1].usage_metadata[\"input_token_details\"]}')"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0e6798fc-8a80-4324-b4e3-f18706256c61",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"Your name is Bob. You introduced yourself at the beginning of our conversation.\n",
"\n",
"{'cache_read': 1498, 'cache_creation': 269}\n"
]
}
],
"source": [
"query = \"What was my name again?\"\n",
"\n",
"input_message = HumanMessage([{\"type\": \"text\", \"text\": query}])\n",
"output = app.invoke({\"messages\": [input_message]}, config)\n",
"output[\"messages\"][-1].pretty_print()\n",
"print(f'\\n{output[\"messages\"][-1].usage_metadata[\"input_token_details\"]}')"
]
},
{
"cell_type": "markdown",
"id": "aa4b3647-c672-4782-a88c-a55fd3bf969f",
"metadata": {},
"source": [
"In the [LangSmith trace](https://smith.langchain.com/public/4d0584d8-5f9e-4b91-8704-93ba2ccf416a/r), toggling \"raw output\" will show exactly what messages are sent to the chat model, including `cache_control` keys."
]
},
{
"cell_type": "markdown",
"id": "029009f2-2795-418b-b5fc-fb996c6fe99e",
"metadata": {},
"source": [
"## Token-efficient tool use\n",
"\n",
"Anthropic supports a (beta) [token-efficient tool use](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/token-efficient-tool-use) feature. To use it, specify the relevant beta-headers when instantiating the model."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "206cff65-33b8-4a88-9b1a-050b4d57772a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'toolu_01EoeE1qYaePcmNbUvMsWtmA', 'type': 'tool_call'}]\n",
"\n",
"Total tokens: 408\n"
]
}
],
"source": [
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_core.tools import tool\n",
"\n",
"llm = ChatAnthropic(\n",
" model=\"claude-3-7-sonnet-20250219\",\n",
" temperature=0,\n",
" # highlight-start\n",
" model_kwargs={\n",
" \"extra_headers\": {\"anthropic-beta\": \"token-efficient-tools-2025-02-19\"}\n",
" },\n",
" # highlight-end\n",
")\n",
"\n",
"\n",
"@tool\n",
"def get_weather(location: str) -> str:\n",
" \"\"\"Get the weather at a location.\"\"\"\n",
" return \"It's sunny.\"\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([get_weather])\n",
"response = llm_with_tools.invoke(\"What's the weather in San Francisco?\")\n",
"print(response.tool_calls)\n",
"print(f'\\nTotal tokens: {response.usage_metadata[\"total_tokens\"]}')"
]
},
{
"cell_type": "markdown",
"id": "301d372f-4dec-43e6-b58c-eee25633e1a6",
@@ -472,6 +906,58 @@
"response.content"
]
},
{
"cell_type": "markdown",
"id": "cbfec7a9-d9df-4d12-844e-d922456dd9bf",
"metadata": {},
"source": [
"## Built-in tools\n",
"\n",
"Anthropic supports a variety of [built-in tools](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/text-editor-tool), which can be bound to the model in the [usual way](/docs/how_to/tool_calling/). Claude will generate tool calls adhering to its internal schema for the tool:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "30a0af36-2327-4b1d-9ba5-e47cb72db0be",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I'd be happy to help you fix the syntax error in your primes.py file. First, let's look at the current content of the file to identify the error.\n"
]
},
{
"data": {
"text/plain": [
"[{'name': 'str_replace_editor',\n",
" 'args': {'command': 'view', 'path': '/repo/primes.py'},\n",
" 'id': 'toolu_01VdNgt1YV7kGfj9LFLm6HyQ',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"llm = ChatAnthropic(model=\"claude-3-7-sonnet-20250219\")\n",
"\n",
"tool = {\"type\": \"text_editor_20250124\", \"name\": \"str_replace_editor\"}\n",
"llm_with_tools = llm.bind_tools([tool])\n",
"\n",
"response = llm_with_tools.invoke(\n",
" \"There's a syntax error in my primes.py file. Can you help me fix it?\"\n",
")\n",
"print(response.text())\n",
"response.tool_calls"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",

View File

@@ -0,0 +1,275 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: AzureAIChatCompletionsModel\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# AzureAIChatCompletionsModel\n",
"\n",
"This will help you getting started with AzureAIChatCompletionsModel [chat models](/docs/concepts/chat_models). For detailed documentation of all AzureAIChatCompletionsModel features and configurations head to the [API reference](https://python.langchain.com/api_reference/azure_ai/chat_models/langchain_azure_ai.chat_models.AzureAIChatCompletionsModel.html)\n",
"\n",
"The AzureAIChatCompletionsModel class uses the Azure AI Foundry SDK. AI Foundry has several chat models including AzureOpenAI, Cohere, Llama, Phi-3/4, and DeepSeek-R1 to name a few. You can find information about their latest models and their costs, context windows, and supported input types in the [Azure docs](https://learn.microsoft.com/azure/ai-studio/how-to/model-catalog-overview).\n",
"\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://v03.api.js.langchain.com/classes/_langchain_openai.AzureChatOpenAI.html) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [AzureAIChatCompletionsModel](https://python.langchain.com/api_reference/azure_ai/chat_models/langchain_azure_ai.chat_models.AzureAIChatCompletionsModel.html) | [langchain-azure-ai](https://python.langchain.com/api_reference/langchain_azure_ai/index.html) | ❌ | ✅ | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-azure-ai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-azure-ai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅| \n",
"\n",
"## Setup\n",
"\n",
"To access AzureAIChatCompletionsModel models you'll need to create an [Azure account](https://azure.microsoft.com/pricing/purchase-options/azure-account), get an API key, and install the `langchain-azure-ai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"Head to the [Azure docs](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/develop/sdk-overview?tabs=sync&pivots=programming-language-python) to see how to create your deployment and generate an API key. Once your model is deployed you click the 'get endpoint' button in AI Foundry. This will show you your endpoint and api key. Once you've done this set the AZURE_INFERENCE_CREDENTIAL and AZURE_INFERENCE_ENDPOINT environment variables:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"AZURE_INFERENCE_CREDENTIAL\"):\n",
" os.environ[\"AZURE_INFERENCE_CREDENTIAL\"] = getpass.getpass(\n",
" \"Enter your AzureAIChatCompletionsModel API key: \"\n",
" )\n",
"\n",
"if not os.getenv(\"AZURE_INFERENCE_ENDPOINT\"):\n",
" os.environ[\"AZURE_INFERENCE_ENDPOINT\"] = getpass.getpass(\n",
" \"Enter your model endpoint: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain AzureAIChatCompletionsModel integration lives in the `langchain-azure-ai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-azure-ai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel\n",
"\n",
"llm = AzureAIChatCompletionsModel(\n",
" model_name=\"gpt-4\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", additional_kwargs={}, response_metadata={'model': 'gpt-4o-2024-05-13', 'token_usage': {'input_tokens': 31, 'output_tokens': 4, 'total_tokens': 35}, 'finish_reason': 'stop'}, id='run-c082dffd-b1de-4b3f-943f-863836663ddb-0', usage_metadata={'input_tokens': 31, 'output_tokens': 4, 'total_tokens': 35})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'model': 'gpt-4o-2024-05-13', 'token_usage': {'input_tokens': 26, 'output_tokens': 5, 'total_tokens': 31}, 'finish_reason': 'stop'}, id='run-01ba6587-6ff4-4554-8039-13204a7d95db-0', usage_metadata={'input_tokens': 26, 'output_tokens': 5, 'total_tokens': 31})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all AzureAIChatCompletionsModel features and configurations head to the API reference: https://python.langchain.com/api_reference/azure_ai/chat_models/langchain_azure_ai.chat_models.AzureAIChatCompletionsModel.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain-3-9",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.19"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,253 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: ContextualAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatContextual\n",
"\n",
"This will help you getting started with Contextual AI's Grounded Language Model [chat models](/docs/concepts/chat_models/).\n",
"\n",
"To learn more about Contextual AI, please visit our [documentation](https://docs.contextual.ai/).\n",
"\n",
"This integration requires the `contextual-client` Python SDK. Learn more about it [here](https://github.com/ContextualAI/contextual-client-python).\n",
"\n",
"## Overview\n",
"\n",
"This integration invokes Contextual AI's Grounded Language Model.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatContextual](https://github.com/ContextualAI//langchain-contextual) | [langchain-contextual](https://pypi.org/project/langchain-contextual/) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-contextual?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-contextual?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access Contextual models you'll need to create a Contextual AI account, get an API key, and install the `langchain-contextual` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [app.contextual.ai](https://app.contextual.ai) to sign up to Contextual and generate an API key. Once you've done this set the CONTEXTUAL_AI_API_KEY environment variable:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"CONTEXTUAL_AI_API_KEY\"):\n",
" os.environ[\"CONTEXTUAL_AI_API_KEY\"] = getpass.getpass(\n",
" \"Enter your Contextual API key: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Contextual integration lives in the `langchain-contextual` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-contextual"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions.\n",
"\n",
"The chat client can be instantiated with these following additional settings:\n",
"\n",
"| Parameter | Type | Description | Default |\n",
"|-----------|------|-------------|---------|\n",
"| temperature | Optional[float] | The sampling temperature, which affects the randomness in the response. Note that higher temperature values can reduce groundedness. | 0 |\n",
"| top_p | Optional[float] | A parameter for nucleus sampling, an alternative to temperature which also affects the randomness of the response. Note that higher top_p values can reduce groundedness. | 0.9 |\n",
"| max_new_tokens | Optional[int] | The maximum number of tokens that the model can generate in the response. Minimum is 1 and maximum is 2048. | 1024 |"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_contextual import ChatContextual\n",
"\n",
"llm = ChatContextual(\n",
" model=\"v1\", # defaults to `v1`\n",
" api_key=\"\",\n",
" temperature=0, # defaults to 0\n",
" top_p=0.9, # defaults to 0.9\n",
" max_new_tokens=1024, # defaults to 1024\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"The Contextual Grounded Language Model accepts additional `kwargs` when calling the `ChatContextual.invoke` method.\n",
"\n",
"These additional inputs are:\n",
"\n",
"| Parameter | Type | Description |\n",
"|-----------|------|-------------|\n",
"| knowledge | list[str] | Required: A list of strings of knowledge sources the grounded language model can use when generating a response. |\n",
"| system_prompt | Optional[str] | Optional: Instructions the model should follow when generating responses. Note that we do not guarantee that the model follows these instructions exactly. |\n",
"| avoid_commentary | Optional[bool] | Optional (Defaults to `False`): Flag to indicate whether the model should avoid providing additional commentary in responses. Commentary is conversational in nature and does not contain verifiable claims; therefore, commentary is not strictly grounded in available context. However, commentary may provide useful context which improves the helpfulness of responses. |"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# include a system prompt (optional)\n",
"system_prompt = \"You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability.\"\n",
"\n",
"# provide your own knowledge from your knowledge-base here in an array of string\n",
"knowledge = [\n",
" \"There are 2 types of dogs in the world: good dogs and best dogs.\",\n",
" \"There are 2 types of cats in the world: good cats and best cats.\",\n",
"]\n",
"\n",
"# create your message\n",
"messages = [\n",
" (\"human\", \"What type of cats are there in the world and what are the types?\"),\n",
"]\n",
"\n",
"# invoke the GLM by providing the knowledge strings, optional system prompt\n",
"# if you want to turn off the GLM's commentary, pass True to the `avoid_commentary` argument\n",
"ai_msg = llm.invoke(\n",
" messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True\n",
")\n",
"\n",
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "2c35a9e0",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can chain the Contextual Model with output parsers."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "545e1e16",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"chain = llm | StrOutputParser\n",
"\n",
"chain.invoke(\n",
" messages, knowledge=knowledge, systemp_prompt=system_prompt, avoid_commentary=True\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatContextual features and configurations head to the Github page: https://github.com/ContextualAI//langchain-contextual"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -38,6 +38,12 @@
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
":::note\n",
"\n",
"DeepSeek-R1, specified via `model=\"deepseek-reasoner\"`, does not support tool calling or structured output. Those features [are supported](https://api-docs.deepseek.com/guides/function_calling) by DeepSeek-V3 (specified via `model=\"deepseek-chat\"`).\n",
"\n",
":::\n",
"\n",
"## Setup\n",
"\n",
"To access DeepSeek models you'll need to create a/an DeepSeek account, get an API key, and install the `langchain-deepseek` integration package.\n",

View File

@@ -85,21 +85,10 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"execution_count": null,
"id": "3f3f510e-2afe-4e76-be41-c5a9665aea63",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.0\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.1.2\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"outputs": [],
"source": [
"%pip install -qU langchain-groq"
]
@@ -116,7 +105,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
@@ -124,7 +113,7 @@
"from langchain_groq import ChatGroq\n",
"\n",
"llm = ChatGroq(\n",
" model=\"mixtral-8x7b-32768\",\n",
" model=\"llama-3.1-8b-instant\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
@@ -143,7 +132,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 2,
"id": "62e0dbc3",
"metadata": {
"tags": []
@@ -152,10 +141,10 @@
{
"data": {
"text/plain": [
"AIMessage(content='I enjoy programming. (The French translation is: \"J\\'aime programmer.\")\\n\\nNote: I chose to translate \"I love programming\" as \"J\\'aime programmer\" instead of \"Je suis amoureux de programmer\" because the latter has a romantic connotation that is not present in the original English sentence.', response_metadata={'token_usage': {'completion_tokens': 73, 'prompt_tokens': 31, 'total_tokens': 104, 'completion_time': 0.1140625, 'prompt_time': 0.003352463, 'queue_time': None, 'total_time': 0.117414963}, 'model_name': 'mixtral-8x7b-32768', 'system_fingerprint': 'fp_c5f20b5bb1', 'finish_reason': 'stop', 'logprobs': None}, id='run-64433c19-eadf-42fc-801e-3071e3c40160-0', usage_metadata={'input_tokens': 31, 'output_tokens': 73, 'total_tokens': 104})"
"AIMessage(content='The translation of \"I love programming\" to French is:\\n\\n\"J\\'adore le programmation.\"', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 55, 'total_tokens': 77, 'completion_time': 0.029333333, 'prompt_time': 0.003502892, 'queue_time': 0.553054073, 'total_time': 0.032836225}, 'model_name': 'llama-3.1-8b-instant', 'system_fingerprint': 'fp_a491995411', 'finish_reason': 'stop', 'logprobs': None}, id='run-2b2da04a-993c-40ab-becc-201eab8b1a1b-0', usage_metadata={'input_tokens': 55, 'output_tokens': 22, 'total_tokens': 77})"
]
},
"execution_count": 5,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -174,7 +163,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
@@ -182,9 +171,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"I enjoy programming. (The French translation is: \"J'aime programmer.\")\n",
"The translation of \"I love programming\" to French is:\n",
"\n",
"Note: I chose to translate \"I love programming\" as \"J'aime programmer\" instead of \"Je suis amoureux de programmer\" because the latter has a romantic connotation that is not present in the original English sentence.\n"
"\"J'adore le programmation.\"\n"
]
}
],
@@ -204,17 +193,17 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='That\\'s great! I can help you translate English phrases related to programming into German.\\n\\n\"I love programming\" can be translated as \"Ich liebe Programmieren\" in German.\\n\\nHere are some more programming-related phrases translated into German:\\n\\n* \"Programming language\" = \"Programmiersprache\"\\n* \"Code\" = \"Code\"\\n* \"Variable\" = \"Variable\"\\n* \"Function\" = \"Funktion\"\\n* \"Array\" = \"Array\"\\n* \"Object-oriented programming\" = \"Objektorientierte Programmierung\"\\n* \"Algorithm\" = \"Algorithmus\"\\n* \"Data structure\" = \"Datenstruktur\"\\n* \"Debugging\" = \"Fehlersuche\"\\n* \"Compile\" = \"Kompilieren\"\\n* \"Link\" = \"Verknüpfen\"\\n* \"Run\" = \"Ausführen\"\\n* \"Test\" = \"Testen\"\\n* \"Deploy\" = \"Bereitstellen\"\\n* \"Version control\" = \"Versionskontrolle\"\\n* \"Open source\" = \"Open Source\"\\n* \"Software development\" = \"Softwareentwicklung\"\\n* \"Agile methodology\" = \"Agile Methodik\"\\n* \"DevOps\" = \"DevOps\"\\n* \"Cloud computing\" = \"Cloud Computing\"\\n\\nI hope this helps! Let me know if you have any other questions or if you need further translations.', response_metadata={'token_usage': {'completion_tokens': 331, 'prompt_tokens': 25, 'total_tokens': 356, 'completion_time': 0.520006542, 'prompt_time': 0.00250165, 'queue_time': None, 'total_time': 0.522508192}, 'model_name': 'mixtral-8x7b-32768', 'system_fingerprint': 'fp_c5f20b5bb1', 'finish_reason': 'stop', 'logprobs': None}, id='run-74207fb7-85d3-417d-b2b9-621116b75d41-0', usage_metadata={'input_tokens': 25, 'output_tokens': 331, 'total_tokens': 356})"
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 50, 'total_tokens': 56, 'completion_time': 0.008, 'prompt_time': 0.003337935, 'queue_time': 0.20949214500000002, 'total_time': 0.011337935}, 'model_name': 'llama-3.1-8b-instant', 'system_fingerprint': 'fp_a491995411', 'finish_reason': 'stop', 'logprobs': None}, id='run-e33b48dc-5e55-466e-9ebd-7b48c81c3cbd-0', usage_metadata={'input_tokens': 50, 'output_tokens': 6, 'total_tokens': 56})"
]
},
"execution_count": 7,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -269,7 +258,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -210,7 +210,7 @@
"id": "96ed13d4",
"metadata": {},
"source": [
"Instead of `model_id`, you can also pass the `deployment_id` of the previously tuned model. The entire model tuning workflow is described in [Working with TuneExperiment and PromptTuner](https://ibm.github.io/watsonx-ai-python-sdk/pt_working_with_class_and_prompt_tuner.html)."
"Instead of `model_id`, you can also pass the `deployment_id` of the previously [deployed model with reference to a Prompt Template](https://cloud.ibm.com/apidocs/watsonx-ai#deployments-text-chat)."
]
},
{
@@ -228,6 +228,31 @@
")"
]
},
{
"cell_type": "markdown",
"id": "3d29767c",
"metadata": {},
"source": [
"For certain requirements, there is an option to pass the IBM's [`APIClient`](https://ibm.github.io/watsonx-ai-python-sdk/base.html#apiclient) object into the `ChatWatsonx` class."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0ae9531e",
"metadata": {},
"outputs": [],
"source": [
"from ibm_watsonx_ai import APIClient\n",
"\n",
"api_client = APIClient(...)\n",
"\n",
"chat = ChatWatsonx(\n",
" model_id=\"ibm/granite-34b-code-instruct\",\n",
" watsonx_client=api_client,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "f571001d",
@@ -448,9 +473,7 @@
"source": [
"## Tool calling\n",
"\n",
"### ChatWatsonx.bind_tools()\n",
"\n",
"Please note that `ChatWatsonx.bind_tools` is on beta state, so we recommend using `mistralai/mistral-large` model."
"### ChatWatsonx.bind_tools()"
]
},
{
@@ -563,7 +586,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain_ibm",
"language": "python",
"name": "python3"
},

View File

@@ -322,7 +322,7 @@
"source": [
"### ``strict=True``\n",
"\n",
":::info Requires ``langchain-openai>=0.1.21rc1``\n",
":::info Requires ``langchain-openai>=0.1.21``\n",
"\n",
":::\n",
"\n",
@@ -397,6 +397,405 @@
"For more on binding tools and tool call outputs, head to the [tool calling](/docs/how_to/function_calling) docs."
]
},
{
"cell_type": "markdown",
"id": "84833dd0-17e9-4269-82ed-550639d65751",
"metadata": {},
"source": [
"## Responses API\n",
"\n",
":::info Requires ``langchain-openai>=0.3.9``\n",
"\n",
":::\n",
"\n",
"OpenAI supports a [Responses](https://platform.openai.com/docs/guides/responses-vs-chat-completions) API that is oriented toward building [agentic](/docs/concepts/agents/) applications. It includes a suite of [built-in tools](https://platform.openai.com/docs/guides/tools?api-mode=responses), including web and file search. It also supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses), allowing you to continue a conversational thread without explicitly passing in previous messages.\n",
"\n",
"`ChatOpenAI` will route to the Responses API if one of these features is used. You can also specify `use_responses_api=True` when instantiating `ChatOpenAI`.\n",
"\n",
"### Built-in tools\n",
"\n",
"Equipping `ChatOpenAI` with built-in tools will ground its responses with outside information, such as via context in files or the web. The [AIMessage](/docs/concepts/messages/#aimessage) generated from the model will include information about the built-in tool invocation.\n",
"\n",
"#### Web search\n",
"\n",
"To trigger a web search, pass `{\"type\": \"web_search_preview\"}` to the model as you would another tool.\n",
"\n",
":::tip\n",
"\n",
"You can also pass built-in tools as invocation params:\n",
"```python\n",
"llm.invoke(\"...\", tools=[{\"type\": \"web_search_preview\"}])\n",
"```\n",
"\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d8bfe89-948b-42d4-beac-85ef2a72491d",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"tool = {\"type\": \"web_search_preview\"}\n",
"llm_with_tools = llm.bind_tools([tool])\n",
"\n",
"response = llm_with_tools.invoke(\"What was a positive news story from today?\")"
]
},
{
"cell_type": "markdown",
"id": "c9fe67c6-38ff-40a5-93b3-a4b7fca76372",
"metadata": {},
"source": [
"Note that the response includes structured [content blocks](/docs/concepts/messages/#content-1) that include both the text of the response and OpenAI [annotations](https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses#output-and-citations) citing its sources:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3ea5a4b1-f57a-4c8a-97f4-60ab8330a804",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'type': 'text',\n",
" 'text': 'Today, a heartwarming story emerged from Minnesota, where a group of high school robotics students built a custom motorized wheelchair for a 2-year-old boy named Cillian Jackson. Born with a genetic condition that limited his mobility, Cillian\\'s family couldn\\'t afford the $20,000 wheelchair he needed. The students at Farmington High School\\'s Rogue Robotics team took it upon themselves to modify a Power Wheels toy car into a functional motorized wheelchair for Cillian, complete with a joystick, safety bumpers, and a harness. One team member remarked, \"I think we won here more than we do in our competitions. Instead of completing a task, we\\'re helping change someone\\'s life.\" ([boredpanda.com](https://www.boredpanda.com/wholesome-global-positive-news/?utm_source=openai))\\n\\nThis act of kindness highlights the profound impact that community support and innovation can have on individuals facing challenges. ',\n",
" 'annotations': [{'end_index': 778,\n",
" 'start_index': 682,\n",
" 'title': '“Global Positive News”: 40 Posts To Remind Us Theres Good In The World',\n",
" 'type': 'url_citation',\n",
" 'url': 'https://www.boredpanda.com/wholesome-global-positive-news/?utm_source=openai'}]}]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.content"
]
},
{
"cell_type": "markdown",
"id": "95fbc34c-2f12-4d51-92c5-bf62a2f8900c",
"metadata": {},
"source": [
":::tip\n",
"\n",
"You can recover just the text content of the response as a string by using `response.text()`. For example, to stream response text:\n",
"\n",
"```python\n",
"for token in llm_with_tools.stream(\"...\"):\n",
" print(token.text(), end=\"|\")\n",
"```\n",
"\n",
"See the [streaming guide](/docs/how_to/chat_streaming/) for more detail.\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "2a332940-d409-41ee-ac36-2e9bee900e83",
"metadata": {},
"source": [
"The output message will also contain information from any tool invocations:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "a8011049-6c90-4fcb-82d4-850c72b46941",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'tool_outputs': [{'id': 'ws_67d192aeb6cc81918e736ad4a57937570d6f8507990d9d71',\n",
" 'status': 'completed',\n",
" 'type': 'web_search_call'}]}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.additional_kwargs"
]
},
{
"cell_type": "markdown",
"id": "288d47bb-3ccb-412f-a3d3-9f6cee0e6214",
"metadata": {},
"source": [
"#### File search\n",
"\n",
"To trigger a file search, pass a [file search tool](https://platform.openai.com/docs/guides/tools-file-search) to the model as you would another tool. You will need to populate an OpenAI-managed vector store and include the vector store ID in the tool definition. See [OpenAI documentation](https://platform.openai.com/docs/guides/tools-file-search) for more detail."
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "1f758726-33ef-4c04-8a54-49adb783bbb3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Deep Research by OpenAI is a new capability integrated into ChatGPT that allows for the execution of multi-step research tasks independently. It can synthesize extensive amounts of online information and produce comprehensive reports similar to what a research analyst would do, significantly speeding up processes that would typically take hours for a human.\n",
"\n",
"### Key Features:\n",
"- **Independent Research**: Users simply provide a prompt, and the model can find, analyze, and synthesize information from hundreds of online sources.\n",
"- **Multi-Modal Capabilities**: The model is also able to browse user-uploaded files, plot graphs using Python, and embed visualizations in its outputs.\n",
"- **Training**: Deep Research has been trained using reinforcement learning on real-world tasks that require extensive browsing and reasoning.\n",
"\n",
"### Applications:\n",
"- Useful for professionals in sectors like finance, science, policy, and engineering, enabling them to obtain accurate and thorough research quickly.\n",
"- It can also be beneficial for consumers seeking personalized recommendations on complex purchases.\n",
"\n",
"### Limitations:\n",
"Although Deep Research presents significant advancements, it has some limitations, such as the potential to hallucinate facts or struggle with authoritative information. \n",
"\n",
"Deep Research aims to facilitate access to thorough and documented information, marking a significant step toward the broader goal of developing artificial general intelligence (AGI).\n"
]
}
],
"source": [
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"openai_vector_store_ids = [\n",
" \"vs_...\", # your IDs here\n",
"]\n",
"\n",
"tool = {\n",
" \"type\": \"file_search\",\n",
" \"vector_store_ids\": openai_vector_store_ids,\n",
"}\n",
"llm_with_tools = llm.bind_tools([tool])\n",
"\n",
"response = llm_with_tools.invoke(\"What is deep research by OpenAI?\")\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "f88bbd71-83b0-45a6-9141-46ec9da93df6",
"metadata": {},
"source": [
"As with [web search](#web-search), the response will include content blocks with citations:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "865bc14e-1599-438e-be44-857891004979",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'file_id': 'file-3UzgX7jcC8Dt9ZAFzywg5k',\n",
" 'index': 346,\n",
" 'type': 'file_citation',\n",
" 'filename': 'deep_research_blog.pdf'},\n",
" {'file_id': 'file-3UzgX7jcC8Dt9ZAFzywg5k',\n",
" 'index': 575,\n",
" 'type': 'file_citation',\n",
" 'filename': 'deep_research_blog.pdf'}]"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.content[0][\"annotations\"][:2]"
]
},
{
"cell_type": "markdown",
"id": "dd00f6be-2862-4634-a0c3-14ee39915c90",
"metadata": {},
"source": [
"It will also include information from the built-in tool invocations:"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "e16a7110-d2d8-45fa-b372-5109f330540b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'tool_outputs': [{'id': 'fs_67d196fbb83c8191ba20586175331687089228ce932eceb1',\n",
" 'queries': ['What is deep research by OpenAI?'],\n",
" 'status': 'completed',\n",
" 'type': 'file_search_call'}]}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"response.additional_kwargs"
]
},
{
"cell_type": "markdown",
"id": "6fda05f0-4b81-4709-9407-f316d760ad50",
"metadata": {},
"source": [
"### Managing conversation state\n",
"\n",
"The Responses API supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses).\n",
"\n",
"#### Manually manage state\n",
"\n",
"You can manage the state manually or using [LangGraph](/docs/tutorials/chatbot/), as with other chat models:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "51d3e4d3-ea78-426c-9205-aecb0937fca7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"As of March 12, 2025, here are some positive news stories that highlight recent uplifting events:\n",
"\n",
"*... exemplify positive developments in health, environmental sustainability, and community well-being. \n"
]
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"\n",
"tool = {\"type\": \"web_search_preview\"}\n",
"llm_with_tools = llm.bind_tools([tool])\n",
"\n",
"first_query = \"What was a positive news story from today?\"\n",
"messages = [{\"role\": \"user\", \"content\": first_query}]\n",
"\n",
"response = llm_with_tools.invoke(messages)\n",
"response_text = response.text()\n",
"print(f\"{response_text[:100]}... {response_text[-100:]}\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5da9d20f-9712-46f4-a395-5be5a7c1bc62",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Your question was: \"What was a positive news story from today?\"\n",
"\n",
"The last sentence of my answer was: \"These stories exemplify positive developments in health, environmental sustainability, and community well-being.\"\n"
]
}
],
"source": [
"second_query = (\n",
" \"Repeat my question back to me, as well as the last sentence of your answer.\"\n",
")\n",
"\n",
"messages.extend(\n",
" [\n",
" response,\n",
" {\"role\": \"user\", \"content\": second_query},\n",
" ]\n",
")\n",
"second_response = llm_with_tools.invoke(messages)\n",
"print(second_response.text())"
]
},
{
"cell_type": "markdown",
"id": "5fd8ca21-8a5e-4294-af32-11f26a040171",
"metadata": {},
"source": [
":::tip\n",
"\n",
"You can use [LangGraph](https://langchain-ai.github.io/langgraph/) to manage conversational threads for you in a variety of backends, including in-memory and Postgres. See [this tutorial](/docs/tutorials/chatbot/) to get started.\n",
"\n",
":::\n",
"\n",
"\n",
"#### Passing `previous_response_id`\n",
"\n",
"When using the Responses API, LangChain messages will include an `\"id\"` field in its metadata. Passing this ID to subsequent invocations will continue the conversation. Note that this is [equivalent](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses#openai-apis-for-conversation-state) to manually passing in messages from a billing perspective."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "009e541a-b372-410e-b9dd-608a8052ce09",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hi Bob! How can I assist you today?\n"
]
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-4o-mini\",\n",
" use_responses_api=True,\n",
")\n",
"response = llm.invoke(\"Hi, I'm Bob.\")\n",
"print(response.text())"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "393a443a-4c5f-4a07-bc0e-c76e529b35e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Your name is Bob. How can I help you today, Bob?\n"
]
}
],
"source": [
"second_response = llm.invoke(\n",
" \"What is my name?\",\n",
" previous_response_id=response.response_metadata[\"id\"],\n",
")\n",
"print(second_response.text())"
]
},
{
"cell_type": "markdown",
"id": "57e27714",

View File

@@ -69,8 +69,8 @@
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{

View File

@@ -19,7 +19,7 @@
"source": [
"# ChatSambaNovaCloud\n",
"\n",
"This will help you getting started with SambaNovaCloud [chat models](/docs/concepts/chat_models/). For detailed documentation of all ChatSambaNovaCloud features and configurations head to the [API reference](https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.ChatSambaNovaCloud.html).\n",
"This will help you getting started with SambaNovaCloud [chat models](/docs/concepts/chat_models/). For detailed documentation of all ChatSambaNovaCloud features and configurations head to the [API reference](https://docs.sambanova.ai/cloud/docs/get-started/overview).\n",
"\n",
"**[SambaNova](https://sambanova.ai/)'s** [SambaNova Cloud](https://cloud.sambanova.ai/) is a platform for performing inference with open-source models\n",
"\n",
@@ -28,7 +28,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatSambaNovaCloud](https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.ChatSambaNovaCloud.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_sambanova?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_sambanova?style=flat-square&label=%20) |\n",
"| [ChatSambaNovaCloud](https://docs.sambanova.ai/cloud/docs/get-started/overview) | [langchain-sambanova](https://python.langchain.com/docs/integrations/providers/sambanova/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_sambanova?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_sambanova?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
@@ -81,8 +81,8 @@
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
@@ -545,7 +545,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatSambaNovaCloud features and configurations head to the API reference: https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.ChatSambaNovaCloud.html"
"For detailed documentation of all SambaNovaCloud features and configurations head to the API reference: https://docs.sambanova.ai/cloud/docs/get-started/overview"
]
}
],

View File

@@ -19,7 +19,7 @@
"source": [
"# ChatSambaStudio\n",
"\n",
"This will help you getting started with SambaStudio [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatStudio features and configurations head to the [API reference](https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.chat_models.sambanova.ChatSambaStudio.html).\n",
"This will help you getting started with SambaStudio [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatStudio features and configurations head to the [API reference](https://docs.sambanova.ai/sambastudio/latest/index.html).\n",
"\n",
"**[SambaNova](https://sambanova.ai/)'s** [SambaStudio](https://docs.sambanova.ai/sambastudio/latest/sambastudio-intro.html) SambaStudio is a rich, GUI-based platform that provides the functionality to train, deploy, and manage models in SambaNova [DataScale](https://sambanova.ai/products/datascale) systems.\n",
"\n",
@@ -28,7 +28,7 @@
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatSambaStudio](https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.chat_models.sambanova.ChatSambaStudio.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_sambanova?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_sambanova?style=flat-square&label=%20) |\n",
"| [ChatSambaStudio](https://docs.sambanova.ai/sambastudio/latest/index.html) | [langchain-sambanova](https://python.langchain.com/docs/integrations/providers/sambanova/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_sambanova?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_sambanova?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
@@ -84,8 +84,8 @@
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
@@ -483,7 +483,7 @@
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatSambaStudio features and configurations head to the API reference: https://python.langchain.com/api_reference/sambanova/chat_models/langchain_sambanova.sambanova.chat_models.ChatSambaStudio.html"
"For detailed documentation of all SambaStudio features and configurations head to the API reference: https://docs.sambanova.ai/sambastudio/latest/api-ref-landing.html"
]
}
],

View File

@@ -26,22 +26,9 @@
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Install the package\n",
"%pip install --upgrade --quiet dashscope"
@@ -49,8 +36,12 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-05T01:11:20.457141Z",
"start_time": "2025-03-05T01:11:18.810160Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
@@ -66,8 +57,12 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-05T01:11:24.270318Z",
"start_time": "2025-03-05T01:11:24.268064Z"
},
"collapsed": false,
"jupyter": {
"outputs_hidden": false
@@ -266,6 +261,52 @@
"ai_message"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Partial Mode\n",
"Enable the large model to continue generating content from the initial text you provide."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-05T01:31:29.155824Z",
"start_time": "2025-03-05T01:31:27.239667Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' has cast off its heavy cloak of snow, donning instead a vibrant garment of fresh greens and floral hues; it is as if the world has woken from a long slumber, stretching and reveling in the warm caress of the sun. Everywhere I look, there is a symphony of life: birdsong fills the air, bees dance from flower to flower, and a gentle breeze carries the sweet fragrance of blossoms. It is in this season that my heart finds particular joy, for it whispers promises of renewal and growth, reminding me that even after the coldest winters, there will always be a spring to follow.', additional_kwargs={}, response_metadata={'model_name': 'qwen-turbo', 'finish_reason': 'stop', 'request_id': '447283e9-ee31-9d82-8734-af572921cb05', 'token_usage': {'input_tokens': 40, 'output_tokens': 127, 'prompt_tokens_details': {'cached_tokens': 0}, 'total_tokens': 167}}, id='run-6a35a91c-cc12-4afe-b56f-fd26d9035357-0')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_community.chat_models.tongyi import ChatTongyi\n",
"from langchain_core.messages import AIMessage, HumanMessage\n",
"\n",
"messages = [\n",
" HumanMessage(\n",
" content=\"\"\"Please continue the sentence \"Spring has arrived, and the earth\" to express the beauty of spring and the author's joy.\"\"\"\n",
" ),\n",
" AIMessage(\n",
" content=\"Spring has arrived, and the earth\", additional_kwargs={\"partial\": True}\n",
" ),\n",
"]\n",
"chatLLM = ChatTongyi()\n",
"ai_message = chatLLM.invoke(messages)\n",
"ai_message"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -1,362 +1,231 @@
{
"cells": [
{
"cell_type": "raw",
"id": "85e07aae70a15572",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Writer\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "cb4dd00a-8893-4a45-96f7-9a9fc341cd61",
"id": "e815de6298bf07ca",
"metadata": {},
"source": [
"# ChatWriter\n",
"# Chat Writer\n",
"\n",
"This notebook provides a quick overview for getting started with Writer [chat models](/docs/concepts/chat_models).\n",
"This notebook provides a quick overview for getting started with Writer [chat](/docs/concepts/chat_models/).\n",
"\n",
"Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Writer docs](https://dev.writer.com/home).\n",
"\n",
"Writer has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Writer docs](https://dev.writer.com/home/models).\n",
"\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "617a6e98205ab7c8",
"metadata": {},
"source": [
"## Overview\n",
"\n",
"### Integration details\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: |:----------:| :---: | :---: |\n",
"| ChatWriter | langchain-community | ❌ | ❌ | | ❌ | |\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"|:-------------------------------------------------------------------------------------------------------------------------|:-----------------| :---: | :---: |:----------:|:------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|\n",
"| [ChatWriter](https://github.com/writer/langchain-writer/blob/main/langchain_writer/chat_models.py#L308) | [langchain-writer](https://pypi.org/project/langchain-writer/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-writer?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-writer?style=flat-square&label=%20) |\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | Structured output | JSON mode | Image input | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | Logprobs |\n",
"| :---: |:-----------------:| :---: | :---: | :---: | :---: | :---: | :---: |:--------------------------------:|:--------:|\n",
"| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access Writer models you'll need to create a Writer account, get an API key, and install the `writer-sdk` and `langchain-community` packages.\n",
"\n",
"| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |"
]
},
{
"cell_type": "markdown",
"id": "3fd9903e685808d9",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"Head to [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) to sign up to OpenAI and generate an API key. Once you've done this set the WRITER_API_KEY environment variable:"
"Sign up for [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) and follow this [Quickstart](https://dev.writer.com/api-guides/quickstart) to obtain an API key. Then, set the WRITER_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "e817fe2e-4f1d-4533-b19e-2400b1cf6ce8",
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:26.800627Z",
"start_time": "2024-11-14T09:27:59.652281Z"
"jupyter": {
"is_executing": true
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.environ.get(\"WRITER_API_KEY\"):\n",
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key:\")"
]
"if not os.getenv(\"WRITER_API_KEY\"):\n",
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key: \")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "c59722a9-6dbb-45f7-ae59-5be50ca5733d",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Writer integration lives in the `langchain-community` package:"
"`ChatWriter` is available from the `langchain-writer` package. Install it with:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "2113471c-75d7-45df-b784-d78da4ef7aba",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:32.415354Z",
"start_time": "2024-11-14T09:46:26.826112Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\r\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m24.2\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.3.1\u001b[0m\r\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\r\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"source": [
"%pip install -qU langchain-community writer-sdk"
]
"%pip install -qU langchain-writer"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "1098bc9d-ce83-462b-8c19-f85bf3a159dc",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"### Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
"Now we can instantiate our model object in order to generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "522686de",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:33.504711Z",
"start_time": "2024-11-14T09:46:32.574505Z"
},
"tags": []
},
"outputs": [],
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"source": [
"from langchain_community.chat_models.writer import ChatWriter\n",
"from langchain_writer import ChatWriter\n",
"\n",
"llm = ChatWriter(\n",
" model=\"palmyra-x-004\",\n",
" temperature=0.7,\n",
" max_tokens=1000,\n",
" # other params...\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
")"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "6511982a-734a-4193-a47d-254f8dcaff5e",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
"## Usage\n",
"\n",
"To use the model, you pass in a list of messages and call the `invoke` method:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ce16ad78-8e6f-48cd-954e-98be75eb5836",
"id": "62e0dbc3",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:38.856174Z",
"start_time": "2024-11-14T09:46:33.520062Z"
},
"tags": []
},
"outputs": [],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that writes poems about the Python programming language.\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"Write a poem about Python.\"),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2cd224b8-4499-41fb-a604-d53a7ff17b2e",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:38.866651Z",
"start_time": "2024-11-14T09:46:38.863817Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In realms of code, where logic weaves and flows,\n",
"A language rises, Python by its name,\n",
"With syntax clear, where elegance it shows,\n",
"A serpent, wise, that time and space can tame.\n",
"\n",
"Born from the mind of Guido, pure and bright,\n",
"Its beauty lies in simplicity and grace,\n",
"A tool of power, yet gentle in its might,\n",
"In every programmer's heart, a cherished place.\n",
"\n",
"It dances through the data, vast and deep,\n",
"With libraries that span the digital realm,\n",
"From machine learning's secrets to keep,\n",
"To web development, it wields the helm.\n",
"\n",
"In the hands of the novice and the sage,\n",
"Python spins the threads of digital dreams,\n",
"A language that can turn the age,\n",
"With a gentle learning curve, its appeal gleams.\n",
"\n",
"It's more than code, a community it builds,\n",
"Where knowledge freely flows, and all are heard,\n",
"In Python's world, the future unfolds,\n",
"A language of the people, for the world.\n",
"\n",
"So here's to Python, in its gentle might,\n",
"A master of the modern coding art,\n",
"May it continue to light our path each night,\n",
"In the vast, evolving world of code, its heart.\n"
]
}
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
],
"source": [
"print(ai_msg.content)"
]
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "35b3a5b3dabef65",
"id": "5cf7293d",
"metadata": {},
"source": [
"## Streaming"
"Then, you can access the content of the message:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2725770182bf96dc",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:38.914883Z",
"start_time": "2024-11-14T09:46:38.912564Z"
}
},
"outputs": [],
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"source": [
"ai_stream = llm.stream(messages)"
"print(ai_msg.content)"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "4391289ce0a80e19",
"metadata": {},
"source": [
"## Streaming\n",
"\n",
"You can also stream the response. First, create a stream:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "a48410d9488162e3",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:43.226449Z",
"start_time": "2024-11-14T09:46:38.955512Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In realms of code where logic weaves,\n",
"A language rises, Python, it breezes,\n",
"With syntax clear and simple to read,\n",
"Through its elegance, our spirits are fed.\n",
"\n",
"Like rivers flowing, smooth and serene,\n",
"Its structure harmonious, a coder's dream,\n",
"Indentations guide the flow of control,\n",
"In Python's world, confusion takes no toll.\n",
"\n",
"A vast library, a treasure trove so bright,\n",
"For web and data, it offers its might,\n",
"With modules and packages, a rich array,\n",
"Python empowers us to code in play.\n",
"\n",
"From AI to scripts, in flexibility it thrives,\n",
"A language of the future, as many now derive,\n",
"Its community, a beacon of support and cheer,\n",
"With Python, the possibilities are vast, far and near.\n",
"\n",
"So here's to Python, in its gentle grace,\n",
"A tool that enhances, a language that embraces,\n",
"The art of coding, with a fluent, flowing pen,\n",
"In the Python world, we code, and we begin."
]
}
"id": "4a0f2112b3a4c79e",
"metadata": {},
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming. Sing a song about it\"),\n",
"]\n",
"ai_stream = llm.stream(messages)\n",
"ai_stream"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "23cc74b6",
"metadata": {},
"source": [
"Then, iterate over the stream to get the chunks:"
]
},
{
"cell_type": "code",
"id": "8c4b7b9b9308c757",
"metadata": {},
"source": [
"for chunk in ai_stream:\n",
" print(chunk.content, end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "778f912a-66ea-4a5d-b3de-6c7db4baba26",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "fbb043e6",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:50.721645Z",
"start_time": "2024-11-14T09:46:43.234590Z"
},
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessageChunk(content='In the realm of code, where logic weaves and flows, \\nA language rises, like a phoenix from the code\\'s throes. \\nJava, the name, a cup of coffee\\'s steam, \\nBrewed in the minds, where digital dreams gleam.\\n\\nWith syntax clear, like morning\\'s misty hue, \\nIn classes and objects, it spins a tale so true. \\nA platform agnostic, with a byte to spare, \\nAcross the devices, it journeys everywhere.\\n\\nInheritance and polymorphism, its power\\'s core, \\nLike ancient runes, in every line they bore. \\nEncapsulation, a shield, with data it does hide, \\nIn the vast jungle of code, it stands as a guide.\\n\\nFrom applets small, to vast, server-side apps, \\nIts threads run swift, through the computing traps. \\nA language of the people, by the people, for the peoples use, \\nBuilt on the principle, \"write once, run anywhere, with no excuse.\"\\n\\nIn the heart of Android, it beats, a steady drum, \\nCrafting experiences, in every smartphone\\'s hum. \\nIn the cloud, in the enterprise, its presence is vast, \\nA cornerstone of computing, built to last.\\n\\nOh Java, thy elegance, thy robust design, \\nA language that stands, in any computing line. \\nWith every update, with every new release, \\nThy community grows, with a vibrant, diverse peace.\\n\\nSo here\\'s to Java, the versatile, the grand, \\nA language that shapes the digital land. \\nMay it continue to evolve, to grow, to inspire, \\nIn the endless quest of turning thoughts into digital fire.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 345, 'prompt_tokens': 33, 'total_tokens': 378, 'completion_tokens_details': None, 'prompt_token_details': None}, 'model_name': 'palmyra-x-004', 'system_fingerprint': 'v1', 'finish_reason': 'stop'}, id='run-a5b4be59-0eb0-41bd-80f7-72477861b0bd-0')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that writes poems about the {input_language} programming language.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"Java\",\n",
" \"input\": \"Write a poem about Java.\",\n",
" }\n",
")"
]
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "0b1b52a5-b58d-40c9-bcdd-88eb8fb351e2",
"id": "e632bf7d0873f933",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"Writer supports [tool calling](https://dev.writer.com/api-guides/tool-calling), which lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool.\n",
"Writer models like Palmyra X 004 support [tool calling](https://dev.writer.com/api-guides/tool-calling), which lets you describe tools and their arguments. The model will return a JSON object with a tool to invoke and the inputs to that tool.\n",
"\n",
"### ChatWriter.bind_tools()\n",
"### Binding tools\n",
"\n",
"With `ChatWriter.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to tool schemas, which looks like:\n",
"With `ChatWriter.bind_tools`, you can easily pass in Pydantic classes, dictionary schemas, LangChain tools, or even functions as tools to the model. Under the hood, these are converted to tool schemas, which look like this:\n",
"```\n",
"{\n",
" \"name\": \"...\",\n",
@@ -364,20 +233,15 @@
" \"parameters\": {...} # JSONSchema\n",
"}\n",
"```\n",
"and passed in every model invocation."
"These are passed in every model invocation.\n",
"\n",
"For example, to use a tool that gets the weather in a given location, you can define a Pydantic class and pass it to `ChatWriter.bind_tools`:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "b7ea7690-ec7a-4337-b392-e87d1f39a6ec",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:50.891937Z",
"start_time": "2024-11-14T09:46:50.733463Z"
}
},
"outputs": [],
"id": "47e2f0faceca533",
"metadata": {},
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
@@ -388,86 +252,175 @@
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([GetWeather])"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1d1ab955-6a68-42f8-bb5d-86eb1111478a",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:51.725422Z",
"start_time": "2024-11-14T09:46:50.904699Z"
}
},
"llm.bind_tools([GetWeather])"
],
"outputs": [],
"source": [
"ai_msg = llm_with_tools.invoke(\n",
" \"what is the weather like in New York City\",\n",
")"
]
"execution_count": null
},
{
"cell_type": "markdown",
"id": "768d1ae4-4b1a-48eb-a329-c8d5051067a3",
"id": "68e22d3b",
"metadata": {},
"source": [
"### AIMessage.tool_calls\n",
"Notice that the AIMessage has a `tool_calls` attribute. This contains in a standardized ToolCall format that is model-provider agnostic."
"Then, you can invoke the model with the tool:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "166cb7ce-831d-4a7c-9721-abc107f11084",
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-14T09:46:51.744202Z",
"start_time": "2024-11-14T09:46:51.738431Z"
}
},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': 'New York City, NY'},\n",
" 'id': 'chatcmpl-tool-fe70912c800d40fc8700d604d4823001',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
"id": "765527dd533ec967",
"metadata": {},
"source": [
"ai_msg = llm.invoke(\n",
" \"what is the weather like in New York City\",\n",
")\n",
"ai_msg"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "57544bdf",
"metadata": {},
"source": [
"Finally, you can access the tool calls and proceed to execute your functions:"
]
},
{
"cell_type": "code",
"id": "f361c4769e772fe",
"metadata": {},
"source": [
"print(ai_msg.tool_calls)"
]
],
"outputs": [],
"execution_count": null
},
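{
"cell_type": "markdown",
"id": "writer-tool-dispatch-note",
"metadata": {},
"source": [
"As a minimal sketch of that last step, you can dispatch each tool call to a local function (`get_weather` here is a hypothetical stand-in for a real implementation of the `GetWeather` schema):"
]
},
{
"cell_type": "code",
"id": "writer-tool-dispatch-example",
"metadata": {},
"source": [
"def get_weather(location: str) -> str:\n",
"    # hypothetical stand-in for a real weather lookup\n",
"    return f\"It is sunny in {location}.\"\n",
"\n",
"\n",
"for tool_call in ai_msg.tool_calls:\n",
"    if tool_call[\"name\"] == \"GetWeather\":\n",
"        print(get_weather(**tool_call[\"args\"]))"
],
"outputs": [],
"execution_count": null
},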
{
"cell_type": "markdown",
"id": "e082c9ac-c7c7-4aff-a8ec-8e220262a59c",
"id": "3baf53021834d2ff",
"metadata": {},
"source": [
"For more on binding tools and tool call outputs, head to the [tool calling](/docs/how_to/function_calling) docs."
"### A note on tool binding\n",
"\n",
"The `ChatWriter.bind_tools()` method does not create new instance with bound tools, but stores the received `tools` and `tool_choice` in the initial class instance attributes to pass them as parameters during the Palmyra LLM call while using `ChatWriter` invocation. This approach allows the support of different tool types, e.g. `function` and `graph`. `Graph` is one of the remotely called Writer Palmyra tools. For further information visit our [docs](https://dev.writer.com/api-guides/knowledge-graph#knowledge-graph). \n",
"\n",
"For more information about tool usage in LangChain, visit the [LangChain tool calling documentation](https://python.langchain.com/docs/concepts/tool_calling/)."
]
},
{
"cell_type": "markdown",
"id": "a796d728-971b-408b-88d5-440015bbb941",
"id": "a4674b1b82ce9d1f",
"metadata": {},
"source": [
"## Batching\n",
"\n",
"You can also batch requests and set the `max_concurrency`:"
]
},
{
"cell_type": "code",
"id": "c8a217f6190747fe",
"metadata": {},
"source": [
"ai_batch = llm.batch(\n",
" [\n",
" \"How to cook pancakes?\",\n",
" \"How to compose poem?\",\n",
" \"How to run faster?\",\n",
" ],\n",
" config={\"max_concurrency\": 3},\n",
")\n",
"ai_batch"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "2eb81e1d",
"metadata": {},
"source": [
"Then, iterate over the batch to get the results:"
]
},
{
"cell_type": "code",
"id": "b6a228d448f3df23",
"metadata": {},
"source": [
"for batch in ai_batch:\n",
" print(batch.content)\n",
" print(\"-\" * 100)"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "58a9ab241fe09a71",
"metadata": {},
"source": [
"## Asynchronous usage\n",
"\n",
"All features above (invocation, streaming, batching, tools calling) also support asynchronous usage."
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Prompt templates\n",
"\n",
"[Prompt templates](https://python.langchain.com/docs/concepts/prompt_templates/) help to translate user input and parameters into instructions for a language model. You can use `ChatWriter` with a prompt templates like so:\n"
]
},
{
"cell_type": "code",
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"For detailed documentation of all ChatWriter features and configurations head to the [API reference](https://python.langchain.com/api_reference/writer/chat_models/langchain_writer.chat_models.ChatWriter.html#langchain_writer.chat_models.ChatWriter).\n",
"\n",
"For detailed documentation of all Writer features, head to our [API reference](https://dev.writer.com/api-guides/api-reference/completion-api/chat-completion)."
"## Additional resources\n",
"You can find information about Writer's models (including costs, context windows, and supported input types) and tools in the [Writer docs](https://dev.writer.com/home)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -481,7 +434,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -0,0 +1,245 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Xinference\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "01ca90da",
"metadata": {},
"source": [
"# ChatXinference\n",
"\n",
"[Xinference](https://github.com/xorbitsai/inference) is a powerful and versatile library designed to serve LLMs, \n",
"speech recognition models, and multimodal models, even on your laptop. It supports a variety of models compatible with GGML, such as chatglm, baichuan, whisper, vicuna, orca, and many others.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support] | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| ChatXinference| langchain-xinference | ✅ | ❌ | ✅ | ✅ | ✅ |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: |:----------------------------------------------------:| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"Install `Xinference` through PyPI:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet \"xinference[all]\""
]
},
{
"cell_type": "markdown",
"id": "7fb27b941602401d91542211134fc71a",
"metadata": {},
"source": [
"### Deploy Xinference Locally or in a Distributed Cluster.\n",
"\n",
"For local deployment, run `xinference`. \n",
"\n",
"To deploy Xinference in a cluster, first start an Xinference supervisor using the `xinference-supervisor`. You can also use the option -p to specify the port and -H to specify the host. The default port is 8080 and the default host is 0.0.0.0.\n",
"\n",
"Then, start the Xinference workers using `xinference-worker` on each server you want to run them on. \n",
"\n",
"You can consult the README file from [Xinference](https://github.com/xorbitsai/inference) for more information.\n",
"### Wrapper\n",
"\n",
"To use Xinference with LangChain, you need to first launch a model. You can use command line interface (CLI) to do so:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "acae54e37e7d407bbb7b55eff062a284",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model uid: 7167b2b0-2a04-11ee-83f0-d29396a3f064\n"
]
}
],
"source": [
"%xinference launch -n vicuna-v1.3 -f ggmlv3 -q q4_0"
]
},
{
"cell_type": "markdown",
"id": "9a63283cbaf04dbcab1f6479b197f3a8",
"metadata": {},
"source": [
"A model UID is returned for you to use. Now you can use Xinference with LangChain:"
]
},
{
"cell_type": "markdown",
"id": "8dd0d8092fe74a7c96281538738b07e2",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"The LangChain Xinference integration lives in the `langchain-xinference` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "72eea5119410473aa328ad9291626812",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-xinference"
]
},
{
"cell_type": "markdown",
"id": "8edb47106e1a46a883d545849b8ab81b",
"metadata": {},
"source": [
"Make sure you're using the latest Xinference version for structured outputs."
]
},
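{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, a minimal structured-output sketch (assuming `ChatXinference` supports LangChain's standard `with_structured_output` method, per the feature table above; the Pydantic schema here is hypothetical):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"from langchain_xinference.chat_models import ChatXinference\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
"    \"\"\"A joke with a setup and punchline.\"\"\"\n",
"\n",
"    setup: str = Field(description=\"The setup of the joke\")\n",
"    punchline: str = Field(description=\"The punchline of the joke\")\n",
"\n",
"\n",
"llm = ChatXinference(\n",
"    server_url=\"your_server_url\", model_uid=\"7167b2b0-2a04-11ee-83f0-d29396a3f064\"\n",
")\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"structured_llm.invoke(\"Tell me a joke about programming.\")"
]
},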
{
"cell_type": "markdown",
"id": "55935996",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "10185d26023b46108eb7d9f57d49d2b3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_xinference.chat_models import ChatXinference\n",
"\n",
"llm = ChatXinference(\n",
" server_url=\"your_server_url\", model_uid=\"7167b2b0-2a04-11ee-83f0-d29396a3f064\"\n",
")\n",
"\n",
"llm.invoke(\n",
" \"Q: where can we visit in the capital of France?\",\n",
" config={\"max_tokens\": 1024},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8763a12b2bbd4a93a75aff182afb95dc",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "50f6c0e4",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_xinference.chat_models import ChatXinference\n",
"\n",
"llm = ChatXinference(\n",
" server_url=\"your_server_url\", model_uid=\"7167b2b0-2a04-11ee-83f0-d29396a3f064\"\n",
")\n",
"\n",
"system_message = \"You are a helpful assistant that translates English to French. Translate the user sentence.\"\n",
"human_message = \"I love programming.\"\n",
"\n",
"llm.invoke([HumanMessage(content=human_message), SystemMessage(content=system_message)])"
]
},
{
"cell_type": "markdown",
"id": "7623eae2785240b9bd12b16a66d81610",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "317d05c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain_xinference.chat_models import ChatXinference\n",
"\n",
"prompt = PromptTemplate(\n",
" input=[\"country\"], template=\"Q: where can we visit in the capital of {country}? A:\"\n",
")\n",
"\n",
"llm = ChatXinference(\n",
" server_url=\"your_server_url\", model_uid=\"7167b2b0-2a04-11ee-83f0-d29396a3f064\"\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(input={\"country\": \"France\"})\n",
"chain.stream(input={\"country\": \"France\"})"
]
},
{
"cell_type": "markdown",
"id": "66f7fb26",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatXinference features and configurations head to the API reference: https://github.com/TheSongg/langchain-xinference"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,265 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "wkUAAcGZNSJ3"
},
"source": [
"# AgentQLLoader\n",
"\n",
"[AgentQL](https://www.agentql.com/)'s document loader provides structured data extraction from any web page using an [AgentQL query](https://docs.agentql.com/agentql-query). AgentQL can be used across multiple languages and web pages without breaking over time and change.\n",
"\n",
"## Overview\n",
"\n",
"`AgentQLLoader` requires the following two parameters:\n",
"- `url`: The URL of the web page you want to extract data from.\n",
"- `query`: The AgentQL query to execute. Learn more about [how to write an AgentQL query in the docs](https://docs.agentql.com/agentql-query) or test one out in the [AgentQL Playground](https://dev.agentql.com/playground).\n",
"\n",
"Setting the following parameters are optional:\n",
"- `api_key`: Your AgentQL API key from [dev.agentql.com](https://dev.agentql.com). **`Optional`.**\n",
"- `timeout`: The number of seconds to wait for a request before timing out. **Defaults to `900`.**\n",
"- `is_stealth_mode_enabled`: Whether to enable experimental anti-bot evasion strategies. This feature may not work for all websites at all times. Data extraction may take longer to complete with this mode enabled. **Defaults to `False`.**\n",
"- `wait_for`: The number of seconds to wait for the page to load before extracting data. **Defaults to `0`.**\n",
"- `is_scroll_to_bottom_enabled`: Whether to scroll to bottom of the page before extracting data. **Defaults to `False`.**\n",
"- `mode`: `\"standard\"` uses deep data analysis, while `\"fast\"` trades some depth of analysis for speed and is adequate for most usecases. [Learn more about the modes in this guide.](https://docs.agentql.com/accuracy/standard-mode) **Defaults to `\"fast\"`.**\n",
"- `is_screenshot_enabled`: Whether to take a screenshot before extracting data. Returned in 'metadata' as a Base64 string. **Defaults to `False`.**\n",
"\n",
"AgentQLLoader is implemented with AgentQL's [REST API](https://docs.agentql.com/rest-api/api-reference)\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| AgentQLLoader| langchain-agentql | ✅ | ❌ | ❌ |\n",
"\n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: |\n",
"| AgentQLLoader | ✅ | ❌ |"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CaKa2QrnwPXq"
},
"source": [
"## Setup\n",
"\n",
"To use the AgentQL Document Loader, you will need to configure the `AGENTQL_API_KEY` environment variable, or use the `api_key` parameter. You can acquire an API key from our [Dev Portal](https://dev.agentql.com)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mZNJvUQBNSJ5"
},
"source": [
"### Installation\n",
"\n",
"Install **langchain-agentql**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "IblRoJJDNSJ5"
},
"outputs": [],
"source": [
"%pip install -qU langchain_agentql"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SNsUT60YvfCm"
},
"source": [
"### Set Credentials"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "2D1EN7Egvk1c"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"AGENTQL_API_KEY\"] = \"YOUR_AGENTQL_API_KEY\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "D4hnJV_6NSJ5"
},
"source": [
"## Initialization\n",
"\n",
"Next instantiate your model object:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "oMJdxL_KNSJ5"
},
"outputs": [],
"source": [
"from langchain_agentql.document_loaders import AgentQLLoader\n",
"\n",
"loader = AgentQLLoader(\n",
" url=\"https://www.agentql.com/blog\",\n",
" query=\"\"\"\n",
" {\n",
" posts[] {\n",
" title\n",
" url\n",
" date\n",
" author\n",
" }\n",
" }\n",
" \"\"\",\n",
" is_scroll_to_bottom_enabled=True,\n",
")"
]
},
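  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The optional parameters listed in the overview can be combined on the same loader. Here is a minimal sketch; the parameter values are illustrative, not recommendations:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative only: a loader with several optional parameters set\n",
    "loader_with_options = AgentQLLoader(\n",
    "    url=\"https://www.agentql.com/blog\",\n",
    "    query=\"\"\"\n",
    "    {\n",
    "        posts[] {\n",
    "            title\n",
    "            url\n",
    "        }\n",
    "    }\n",
    "    \"\"\",\n",
    "    timeout=900,\n",
    "    mode=\"standard\",  # deeper analysis; \"fast\" is the default\n",
    "    wait_for=5,  # wait 5 seconds for the page to load before extracting\n",
    "    is_screenshot_enabled=True,  # screenshot returned in metadata as a Base64 string\n",
    ")"
   ]
  },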
{
"cell_type": "markdown",
"metadata": {
"id": "SRxIOx90NSJ5"
},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "bNnnCZ1oNSJ5",
"outputId": "d0eb8cb4-9742-4f0c-80f1-0509a3af1808"
},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'request_id': 'bdb9dbe7-8a7f-427f-bc16-839ccc02cae6', 'generated_query': None, 'screenshot': None}, page_content=\"{'posts': [{'title': 'Launch Week Recap—make the web AI-ready', 'url': 'https://www.agentql.com/blog/2024-launch-week-recap', 'date': 'Nov 18, 2024', 'author': 'Rachel-Lee Nabors'}, {'title': 'Accurate data extraction from PDFs and images with AgentQL', 'url': 'https://www.agentql.com/blog/accurate-data-extraction-pdfs-images', 'date': 'Feb 1, 2025', 'author': 'Rachel-Lee Nabors'}, {'title': 'Introducing Scheduled Scraping Workflows', 'url': 'https://www.agentql.com/blog/scheduling', 'date': 'Dec 2, 2024', 'author': 'Rachel-Lee Nabors'}, {'title': 'Updates to Our Pricing Model', 'url': 'https://www.agentql.com/blog/2024-pricing-update', 'date': 'Nov 19, 2024', 'author': 'Rachel-Lee Nabors'}, {'title': 'Get data from any page: AgentQLs REST API Endpoint—Launch week day 5', 'url': 'https://www.agentql.com/blog/data-rest-api', 'date': 'Nov 15, 2024', 'author': 'Rachel-Lee Nabors'}]}\")"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "wtPMNh72NSJ5",
"outputId": "59d529a4-3c22-445c-f5cf-dc7b24168906"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'request_id': 'bdb9dbe7-8a7f-427f-bc16-839ccc02cae6', 'generated_query': None, 'screenshot': None}\n"
]
}
],
"source": [
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "7RMuEwl4NSJ5"
},
"source": [
"## Lazy Load\n",
"\n",
"`AgentQLLoader` currently only loads one `Document` at a time. Therefore, `load()` and `lazy_load()` behave the same:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "FIYddZBONSJ5",
"outputId": "c39a7a6d-bc52-4ef9-b36f-e1d138590b79"
},
"outputs": [
{
"data": {
"text/plain": [
"[Document(metadata={'request_id': '06273abd-b2ef-4e15-b0ec-901cba7b4825', 'generated_query': None, 'screenshot': None}, page_content=\"{'posts': [{'title': 'Launch Week Recap—make the web AI-ready', 'url': 'https://www.agentql.com/blog/2024-launch-week-recap', 'date': 'Nov 18, 2024', 'author': 'Rachel-Lee Nabors'}, {'title': 'Accurate data extraction from PDFs and images with AgentQL', 'url': 'https://www.agentql.com/blog/accurate-data-extraction-pdfs-images', 'date': 'Feb 1, 2025', 'author': 'Rachel-Lee Nabors'}, {'title': 'Introducing Scheduled Scraping Workflows', 'url': 'https://www.agentql.com/blog/scheduling', 'date': 'Dec 2, 2024', 'author': 'Rachel-Lee Nabors'}, {'title': 'Updates to Our Pricing Model', 'url': 'https://www.agentql.com/blog/2024-pricing-update', 'date': 'Nov 19, 2024', 'author': 'Rachel-Lee Nabors'}, {'title': 'Get data from any page: AgentQLs REST API Endpoint—Launch week day 5', 'url': 'https://www.agentql.com/blog/data-rest-api', 'date': 'Nov 15, 2024', 'author': 'Rachel-Lee Nabors'}]}\")]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pages = [doc for doc in loader.lazy_load()]\n",
"pages"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For more information on how to use this integration, please refer to the [git repo](https://github.com/tinyfish-io/agentql-integrations/tree/main/langchain) or the [langchain integration documentation](https://docs.agentql.com/integrations/langchain)"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 0
}


@@ -2,7 +2,9 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"id": "xwiDq5fOuoRn"
},
"source": [
"# Apify Dataset\n",
"\n",
@@ -20,33 +22,63 @@
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "qRW2-mokuoRp",
"tags": []
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet apify-client"
"%pip install --upgrade --quiet langchain langchain-apify langchain-openai"
]
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"id": "8jRVq16LuoRq"
},
"source": [
"First, import `ApifyDatasetLoader` into your source code:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"execution_count": 2,
"metadata": {
"id": "umXQHqIJuoRq"
},
"outputs": [],
"source": [
"from langchain_community.document_loaders import ApifyDatasetLoader\n",
"from langchain_apify import ApifyDatasetLoader\n",
"from langchain_core.documents import Document"
]
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"id": "NjGwKy59vz1X"
},
"source": [
"Find your [Apify API token](https://console.apify.com/account/integrations) and [OpenAI API key](https://platform.openai.com/account/api-keys) and initialize these into environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "AvzNtyCxwDdr"
},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"APIFY_API_TOKEN\"] = \"your-apify-api-token\"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-openai-api-key\""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "d1O-KL48uoRr"
},
"source": [
"Then provide a function that maps Apify dataset record fields to LangChain `Document` format.\n",
"\n",
@@ -64,8 +96,10 @@
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"execution_count": 8,
"metadata": {
"id": "m1SpA7XZuoRr"
},
"outputs": [],
"source": [
"loader = ApifyDatasetLoader(\n",
@@ -78,8 +112,10 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"execution_count": 9,
"metadata": {
"id": "0hWX7ABsuoRs"
},
"outputs": [],
"source": [
"data = loader.load()"
@@ -87,7 +123,9 @@
},
{
"cell_type": "markdown",
"metadata": {},
"metadata": {
"id": "EJCVFVKNuoRs"
},
"source": [
"## An example with question answering\n",
"\n",
@@ -96,21 +134,26 @@
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"execution_count": 14,
"metadata": {
"id": "sNisJKzZuoRt"
},
"outputs": [],
"source": [
"from langchain.indexes import VectorstoreIndexCreator\n",
"from langchain_community.utilities import ApifyWrapper\n",
"from langchain_apify import ApifyWrapper\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAI\n",
"from langchain_core.vectorstores import InMemoryVectorStore\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"execution_count": 15,
"metadata": {
"id": "qcfmnbdDuoRu"
},
"outputs": [],
"source": [
"loader = ApifyDatasetLoader(\n",
@@ -123,27 +166,47 @@
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"execution_count": 16,
"metadata": {
"id": "8b0xzKJxuoRv"
},
"outputs": [],
"source": [
"index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings()).from_loaders([loader])"
"index = VectorstoreIndexCreator(\n",
" vectorstore_cls=InMemoryVectorStore, embedding=OpenAIEmbeddings()\n",
").from_loaders([loader])"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"execution_count": 17,
"metadata": {
"id": "7zPXGsVFwUGA"
},
"outputs": [],
"source": [
"llm = ChatOpenAI(model=\"gpt-4o-mini\")"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"id": "ecWrdM4guoRv"
},
"outputs": [],
"source": [
"query = \"What is Apify?\"\n",
"result = index.query_with_sources(query, llm=OpenAI())"
"result = index.query_with_sources(query, llm=llm)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"execution_count": null,
"metadata": {
"id": "QH8r44e9uoRv",
"outputId": "361fe050-f75d-4d5a-c327-5e7bd190fba5"
},
"outputs": [
{
"name": "stdout",
@@ -162,6 +225,9 @@
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
@@ -181,5 +247,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 4
}
"nbformat_minor": 0
}


@@ -521,9 +521,9 @@
"source": [
"## API reference\n",
"\n",
"- [LangChain Docling integration GitHub](https://github.com/DS4SD/docling-langchain)\n",
"- [Docling GitHub](https://github.com/DS4SD/docling)\n",
"- [Docling docs](https://ds4sd.github.io/docling/)"
"- [LangChain Docling integration GitHub](https://github.com/docling-project/docling-langchain)\n",
"- [Docling GitHub](https://github.com/docling-project/docling)\n",
"- [Docling docs](https://docling-project.github.io/docling//)"
]
},
{


@@ -0,0 +1,192 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "db23d51760310705",
"metadata": {},
"source": [
"# Writer PDF Parser\n",
"\n",
"This notebook provides a quick overview for getting started with the Writer `PDFParser` [document loader](/docs/concepts/document_loaders/).\n",
"\n",
"Writer's [PDF Parser](https://dev.writer.com/api-guides/api-reference/tool-api/pdf-parser#parse-pdf) converts PDF documents into other formats like text or Markdown. This is particularly useful when you need to extract and process text content from PDF files for further analysis or integration into your workflow. In `langchain-writer`, we provide usage of Writer's PDF Parser as a LangChain document parser.\n",
"\n",
"## Overview\n",
"\n",
"### Integration details\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"|:-----------------------------------------------------------------------------------------------------------------------------------|:-----------------| :---: | :---: |:----------:|:------------------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------------:|\n",
"| [PDFParser](https://github.com/writer/langchain-writer/blob/main/langchain_writer/pdf_parser.py#L55) | [langchain-writer](https://pypi.org/project/langchain-writer/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-writer?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-writer?style=flat-square&label=%20) |"
]
},
{
"cell_type": "markdown",
"id": "c5f08d23df5dc127",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"The `PDFParser` is available in the `langchain-writer` package:"
]
},
{
"cell_type": "code",
"id": "a8d653f15b7ee32d",
"metadata": {},
"source": [
"%pip install --quiet -U langchain-writer"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "3b9709c26797edf",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"Sign up for [Writer AI Studio](https://app.writer.com/aistudio/signup?utm_campaign=devrel) to generate an API key (you can follow this [Quickstart](https://dev.writer.com/api-guides/quickstart)). Then, set the WRITER_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"id": "2983e19c9d555e58",
"metadata": {},
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"WRITER_API_KEY\"):\n",
" os.environ[\"WRITER_API_KEY\"] = getpass.getpass(\"Enter your Writer API key: \")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "92a22c77f03d43dc",
"metadata": {},
"source": [
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability. If you wish to do so, you can set the `LANGSMITH_TRACING` and `LANGSMITH_API_KEY` environment variables:"
]
},
{
"cell_type": "code",
"id": "98d8422ecee77403",
"metadata": {},
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass()"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "67ab78950a3da8ba",
"metadata": {},
"source": [
"### Instantiation\n",
"\n",
"Next, instantiate an instance of the Writer PDF Parser with the desired output format:"
]
},
{
"cell_type": "code",
"id": "787b3ba8af32533f",
"metadata": {},
"source": [
"from langchain_writer.pdf_parser import PDFParser\n",
"\n",
"parser = PDFParser(format=\"markdown\")"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "d91c6f752fd31cee",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"There are two ways to use the PDF Parser, either synchronously or asynchronously. In either case, the PDF Parser will return a list of `Document` objects, each containing the parsed content of a page from the PDF file.\n",
"\n",
"### Synchronous usage\n",
"\n",
"To invoke the PDF Parser synchronously, pass a `Blob` object to the `parse` method referencing the PDF file you want to parse:"
]
},
{
"cell_type": "code",
"id": "d1a24b81a8a96f09",
"metadata": {},
"source": [
"from langchain_core.documents.base import Blob\n",
"\n",
"file = Blob.from_path(\"../example_data/layout-parser-paper.pdf\")\n",
"\n",
"parsed_pages = parser.parse(blob=file)\n",
"parsed_pages"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "f89c048c7d23807a",
"metadata": {},
"source": [
"### Asynchronous usage\n",
"\n",
"To invoke the PDF Parser asynchronously, pass a `Blob` object to the `aparse` method referencing the PDF file you want to parse:"
]
},
{
"cell_type": "code",
"id": "e2f7fd52b7188c6c",
"metadata": {},
"source": [
"parsed_pages_async = await parser.aparse(blob=file)\n",
"parsed_pages_async"
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "markdown",
"id": "ab25a3bed8437a05",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `PDFParser` features and configurations, head to the [API reference](https://python.langchain.com/api_reference/writer/pdf_parser/langchain_writer.pdf_parser.PDFParser.html#langchain_writer.pdf_parser.PDFParser).\n",
"\n",
"## Additional resources\n",
"You can find information about Writer's models (including costs, context windows, and supported input types) and tools in the [Writer docs](https://dev.writer.com/home).\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -0,0 +1,244 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Dell PowerScale Document Loader\n",
"\n",
"[Dell PowerScale](https://www.dell.com/en-us/shop/powerscale-family/sf/powerscale) is an enterprise scale out storage system that hosts industry leading OneFS filesystem that can be hosted on-prem or deployed in the cloud.\n",
"\n",
"This document loader utilizes unique capabilities from PowerScale that can determine what files that have been modified since an application's last run and only returns modified files for processing. This will eliminate the need to re-process (chunk and embed) files that have not been changed, improving the overall data ingestion workflow.\n",
"\n",
"This loader requires PowerScale's MetadataIQ feature enabled. Additional information can be found on our GitHub Repo: [https://github.com/dell/powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector)\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/document_loaders/web_loaders/__module_name___loader)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PowerScaleDocumentLoader](https://github.com/dell/powerscale-rag-connector/blob/main/src/powerscale_rag_connector/PowerScaleDocumentLoader.py) | [powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector) | ✅ | ❌ | ❌ | \n",
"| [PowerScaleUnstructuredLoader](https://github.com/dell/powerscale-rag-connector/blob/main/src/powerscale_rag_connector/PowerScaleUnstructuredLoader.py) | [powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector) | ✅ | ❌ | ❌ | \n",
"### Loader features\n",
"| Source | Document Lazy Loading | Native Async Support\n",
"| :---: | :---: | :---: | \n",
"| PowerScaleDocumentLoader | ✅ | ✅ | \n",
"| PowerScaleUnstructuredLoader | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"This document loader requires the use of a Dell PowerScale system with MetadataIQ enabled. Additional information can be found on our github page: [https://github.com/dell/powerscale-rag-connector](https://github.com/dell/powerscale-rag-connector)\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The document loader lives in an external pip package and can be installed using standard tooling"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet powerscale-rag-connector"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"Now we can instantiate document loader:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generic Document Loader\n",
"\n",
"Our generic document loader can be used to incrementally load all files from PowerScale in the following manner:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from powerscale_rag_connector import PowerScaleDocumentLoader\n",
"\n",
"loader = PowerScaleDocumentLoader(\n",
" es_host_url=\"http://elasticsearch:9200\",\n",
" es_index_name=\"metadataiq\",\n",
" es_api_key=\"your-api-key\",\n",
" folder_path=\"/ifs/data\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### UnstructuredLoader Loader\n",
"\n",
"Optionally, the `PowerScaleUnstructuredLoader` can be used to locate the changed files _and_ automatically process the files producing elements of the source file. This is done using LangChain's `UnstructuredLoader` class."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from powerscale_rag_connector import PowerScaleUnstructuredLoader\n",
"\n",
"# Or load files with the Unstructured Loader\n",
"loader = PowerScaleUnstructuredLoader(\n",
" es_host_url=\"http://elasticsearch:9200\",\n",
" es_index_name=\"metadataiq\",\n",
" es_api_key=\"your-api-key\",\n",
" folder_path=\"/ifs/data\",\n",
" # 'elements' mode splits the document into more granular chunks\n",
" # Use 'single' mode if you want the entire document as a single chunk\n",
" mode=\"elements\",\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The fields:\n",
" - `es_host_url` is the endpoint to to MetadataIQ Elasticsearch database\n",
" - `es_index_index` is the name of the index where PowerScale writes it file system metadata\n",
" - `es_api_key` is the **encoded** version of your elasticsearch API key\n",
" - `folder_path` is the path on PowerScale to be queried for changes"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load\n",
"\n",
"Internally, all code is asynchronous with PowerScale and MetadataIQ and the load and lazy load methods will return a python generator. We recommend using the lazy load function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='' metadata={'source': '/ifs/pdfs/1994-Graph.Theoretic.Obstacles.to.Perfect.Hashing.TR0257.pdf', 'snapshot': 20834, 'change_types': ['ENTRY_ADDED']}),\n",
"Document(page_content='' metadata={'source': '/ifs/pdfs/New.sendfile-FreeBSD.20.Feb.2015.pdf', 'snapshot': 20920, 'change_types': ['ENTRY_MODIFIED']}),\n",
"Document(page_content='' metadata={'source': '/ifs/pdfs/FAST-Fast.Architecture.Sensitive.Tree.Search.on.Modern.CPUs.and.GPUs-Slides.pdf', 'snapshot': 20924, 'change_types': ['ENTRY_ADDED']})]\n"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"for doc in loader.load():\n",
" print(doc)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Returned Object\n",
"\n",
"Both document loaders will keep track of what files were previously returned to your application. When called again, the document loader will only return new or modified files since your previous run.\n",
"\n",
" - The `metadata` fields in the returned `Document` will return the path on PowerScale that contains the modified file. You will use this path to read the data via NFS (or S3) and process the data in your application (e.g.: create chunks and embedding). \n",
" - The `source` field is the path on PowerScale and not necessarily on your local system (depending on your mount strategy); OneFS expresses the entire storage system as a single tree rooted at `/ifs`.\n",
" - The `change_types` property will inform you on what change occurred since the last one - e.g.: new, modified or delete.\n",
"\n",
"Your RAG application can use the information from `change_types` to add, update or delete entries your chunk and vector store.\n",
"\n",
"When using `PowerScaleUnstructuredLoader` the `page_content` field will be filled with data from the Unstructured Loader"
]
},
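  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, here is a minimal sketch of routing changes to a vector store. The branching is illustrative: `ENTRY_ADDED` and `ENTRY_MODIFIED` are taken from the sample output above; consult the connector docs for the full set of change types, and substitute your own chunking, embedding, and deletion logic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for doc in loader.load():\n",
    "    path = doc.metadata[\"source\"]\n",
    "    changes = doc.metadata[\"change_types\"]\n",
    "    if \"ENTRY_ADDED\" in changes:\n",
    "        # read the file at `path` over NFS or S3, then chunk, embed, and index it\n",
    "        pass\n",
    "    elif \"ENTRY_MODIFIED\" in changes:\n",
    "        # re-process the file and update its existing entries\n",
    "        pass\n",
    "    else:\n",
    "        # handle other change types, such as deletions, by removing entries for `path`\n",
    "        pass"
   ]
  },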
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load\n",
"\n",
"Internally, all code is asynchronous with PowerScale and MetadataIQ and the load and lazy load methods will return a python generator. We recommend using the lazy load function."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for doc in loader.lazy_load():\n",
" print(doc) # do something specific with the document"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same `Document` is returned as the load function with all the same properties mentioned above."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Additional Examples\n",
"\n",
"Additional examples and code can be found on our public github webpage: [https://github.com/dell/powerscale-rag-connector/tree/main/examples](https://github.com/dell/powerscale-rag-connector/tree/main/examples) that provide full working examples. \n",
"\n",
" - [PowerScale LangChain Document Loader](https://github.com/dell/powerscale-rag-connector/blob/main/examples/powerscale_langchain_doc_loader.py) - Working example of our standard document loader\n",
" - [PowerScale LangChain Unstructured Loader](https://github.com/dell/powerscale-rag-connector/blob/main/examples/powerscale_langchain_unstructured_loader.py) - Working example of our standard document loader using unstructured loader for chunking and embedding\n",
" - [PowerScale NVIDIA Retriever Microservice Loader](https://github.com/dell/powerscale-rag-connector/blob/main/examples/powerscale_nvingest_example.py) - Working example of our document loader with NVIDIA NeMo Retriever microservices for chunking and embedding"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all PowerScale Document Loader features and configurations head to the github page: [https://github.com/dell/powerscale-rag-connector/](https://github.com/dell/powerscale-rag-connector/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -0,0 +1,721 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: PyMuPDF4LLM\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# PyMuPDF4LLMLoader\n",
"\n",
"This notebook provides a quick overview for getting started with PyMuPDF4LLM [document loader](https://python.langchain.com/docs/concepts/#document-loaders). For detailed documentation of all PyMuPDF4LLMLoader features and configurations head to the [GitHub repository](https://github.com/lakinduboteju/langchain-pymupdf4llm).\n",
"\n",
"## Overview\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support |\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [PyMuPDF4LLMLoader](https://github.com/lakinduboteju/langchain-pymupdf4llm) | [langchain_pymupdf4llm](https://pypi.org/project/langchain-pymupdf4llm) | ✅ | ❌ | ❌ |\n",
"\n",
"### Loader features\n",
"\n",
"| Source | Document Lazy Loading | Native Async Support | Extract Images | Extract Tables |\n",
"| :---: | :---: | :---: | :---: | :---: |\n",
"| PyMuPDF4LLMLoader | ✅ | ❌ | ✅ | ✅ |\n",
"\n",
"## Setup\n",
"\n",
"To access PyMuPDF4LLM document loader you'll need to install the `langchain-pymupdf4llm` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"No credentials are required to use PyMuPDF4LLMLoader."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated best in-class tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_community** and **langchain-pymupdf4llm**."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain_community langchain-pymupdf4llm"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"Now we can instantiate our model object and load documents:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"from langchain_pymupdf4llm import PyMuPDF4LLMLoader\n",
"\n",
"file_path = \"./example_data/layout-parser-paper.pdf\"\n",
"loader = PyMuPDF4LLMLoader(file_path)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'producer': 'pdfTeX-1.40.21', 'creator': 'LaTeX with hyperref', 'creationdate': '2021-06-22T01:27:10+00:00', 'source': './example_data/layout-parser-paper.pdf', 'file_path': './example_data/layout-parser-paper.pdf', 'total_pages': 16, 'format': 'PDF 1.5', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'moddate': '2021-06-22T01:27:10+00:00', 'trapped': '', 'modDate': 'D:20210622012710Z', 'creationDate': 'D:20210622012710Z', 'page': 0}, page_content='```\\nLayoutParser: A Unified Toolkit for Deep\\n\\n## Learning Based Document Image Analysis\\n\\n```\\n\\nZejiang Shen[1] (<28>), Ruochen Zhang[2], Melissa Dell[3], Benjamin Charles Germain\\nLee[4], Jacob Carlson[3], and Weining Li[5]\\n\\n1 Allen Institute for AI\\n```\\n shannons@allenai.org\\n\\n```\\n2 Brown University\\n```\\n ruochen zhang@brown.edu\\n\\n```\\n3 Harvard University\\n_{melissadell,jacob carlson}@fas.harvard.edu_\\n4 University of Washington\\n```\\n bcgl@cs.washington.edu\\n\\n```\\n5 University of Waterloo\\n```\\n w422li@uwaterloo.ca\\n\\n```\\n\\n**Abstract. Recent advances in document image analysis (DIA) have been**\\nprimarily driven by the application of neural networks. Ideally, research\\noutcomes could be easily deployed in production and extended for further\\ninvestigation. However, various factors like loosely organized codebases\\nand sophisticated model configurations complicate the easy reuse of important innovations by a wide audience. Though there have been on-going\\nefforts to improve reusability and simplify deep learning (DL) model\\ndevelopment in disciplines like natural language processing and computer\\nvision, none of them are optimized for challenges in the domain of DIA.\\nThis represents a major gap in the existing toolkit, as DIA is central to\\nacademic research across a wide range of disciplines in the social sciences\\nand humanities. This paper introduces LayoutParser, an open-source\\nlibrary for streamlining the usage of DL in DIA research and applications. The core LayoutParser library comes with a set of simple and\\nintuitive interfaces for applying and customizing DL models for layout detection, character recognition, and many other document processing tasks.\\nTo promote extensibility, LayoutParser also incorporates a community\\nplatform for sharing both pre-trained models and full document digitization pipelines. We demonstrate that LayoutParser is helpful for both\\nlightweight and large-scale digitization pipelines in real-word use cases.\\n[The library is publicly available at https://layout-parser.github.io.](https://layout-parser.github.io)\\n\\n**Keywords: Document Image Analysis · Deep Learning · Layout Analysis**\\n\\n - Character Recognition · Open Source library · Toolkit.\\n\\n### 1 Introduction\\n\\n\\nDeep Learning(DL)-based approaches are the state-of-the-art for a wide range of\\ndocument image analysis (DIA) tasks including document image classification [11,\\n\\n')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'producer': 'pdfTeX-1.40.21',\n",
" 'creator': 'LaTeX with hyperref',\n",
" 'creationdate': '2021-06-22T01:27:10+00:00',\n",
" 'source': './example_data/layout-parser-paper.pdf',\n",
" 'file_path': './example_data/layout-parser-paper.pdf',\n",
" 'total_pages': 16,\n",
" 'format': 'PDF 1.5',\n",
" 'title': '',\n",
" 'author': '',\n",
" 'subject': '',\n",
" 'keywords': '',\n",
" 'moddate': '2021-06-22T01:27:10+00:00',\n",
" 'trapped': '',\n",
" 'modDate': 'D:20210622012710Z',\n",
" 'creationDate': 'D:20210622012710Z',\n",
" 'page': 0}\n"
]
}
],
"source": [
"import pprint\n",
"\n",
"pprint.pp(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"6"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pages = []\n",
"for doc in loader.lazy_load():\n",
" pages.append(doc)\n",
" if len(pages) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" pages = []\n",
"len(pages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.display import Markdown, display\n",
"\n",
"part = pages[0].page_content[778:1189]\n",
"print(part)\n",
"# Markdown rendering\n",
"display(Markdown(part))"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'producer': 'pdfTeX-1.40.21',\n",
" 'creator': 'LaTeX with hyperref',\n",
" 'creationdate': '2021-06-22T01:27:10+00:00',\n",
" 'source': './example_data/layout-parser-paper.pdf',\n",
" 'file_path': './example_data/layout-parser-paper.pdf',\n",
" 'total_pages': 16,\n",
" 'format': 'PDF 1.5',\n",
" 'title': '',\n",
" 'author': '',\n",
" 'subject': '',\n",
" 'keywords': '',\n",
" 'moddate': '2021-06-22T01:27:10+00:00',\n",
" 'trapped': '',\n",
" 'modDate': 'D:20210622012710Z',\n",
" 'creationDate': 'D:20210622012710Z',\n",
" 'page': 10}\n"
]
}
],
"source": [
"pprint.pp(pages[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The metadata attribute contains at least the following keys:\n",
"- source\n",
"- page (if in mode *page*)\n",
"- total_page\n",
"- creationdate\n",
"- creator\n",
"- producer\n",
"\n",
"Additional metadata are specific to each parser.\n",
"These pieces of information can be helpful (to categorize your PDFs for example)."
]
},
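  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sketch of using these fields, you could tally the loaded documents by their `producer` metadata (illustrative only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "# Count documents per `producer` value (helpful when mixing PDFs from several sources)\n",
    "Counter(doc.metadata.get(\"producer\") for doc in docs)"
   ]
  },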
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Splitting mode & custom pages delimiter"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"When loading the PDF file you can split it in two different ways:\n",
"- By page\n",
"- As a single text flow\n",
"\n",
"By default PyMuPDF4LLMLoader will split the PDF by page."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract the PDF by page. Each page is extracted as a langchain Document object:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"16\n",
"{'producer': 'pdfTeX-1.40.21',\n",
" 'creator': 'LaTeX with hyperref',\n",
" 'creationdate': '2021-06-22T01:27:10+00:00',\n",
" 'source': './example_data/layout-parser-paper.pdf',\n",
" 'file_path': './example_data/layout-parser-paper.pdf',\n",
" 'total_pages': 16,\n",
" 'format': 'PDF 1.5',\n",
" 'title': '',\n",
" 'author': '',\n",
" 'subject': '',\n",
" 'keywords': '',\n",
" 'moddate': '2021-06-22T01:27:10+00:00',\n",
" 'trapped': '',\n",
" 'modDate': 'D:20210622012710Z',\n",
" 'creationDate': 'D:20210622012710Z',\n",
" 'page': 0}\n"
]
}
],
"source": [
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"page\",\n",
")\n",
"docs = loader.load()\n",
"\n",
"print(len(docs))\n",
"pprint.pp(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In this mode the pdf is split by pages and the resulting Documents metadata contains the `page` (page number). But in some cases we could want to process the pdf as a single text flow (so we don't cut some paragraphs in half). In this case you can use the *single* mode :"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract the whole PDF as a single langchain Document object:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"1\n",
"{'producer': 'pdfTeX-1.40.21',\n",
" 'creator': 'LaTeX with hyperref',\n",
" 'creationdate': '2021-06-22T01:27:10+00:00',\n",
" 'source': './example_data/layout-parser-paper.pdf',\n",
" 'file_path': './example_data/layout-parser-paper.pdf',\n",
" 'total_pages': 16,\n",
" 'format': 'PDF 1.5',\n",
" 'title': '',\n",
" 'author': '',\n",
" 'subject': '',\n",
" 'keywords': '',\n",
" 'moddate': '2021-06-22T01:27:10+00:00',\n",
" 'trapped': '',\n",
" 'modDate': 'D:20210622012710Z',\n",
" 'creationDate': 'D:20210622012710Z'}\n"
]
}
],
"source": [
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"single\",\n",
")\n",
"docs = loader.load()\n",
"\n",
"print(len(docs))\n",
"pprint.pp(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Logically, in this mode, the `page` (page_number) metadata disappears. Here's how to clearly identify where pages end in the text flow :"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Add a custom *pages_delimiter* to identify where are ends of pages in *single* mode:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"single\",\n",
" pages_delimiter=\"\\n-------THIS IS A CUSTOM END OF PAGE-------\\n\\n\",\n",
")\n",
"docs = loader.load()\n",
"\n",
"part = docs[0].page_content[10663:11317]\n",
"print(part)\n",
"display(Markdown(part))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The default `pages_delimiter` is \\n-----\\n\\n.\n",
"But this could simply be \\n, or \\f to clearly indicate a page change, or \\<!-- PAGE BREAK --> for seamless injection in a Markdown viewer without a visual effect."
]
},
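  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check, the single text flow can be split back into per-page chunks on the delimiter. This is a minimal sketch using the custom delimiter configured above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Split the single text flow back into per-page chunks using the\n",
    "# custom delimiter configured above\n",
    "chunks = docs[0].page_content.split(\n",
    "    \"\\n-------THIS IS A CUSTOM END OF PAGE-------\\n\\n\"\n",
    ")\n",
    "print(len(chunks))  # roughly one chunk per page"
   ]
  },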
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Extract images from the PDF"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can extract images from your PDFs (in text form) with a choice of three different solutions:\n",
"- rapidOCR (lightweight Optical Character Recognition tool)\n",
"- Tesseract (OCR tool with high precision)\n",
"- Multimodal language model\n",
"\n",
"The result is inserted at the end of text of the page."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract images from the PDF with rapidOCR:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU rapidocr-onnxruntime pillow"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.parsers import RapidOCRBlobParser\n",
"\n",
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"page\",\n",
" extract_images=True,\n",
" images_parser=RapidOCRBlobParser(),\n",
")\n",
"docs = loader.load()\n",
"\n",
"part = docs[5].page_content[1863:]\n",
"print(part)\n",
"display(Markdown(part))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Be careful, RapidOCR is designed to work with Chinese and English, not other languages."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract images from the PDF with Tesseract:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU pytesseract"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.parsers import TesseractBlobParser\n",
"\n",
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"page\",\n",
" extract_images=True,\n",
" images_parser=TesseractBlobParser(),\n",
")\n",
"docs = loader.load()\n",
"\n",
"print(docs[5].page_content[1863:])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Extract images from the PDF with multimodal model:"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain_openai"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import os\n",
"\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv()"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"from getpass import getpass\n",
"\n",
"if not os.environ.get(\"OPENAI_API_KEY\"):\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass(\"OpenAI API key =\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.parsers import LLMImageBlobParser\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"page\",\n",
" extract_images=True,\n",
" images_parser=LLMImageBlobParser(\n",
" model=ChatOpenAI(model=\"gpt-4o-mini\", max_tokens=1024)\n",
" ),\n",
")\n",
"docs = loader.load()\n",
"\n",
"print(docs[5].page_content[1863:])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Extract tables from the PDF\n",
"\n",
"With PyMUPDF4LLM you can extract tables from your PDFs in *markdown* format :"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"loader = PyMuPDF4LLMLoader(\n",
" \"./example_data/layout-parser-paper.pdf\",\n",
" mode=\"page\",\n",
" # \"lines_strict\" is the default strategy and\n",
" # is the most accurate for tables with column and row lines,\n",
" # but may not work well with all documents.\n",
" # \"lines\" is a less strict strategy that may work better with\n",
" # some documents.\n",
" # \"text\" is the least strict strategy and may work better\n",
" # with documents that do not have tables with lines.\n",
" table_strategy=\"lines\",\n",
")\n",
"docs = loader.load()\n",
"\n",
"part = docs[4].page_content[3210:]\n",
"print(part)\n",
"display(Markdown(part))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Working with Files\n",
"\n",
"Many document loaders involve parsing files. The difference between such loaders usually stems from how the file is parsed, rather than how the file is loaded. For example, you can use `open` to read the binary content of either a PDF or a markdown file, but you need different parsing logic to convert that binary data into text.\n",
"\n",
"As a result, it can be helpful to decouple the parsing logic from the loading logic, which makes it easier to re-use a given parser regardless of how the data was loaded.\n",
"You can use this strategy to analyze different files, with the same parsing parameters."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import FileSystemBlobLoader\n",
"from langchain_community.document_loaders.generic import GenericLoader\n",
"from langchain_pymupdf4llm import PyMuPDF4LLMParser\n",
"\n",
"loader = GenericLoader(\n",
" blob_loader=FileSystemBlobLoader(\n",
" path=\"./example_data/\",\n",
" glob=\"*.pdf\",\n",
" ),\n",
" blob_parser=PyMuPDF4LLMParser(),\n",
")\n",
"docs = loader.load()\n",
"\n",
"part = docs[0].page_content[:562]\n",
"print(part)\n",
"display(Markdown(part))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all PyMuPDF4LLMLoader features and configurations head to the GitHub repository: https://github.com/lakinduboteju/langchain-pymupdf4llm"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.21"
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -546,7 +546,7 @@
"id": "ud_cnGszb1i9"
},
"source": [
"Let's inspect a couple of reranked documents. We observe that the retriever still returns the relevant Langchain type [documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) but as part of the metadata field, we also recieve the `relevance_score` from the Ranking API."
"Let's inspect a couple of reranked documents. We observe that the retriever still returns the relevant Langchain type [documents](https://python.langchain.com/api_reference/core/documents/langchain_core.documents.base.Document.html) but as part of the metadata field, we also receive the `relevance_score` from the Ranking API."
]
},
{


@@ -88,13 +88,13 @@
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"documents = TextLoader(\"../../modules/state_of_the_union.txt\").load()\n",
"documents = TextLoader(\"../document_loaders/example_data/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"for idx, text in enumerate(texts):\n",
" text.metadata[\"id\"] = idx\n",
"\n",
"embedding = OpenAIEmbeddings(model=\"text-embedding-ada-002\")\n",
"embedding = OpenAIEmbeddings(model=\"text-embedding-3-large\")\n",
"retriever = FAISS.from_documents(texts, embedding).as_retriever(search_kwargs={\"k\": 20})"
]
},
@@ -114,25 +114,22 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2025-02-17 04:37:08,458 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny. \n",
"\n",
"Six days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n",
"\n",
"He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n",
"\n",
"He met the Ukrainian people.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions. \n",
"\n",
"We are cutting off Russias largest banks from the international financial system. \n",
@@ -141,12 +138,22 @@
"\n",
"We are choking off Russias access to technology that will sap its economic strength and weaken its military for years to come.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"Document 2:\n",
"\n",
"And tonight I am announcing that we will join our allies in closing off American air space to all Russian flights further isolating Russia and adding an additional squeeze on their economy. The Ruble has lost 30% of its value. \n",
"\n",
"The Russian stock market has lost 40% of its value and trading remains suspended. Russias economy is reeling and Putin alone is to blame.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"And now that he has acted the free world is holding him accountable. \n",
"\n",
"Along with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. \n",
"\n",
"We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"I spent countless hours unifying our European allies. We shared with the world in advance what we knew Putin was planning and precisely how he would try to falsely justify his aggression. \n",
@@ -167,50 +174,24 @@
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
"And now that he has acted the free world is holding him accountable. \n",
"\n",
"Along with twenty-seven members of the European Union including France, Germany, Italy, as well as countries like the United Kingdom, Canada, Japan, Korea, Australia, New Zealand, and many others, even Switzerland. \n",
"\n",
"We are inflicting pain on Russia and supporting the people of Ukraine. Putin is now isolated from the world more than ever. \n",
"\n",
"Together with our allies we are right now enforcing powerful economic sanctions.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"To all Americans, I will be honest with you, as Ive always promised. A Russian dictator, invading a foreign country, has costs around the world. \n",
"\n",
"And Im taking robust action to make sure the pain of our sanctions is targeted at Russias economy. And I will use every tool at our disposal to protect American businesses and consumers. \n",
"\n",
"Tonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny. \n",
"\n",
"Six days ago, Russias Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n",
"\n",
"He thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n",
"\n",
"He met the Ukrainian people.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"And we remain clear-eyed. The Ukrainians are fighting back with pure courage. But the next few days weeks, months, will be hard on them. \n",
"\n",
"Putin has unleashed violence and chaos. But while he may make gains on the battlefield he will pay a continuing high price over the long run. \n",
"\n",
"And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n",
"\n",
"These steps will help blunt gas prices here at home. And I know the news about whats happening can seem alarming. \n",
"\n",
"But I want you to know that we are going to be okay. \n",
"\n",
"When the history of this era is written Putins war on Ukraine will have left Russia weaker and the rest of the world stronger.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"They keep moving. \n",
"\n",
"And the costs and the threats to America and the world keep rising. \n",
@@ -225,51 +206,39 @@
"\n",
"He rejected repeated efforts at diplomacy.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Our forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies in the event that Putin decides to keep moving west. \n",
"\n",
"For that purpose weve mobilized American ground forces, air squadrons, and ship deployments to protect NATO countries including Poland, Romania, Latvia, Lithuania, and Estonia. \n",
"\n",
"As I have made crystal clear the United States and our Allies will defend every inch of territory of NATO countries with the full force of our collective power.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"Document 9:\n",
"\n",
"While it shouldnt have taken something so terrible for people around the world to see whats at stake now everyone sees it clearly. \n",
"\n",
"We see the unity among leaders of nations and a more unified Europe a more unified West. And we see unity among the people who are gathering in cities in large crowds around the world even in Russia to demonstrate their support for Ukraine.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"Document 10:\n",
"\n",
"He met the Ukrainian people. \n",
"America will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"These steps will help blunt gas prices here at home. And I know the news about whats happening can seem alarming. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"But I want you to know that we are going to be okay. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"When the history of this era is written Putins war on Ukraine will have left Russia weaker and the rest of the world stronger.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"Document 11:\n",
"\n",
"In the battle between democracy and autocracy, democracies are rising to the moment, and the world is clearly choosing the side of peace and security. \n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"This is a real test. Its going to take time. So let us continue to draw inspiration from the iron will of the Ukrainian people. \n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"To our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. \n",
"\n",
"Putin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people.\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"Document 12:\n",
"\n",
"Together with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n",
"And we remain clear-eyed. The Ukrainians are fighting back with pure courage. But the next few days weeks, months, will be hard on them. \n",
"\n",
"We are giving more than $1 Billion in direct assistance to Ukraine. \n",
"Putin has unleashed violence and chaos. But while he may make gains on the battlefield he will pay a continuing high price over the long run. \n",
"\n",
"And we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n",
"\n",
"Let me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine.\n",
"And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"Document 13:\n",
"\n",
"Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n",
"\n",
@@ -281,7 +250,17 @@
"\n",
"And the costs and the threats to America and the world keep rising.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"Document 14:\n",
"\n",
"Together with our allies we are providing support to the Ukrainians in their fight for freedom. Military assistance. Economic assistance. Humanitarian assistance. \n",
"\n",
"We are giving more than $1 Billion in direct assistance to Ukraine. \n",
"\n",
"And we will continue to aid the Ukrainian people as they defend their country and to help ease their suffering. \n",
"\n",
"Let me be clear, our forces are not engaged and will not engage in conflict with Russian forces in Ukraine.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"It fueled our efforts to vaccinate the nation and combat COVID-19. It delivered immediate economic relief for tens of millions of Americans. \n",
"\n",
@@ -289,25 +268,56 @@
"\n",
"And as my Dad used to say, it gave people a little breathing room.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"In the battle between democracy and autocracy, democracies are rising to the moment, and the world is clearly choosing the side of peace and security. \n",
"\n",
"This is a real test. Its going to take time. So let us continue to draw inspiration from the iron will of the Ukrainian people. \n",
"\n",
"To our fellow Ukrainian Americans who forge a deep bond that connects our two nations we stand with you. \n",
"\n",
"Putin may circle Kyiv with tanks, but he will never gain the hearts and souls of the Ukrainian people.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"Our forces are not going to Europe to fight in Ukraine, but to defend our NATO Allies in the event that Putin decides to keep moving west. \n",
"\n",
"For that purpose weve mobilized American ground forces, air squadrons, and ship deployments to protect NATO countries including Poland, Romania, Latvia, Lithuania, and Estonia. \n",
"\n",
"As I have made crystal clear the United States and our Allies will defend every inch of territory of NATO countries with the full force of our collective power.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"My administration is providing assistance with job training and housing, and now helping lower-income veterans get VA care debt-free. \n",
"I understand. \n",
"\n",
"Our troops in Iraq and Afghanistan faced many dangers. \n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
"\n",
"One was stationed at bases and breathing in toxic smoke from “burn pits” that incinerated wastes of war—medical and hazard material, jet fuel, and more. \n",
"Thats why one of the first things I did as President was fight to pass the American Rescue Plan. \n",
"\n",
"When they came home, many of the worlds fittest and best trained warriors were never the same. \n",
"Because people were hurting. We needed to act, and we did. \n",
"\n",
"Headaches. Numbness. Dizziness.\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"Every Administration says theyll do it, but we are actually doing it. \n",
"And as my Dad used to say, it gave people a little breathing room. \n",
"\n",
"We will buy American to make sure everything from the deck of an aircraft carrier to the steel on highway guardrails are made in America. \n",
"And unlike the $2 Trillion tax cut passed in the previous administration that benefitted the top 1% of Americans, the American Rescue Plan helped working people—and left no one behind. \n",
"\n",
"But to compete for the best jobs of the future, we also need to level the playing field with China and other competitors.\n"
"And it worked. It created jobs. Lots of jobs. \n",
"\n",
"In fact—our economy created over 6.5 Million new jobs just last year, more jobs created in one year \n",
"than ever before in the history of America.\n"
]
}
],
@@ -456,34 +466,35 @@
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"He met the Ukrainian people. \n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"But that trickle-down theory led to weaker economic growth, lower wages, bigger deficits, and the widest gap between those at the top and everyone else in nearly a century. \n",
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Vice President Harris and I ran for office with a new economic vision for America. \n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Invest in America. Educate Americans. Grow the workforce. Build the economy from the bottom up \n",
"and the middle out, not from the top down.\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
@@ -501,6 +512,14 @@
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"So thats my plan. It will grow the economy and lower costs for families. \n",
"\n",
"So what are we waiting for? Lets get this done. And while youre at it, confirm my nominees to the Federal Reserve, which plays a critical role in fighting inflation. \n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves. \n",
"\n",
"Ive worked on these issues a long time. \n",
@@ -509,7 +528,15 @@
"\n",
"So lets not abandon our streets. Or choose between safety and equal justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"Document 9:\n",
"\n",
"So lets not abandon our streets. Or choose between safety and equal justice. \n",
"\n",
"Lets come together to protect our communities, restore trust, and hold law enforcement accountable. \n",
"\n",
"Thats why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"As Ive told Xi Jinping, it is never a good bet to bet against the American people. \n",
"\n",
@@ -517,60 +544,18 @@
"\n",
"And well do it all to withstand the devastating effects of the climate crisis and promote environmental justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n",
"\n",
"Last year COVID-19 kept us apart. This year we are finally together again. \n",
"\n",
"Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n",
"\n",
"With a duty to one another to the American people to the Constitution. \n",
"\n",
"And with an unwavering resolve that freedom will always triumph over tyranny.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"As Ohio Senator Sherrod Brown says, “Its time to bury the label “Rust Belt.” \n",
"\n",
"Its time. \n",
"\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
"\n",
"Inflation is robbing them of the gains they might otherwise feel. \n",
"\n",
"I get it. Thats why my top priority is getting prices under control.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"Im also calling on Congress: pass a law to make sure veterans devastated by toxic exposures in Iraq and Afghanistan finally get the benefits and comprehensive health care they deserve. \n",
"Lets pass the Paycheck Fairness Act and paid leave. \n",
"\n",
"And fourth, lets end cancer as we know it. \n",
"Raise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n",
"\n",
"This is personal to me and Jill, to Kamala, and to so many of you. \n",
"Lets increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls Americas best-kept secret: community colleges. \n",
"\n",
"Cancer is the #2 cause of death in Americasecond only to heart disease.\n",
"And lets pass the PRO Act when a majority of workers want to form a union—they shouldnt be stopped.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Headaches. Numbness. Dizziness. \n",
"\n",
"A cancer that would put them in a flag-draped coffin. \n",
"\n",
"I know. \n",
"\n",
"One of those soldiers was my son Major Beau Biden. \n",
"\n",
"We dont know for sure if a burn pit was the cause of his brain cancer, or the diseases of so many of our troops. \n",
"\n",
"But Im committed to finding out everything we can. \n",
"\n",
"Committed to military families like Danielle Robinson from Ohio. \n",
"\n",
"The widow of Sergeant First Class Heath Robinson.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"He will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n",
"\n",
"We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \n",
@@ -581,73 +566,105 @@
"\n",
"I understand.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"Well I know this nation. \n",
"\n",
"We will meet the test. \n",
"\n",
"To protect freedom and liberty, to expand fairness and opportunity. \n",
"\n",
"We will save democracy. \n",
"\n",
"As hard as these times have been, I am more optimistic about America today than I have been my whole life. \n",
"\n",
"Because I see the future that is within our grasp. \n",
"\n",
"Because I know there is simply nothing beyond our capacity. \n",
"\n",
"We are the only nation on Earth that has always turned every crisis we have faced into an opportunity.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"\n",
"When we invest in our workers, when we build the economy from the bottom up and the middle out together, we can do something we havent done in a long time: build a better America. \n",
"If we want to go forward—not backward—we must protect access to health care. Preserve a womans right to choose. And lets continue to advance maternal health care in America. \n",
"\n",
"For more than two years, COVID-19 has impacted every decision in our lives and the life of the nation. \n",
"\n",
"And I know youre tired, frustrated, and exhausted. \n",
"\n",
"But I also know this.\n",
"And for our LGBTQ+ Americans, lets finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"My plan to fight inflation will lower your costs and lower the deficit. \n",
"He met the Ukrainian people. \n",
"\n",
"17 Nobel laureates in economics say my plan will ease long-term inflationary pressures. Top business leaders and most Americans support my plan. And heres the plan: \n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"First cut the cost of prescription drugs. Just look at insulin. One in ten Americans has diabetes. In Virginia, I met a 13-year-old boy named Joshua Davis.\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"And soon, well strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"So tonight Im offering a Unity Agenda for the Nation. Four big things we can do together. \n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"First, beat the opioid epidemic. \n",
"\n",
"There is so much we can do. Increase funding for prevention, treatment, harm reduction, and recovery.\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit. \n",
"Its not only the right thing to do—its the economically smart thing to do. \n",
"\n",
"The previous Administration not only ballooned the deficit with tax cuts for the very wealthy and corporations, it undermined the watchdogs whose job was to keep pandemic relief funds from being wasted. \n",
"Thats why immigration reform is supported by everyone from labor unions to religious leaders to the U.S. Chamber of Commerce. \n",
"\n",
"But in my administration, the watchdogs have been welcomed back. \n",
"Lets get it done once and for all. \n",
"\n",
"Were going after the criminals who stole billions in relief money meant for small businesses and millions of Americans.\n",
"Advancing liberty and justice also requires protecting the rights of women. \n",
"\n",
"The constitutional right affirmed in Roe v. Wade—standing precedent for half a century—is under attack as never before.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"So lets not abandon our streets. Or choose between safety and equal justice. \n",
"Smartphones. The Internet. Technology we have yet to invent. \n",
"\n",
"Lets come together to protect our communities, restore trust, and hold law enforcement accountable. \n",
"But thats just the beginning. \n",
"\n",
"Thats why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.\n",
"Intels CEO, Pat Gelsinger, who is here tonight, told me they are ready to increase their investment from \n",
"$20 billion to $100 billion. \n",
"\n",
"That would be one of the biggest investments in manufacturing in American history. \n",
"\n",
"And all theyre waiting for is for you to pass this bill. \n",
"\n",
"So lets not wait any longer. Send it to my desk. Ill sign it. \n",
"\n",
"And we will really take off.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"I understand. \n",
"And as my Dad used to say, it gave people a little breathing room. \n",
"\n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
"And unlike the $2 Trillion tax cut passed in the previous administration that benefitted the top 1% of Americans, the American Rescue Plan helped working people—and left no one behind. \n",
"\n",
"Thats why one of the first things I did as President was fight to pass the American Rescue Plan. \n",
"And it worked. It created jobs. Lots of jobs. \n",
"\n",
"Because people were hurting. We needed to act, and we did. \n",
"\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"In fact—our economy created over 6.5 Million new jobs just last year, more jobs created in one year \n",
"than ever before in the history of America.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"The only nation that can be defined by a single word: possibilities. \n",
"\n",
"So on this night, in our 245th year as a nation, I have come to report on the State of the Union. \n",
"\n",
"And my report is this: the State of the Union is strong—because you, the American people, are strong. \n",
"\n",
"We are stronger today than we were a year ago. \n",
"\n",
"And we will be stronger a year from now than we are today. \n",
"\n",
"Now is our moment to meet and overcome the challenges of our time. \n",
"\n",
"And we will, as one people. \n",
"\n",
"One America. \n",
"\n",
"The United States of America. \n",
"\n",
"May God bless you all. May God protect our troops.\n"
"One America.\n"
]
}
],
@@ -666,14 +683,14 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"from langchain.retrievers.contextual_compression import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors.rankllm_rerank import RankLLMRerank\n",
"\n",
"compressor = RankLLMRerank(top_n=3, model=\"gpt\", gpt_model=\"gpt-3.5-turbo\")\n",
"compressor = RankLLMRerank(top_n=3, model=\"gpt\", gpt_model=\"gpt-4o-mini\")\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")"
@@ -702,9 +719,11 @@
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"We cannot let this happen. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n"
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n"
]
},
{
@@ -729,17 +748,29 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/tmp/ipykernel_2153001/1437145854.py:10: LangChainDeprecationWarning: The method `Chain.__call__` was deprecated in langchain 0.1.0 and will be removed in 1.0. Use :meth:`~invoke` instead.\n",
" chain({\"query\": query})\n",
"2025-02-17 04:30:00,016 - INFO - HTTP Request: POST https://api.openai.com/v1/embeddings \"HTTP/1.1 200 OK\"\n",
" 0%| | 0/1 [00:00<?, ?it/s]2025-02-17 04:30:01,649 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n",
"100%|██████████| 1/1 [00:01<00:00, 1.63s/it]\n",
"2025-02-17 04:30:02,415 - INFO - HTTP Request: POST https://api.openai.com/v1/chat/completions \"HTTP/1.1 200 OK\"\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'What did the president say about Ketanji Brown Jackson',\n",
" 'result': \"The President mentioned that Ketanji Brown Jackson is one of the nation's top legal minds and that she will continue Justice Breyer's legacy of excellence. He highlighted her background as a former top litigator in private practice and a former federal public defender, as well as coming from a family of public school educators and police officers. He also mentioned that since her nomination, she has received broad support from various groups, including the Fraternal Order of Police and former judges appointed by Democrats and Republicans.\"}"
" 'result': \"The President mentioned that Ketanji Brown Jackson is one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, and comes from a family of public school educators and police officers. He also highlighted her as a consensus builder and noted the broad range of support she has received since being nominated.\"}"
]
},
"execution_count": 14,
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}

View File

@@ -195,7 +195,7 @@
"id": "96ed13d4",
"metadata": {},
"source": [
"Instead of `model_id`, you can also pass the `deployment_id` of the previously tuned model. The entire model tuning workflow is described [here](https://ibm.github.io/watsonx-ai-python-sdk/pt_working_with_class_and_prompt_tuner.html)."
"Instead of `model_id`, you can also pass the `deployment_id` of the previously tuned model. The entire model tuning workflow is described in [Working with TuneExperiment and PromptTuner](https://ibm.github.io/watsonx-ai-python-sdk/pt_tune_experiment_run.html)."
]
},
{
@@ -420,7 +420,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "langchain_ibm",
"language": "python",
"name": "python3"
},

View File

@@ -221,7 +221,7 @@
"source": [
"## JSONFormer LLM Wrapper\n",
"\n",
"Let's try that again, now providing a the Action input's JSON Schema to the model."
"Let's try that again, now providing the Action input's JSON Schema to the model."
]
},
{

View File

@@ -64,8 +64,8 @@
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{

View File

@@ -104,8 +104,8 @@
"\n",
"import boto3\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain_community.llms import SagemakerEndpoint\n",
"from langchain_community.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain_aws.llms import SagemakerEndpoint\n",
"from langchain_aws.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"query = \"\"\"How long was Elizabeth hospitalized?\n",
@@ -174,8 +174,8 @@
"from typing import Dict\n",
"\n",
"from langchain.chains.question_answering import load_qa_chain\n",
"from langchain_community.llms import SagemakerEndpoint\n",
"from langchain_community.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain_aws.llms import SagemakerEndpoint\n",
"from langchain_aws.llms.sagemaker_endpoint import LLMContentHandler\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"query = \"\"\"How long was Elizabeth hospitalized?\n",

View File

@@ -0,0 +1,14 @@
# Abso
[Abso](https://abso.ai/#router) is an open-source LLM proxy that automatically routes requests between fast and slow models based on prompt complexity. It uses various heuristics to choose the proper model. It's very fast and has low latency.
## Installation and setup
```bash
pip install langchain-abso
```
## Chat Model
See usage details [here](/docs/integrations/chat/abso)
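As a rough sketch of what usage can look like (assuming the package exposes a `ChatAbso` chat model with `fast_model`/`slow_model` parameters, and that an OpenAI API key is configured):
```python
# Sketch only: ChatAbso and its fast_model/slow_model parameters are taken
# from the integration docs; an OPENAI_API_KEY is assumed to be set.
from langchain_abso import ChatAbso

llm = ChatAbso(fast_model="gpt-4o", slow_model="o3-mini")

# Abso picks the fast or slow model per request based on prompt complexity.
print(llm.invoke("What is 2 + 2?").content)
```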

View File

@@ -0,0 +1,82 @@
# ADS4GPTs
> [ADS4GPTs](https://www.ads4gpts.com/) is building the open monetization backbone of the AI-Native internet. It helps AI applications monetize through advertising with a UX- and privacy-first approach.
## Installation and Setup
### Using pip
You can install the package directly from PyPI:
```bash
pip install ads4gpts-langchain
```
### From Source
Alternatively, install from source:
```bash
git clone https://github.com/ADS4GPTs/ads4gpts.git
cd ads4gpts/libs/python-sdk/ads4gpts-langchain
pip install .
```
## Prerequisites
- Python 3.11+
- ADS4GPTs API Key ([Obtain API Key](https://www.ads4gpts.com))
## Environment Variables
Set the following environment variables for API authentication:
```bash
export ADS4GPTS_API_KEY='your-ads4gpts-api-key'
```
Alternatively, API keys can be passed directly when initializing classes or stored in a `.env` file.
## Tools
ADS4GPTs provides two main tools for monetization:
### Ads4gptsInlineSponsoredResponseTool
This tool fetches native, sponsored responses that can be seamlessly integrated within your AI application's outputs.
```python
from ads4gpts_langchain import Ads4gptsInlineSponsoredResponseTool
```
### Ads4gptsSuggestedPromptTool
Generates sponsored prompt suggestions to enhance user engagement and provide monetization opportunities.
```python
from ads4gpts_langchain import Ads4gptsSuggestedPromptTool
```
### Ads4gptsInlineConversationalTool
Delivers conversational sponsored content that naturally fits within chat interfaces and dialogs.
```python
from ads4gpts_langchain import Ads4gptsInlineConversationalTool
```
### Ads4gptsInlineBannerTool
Provides inline banner advertisements that can be displayed within your AI application's response.
```python
from ads4gpts_langchain import Ads4gptsInlineBannerTool
```
### Ads4gptsSuggestedBannerTool
Generates banner advertisement suggestions that can be presented to users as recommended content.
```python
from ads4gpts_langchain import Ads4gptsSuggestedBannerTool
```
## Toolkit
The `Ads4gptsToolkit` combines these tools for convenient access in LangChain applications.
```python
from ads4gpts_langchain import Ads4gptsToolkit
```
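A minimal sketch of wiring the toolkit into an agent's tool list (the `ads4gpts_api_key` keyword is an assumption; the toolkit follows LangChain's standard `get_tools()` interface):
```python
# Sketch: the ads4gpts_api_key parameter name is an assumption; see package docs.
from ads4gpts_langchain import Ads4gptsToolkit

toolkit = Ads4gptsToolkit(ads4gpts_api_key="your-ads4gpts-api-key")
tools = toolkit.get_tools()  # individual ADS4GPTs tools, ready to pass to an agent
```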

View File

@@ -0,0 +1,35 @@
# AgentQL
[AgentQL](https://www.agentql.com/) provides web interaction and structured data extraction from any web page using an [AgentQL query](https://docs.agentql.com/agentql-query) or a Natural Language prompt. AgentQL can be used across multiple languages and web pages without breaking as pages change over time.
## Installation and Setup
Install the integration package:
```bash
pip install langchain-agentql
```
## API Key
Get an API Key from our [Dev Portal](https://dev.agentql.com/) and add it to your environment variables:
```
export AGENTQL_API_KEY="your-api-key-here"
```
## DocumentLoader
AgentQL's document loader provides structured data extraction from any web page using an AgentQL query.
```python
from langchain_agentql.document_loaders import AgentQLLoader
```
See our [document loader documentation and usage example](/docs/integrations/document_loaders/agentql).
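As a hedged sketch (the `url` and `query` constructor parameters are assumptions based on the linked docs; verify before use):
```python
# Sketch: constructor parameters are assumptions; verify in the loader docs.
from langchain_agentql.document_loaders import AgentQLLoader

loader = AgentQLLoader(
    url="https://www.agentql.com/blog",
    query="{ posts[] { title url } }",  # AgentQL query for the data to extract
)
docs = loader.load()  # returns LangChain Documents with the extracted fields
```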
## Tools and Toolkits
AgentQL tools provide web interaction and structured data extraction from any web page using an AgentQL query or a Natural Language prompt.
```python
from langchain_agentql.tools import ExtractWebDataTool, ExtractWebDataBrowserTool, GetWebElementBrowserTool
from langchain_agentql import AgentQLBrowserToolkit
```
See our [tools documentation and usage example](/docs/integrations/tools/agentql).
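For example, a single extraction tool might be invoked like this (a sketch; the input keys `url` and `prompt` are assumptions, see the linked docs for the real schema):
```python
# Sketch: input keys are assumptions; see the linked tools docs for the schema.
from langchain_agentql.tools import ExtractWebDataTool

extract_tool = ExtractWebDataTool()
data = extract_tool.invoke(
    {"url": "https://www.agentql.com/blog", "prompt": "the blog post titles and links"}
)
```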

View File

@@ -14,20 +14,34 @@ blogs, or knowledge bases.
## Installation and Setup
- Install the Apify API client for Python with `pip install apify-client`
- Install the LangChain Apify package for Python with:
```bash
pip install langchain-apify
```
- Get your [Apify API token](https://console.apify.com/account/integrations) and either set it as
an environment variable (`APIFY_API_TOKEN`) or pass it to the `ApifyWrapper` as `apify_api_token` in the constructor.
an environment variable (`APIFY_API_TOKEN`) or pass it as `apify_api_token` in the constructor.
## Tool
## Utility
You can use the `ApifyActorsTool` to use Apify Actors with agents.
```python
from langchain_apify import ApifyActorsTool
```
See [this notebook](/docs/integrations/tools/apify_actors) for example usage and a full example of a tool-calling agent with LangGraph in the [Apify LangGraph agent Actor template](https://apify.com/templates/python-langgraph).
For more information on how to use this tool, visit [the Apify integration documentation](https://docs.apify.com/platform/integrations/langgraph).
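A short sketch of calling an Actor as a tool (assumes `APIFY_API_TOKEN` is exported; `apify/rag-web-browser` is one public Actor, and the `run_input` keys depend on the Actor you choose):
```python
# Sketch: assumes APIFY_API_TOKEN is exported; run_input keys depend on the Actor.
from langchain_apify import ApifyActorsTool

browser_tool = ApifyActorsTool("apify/rag-web-browser")
results = browser_tool.invoke({"run_input": {"query": "LangChain", "maxResults": 3}})
```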
## Wrapper
You can use the `ApifyWrapper` to run Actors on the Apify platform.
```python
from langchain_community.utilities import ApifyWrapper
from langchain_apify import ApifyWrapper
```
For more information on this wrapper, see [the API reference](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.apify.ApifyWrapper.html).
For more information on how to use this wrapper, see [the Apify integration documentation](https://docs.apify.com/platform/integrations/langchain).
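A hedged sketch of running an Actor through the wrapper and mapping its dataset items to Documents (the Actor ID and `run_input` keys are illustrative):
```python
# Sketch: Actor ID and run_input are illustrative; APIFY_API_TOKEN is assumed set.
from langchain_apify import ApifyWrapper
from langchain_core.documents import Document

apify = ApifyWrapper()
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://python.langchain.com"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text") or "", metadata={"source": item.get("url")}
    ),
)
documents = loader.load()
```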
## Document loader
@@ -35,7 +49,10 @@ For more information on this wrapper, see [the API reference](https://python.lan
You can also use our `ApifyDatasetLoader` to get data from an Apify dataset.
```python
from langchain_community.document_loaders import ApifyDatasetLoader
from langchain_apify import ApifyDatasetLoader
```
For a more detailed walkthrough of this loader, see [this notebook](/docs/integrations/document_loaders/apify_dataset).
Source code for this integration can be found in the [LangChain Apify repository](https://github.com/apify/langchain-apify).

View File

@@ -0,0 +1,59 @@
# Azure AI
All functionality related to [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/developer/python/get-started) and its related projects.
Integration packages for Azure AI, Dynamic Sessions, and SQL Server are maintained in
the [langchain-azure](https://github.com/langchain-ai/langchain-azure) repository.
## Chat models
We recommend developers start with the `langchain-azure-ai` package to access all the models available in [Azure AI Foundry](https://learn.microsoft.com/en-us/azure/ai-studio/how-to/model-catalog-overview).
### Azure AI Chat Completions Model
Access models like Azure OpenAI, DeepSeek R1, Cohere, Phi and Mistral using the `AzureAIChatCompletionsModel` class.
```bash
pip install -U langchain-azure-ai
```
Configure your API key and Endpoint.
```bash
export AZURE_INFERENCE_CREDENTIAL=your-api-key
export AZURE_INFERENCE_ENDPOINT=your-endpoint
```
```python
from langchain_azure_ai.chat_models import AzureAIChatCompletionsModel
llm = AzureAIChatCompletionsModel(
model_name="gpt-4o",
api_version="2024-05-01-preview",
)
llm.invoke('Tell me a joke and include some emojis')
```
## Embedding models
### Azure AI model inference for embeddings
```bash
pip install -U langchain-azure-ai
```
Configure your API key and Endpoint.
```bash
export AZURE_INFERENCE_CREDENTIAL=your-api-key
export AZURE_INFERENCE_ENDPOINT=your-endpoint
```
```python
from langchain_azure_ai.embeddings import AzureAIEmbeddingsModel
embed_model = AzureAIEmbeddingsModel(
model_name="text-embedding-ada-002"
)
```
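The resulting model follows LangChain's standard embeddings interface, for example:
```python
# Uses LangChain's standard embeddings interface on the model defined above.
vector = embed_model.embed_query("What is Azure AI Foundry?")
print(len(vector))  # dimensionality depends on the chosen model
```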

View File

@@ -0,0 +1,27 @@
# Cognee
Cognee implements scalable, modular ECL (Extract, Cognify, Load) pipelines that allow
you to interconnect and retrieve past conversations, documents, and audio
transcriptions while reducing hallucinations, developer effort, and cost.
Cognee merges graph and vector databases to uncover hidden relationships and new
patterns in your data. You can automatically model, load and retrieve entities and
objects representing your business domain and analyze their relationships, uncovering
insights that neither vector stores nor graph stores alone can provide.
Try it in a Google Colab <a href="https://colab.research.google.com/drive/1g-Qnx6l_ecHZi0IOw23rg0qC4TYvEvWZ?usp=sharing">notebook</a> or have a look at the <a href="https://docs.cognee.ai">documentation</a>.
If you have questions, join the cognee <a href="https://discord.gg/NQPKmU5CCg">Discord</a> community.
Have you seen cognee's <a href="https://github.com/topoteretes/cognee-starter">starter repo</a>? Check it out!
## Installation and Setup
```bash
pip install langchain-cognee
```
## Retrievers
See details on available retrievers [here](/docs/integrations/retrievers/cognee).
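As a hedged sketch of what retrieval can look like (the class and parameter names below are assumptions drawn from the linked docs; verify against the package before use):
```python
# Sketch: names and parameters are assumptions; see the linked retriever docs.
from langchain_cognee import CogneeRetriever
from langchain_core.documents import Document

retriever = CogneeRetriever(llm_api_key="sk-...", dataset_name="my_dataset", k=3)
retriever.add_documents([Document(page_content="Elon Musk is the CEO of SpaceX.")])
retriever.process_data()  # build cognee's graph/vector indexes over the dataset
docs = retriever.invoke("Who is the CEO of SpaceX?")
```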

View File

@@ -0,0 +1,110 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Contextual AI\n",
"\n",
"Contextual AI is a platform that offers state-of-the-art Retrieval-Augmented Generation (RAG) technology for enterprise applications. Our platformant models helps innovative teams build production-ready AI applications that can process millions of pages of documents with exceptional accuracy.\n",
"\n",
"## Grounded Language Model (GLM)\n",
"\n",
"The Grounded Language Model (GLM) is specifically engineered to minimize hallucinations in RAG and agentic applications. The GLM achieves:\n",
"\n",
"- State-of-the-art performance on the FACTS benchmark\n",
"- Responses strictly grounded in provided knowledge sources\n",
"\n",
"## Using Contextual AI with LangChain\n",
"\n",
"See details [here](/docs/integrations/chat/contextual).\n",
"\n",
"This integration allows you to easily incorporate Contextual AI's GLM into your LangChain workflows. Whether you're building applications for regulated industries or security-conscious environments, Contextual AI provides the grounded and reliable responses your use cases demand.\n",
"\n",
"Get started with a free trial today and experience the most grounded language model for enterprise AI applications."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "y8ku6X96sebl"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"According to the information available, there are two types of cats in the world:\n",
"\n",
"1. Good cats\n",
"2. Best cats\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"from langchain_contextual import ChatContextual\n",
"\n",
"# Set credentials\n",
"if not os.getenv(\"CONTEXTUAL_AI_API_KEY\"):\n",
" os.environ[\"CONTEXTUAL_AI_API_KEY\"] = getpass.getpass(\n",
" \"Enter your Contextual API key: \"\n",
" )\n",
"\n",
"# initialize Contextual llm\n",
"llm = ChatContextual(\n",
" model=\"v1\",\n",
" api_key=\"\",\n",
")\n",
"# include a system prompt (optional)\n",
"system_prompt = \"You are a helpful assistant that uses all of the provided knowledge to answer the user's query to the best of your ability.\"\n",
"\n",
"# provide your own knowledge from your knowledge-base here in an array of string\n",
"knowledge = [\n",
" \"There are 2 types of dogs in the world: good dogs and best dogs.\",\n",
" \"There are 2 types of cats in the world: good cats and best cats.\",\n",
"]\n",
"\n",
"# create your message\n",
"messages = [\n",
" (\"human\", \"What type of cats are there in the world and what are the types?\"),\n",
"]\n",
"\n",
"# invoke the GLM by providing the knowledge strings, optional system prompt\n",
"# if you want to turn off the GLM's commentary, pass True to the `avoid_commentary` argument\n",
"ai_msg = llm.invoke(\n",
" messages, knowledge=knowledge, system_prompt=system_prompt, avoid_commentary=True\n",
")\n",
"\n",
"print(ai_msg.content)"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.8"
}
},
"nbformat": 4,
"nbformat_minor": 1
}

View File

@@ -103,14 +103,7 @@ See [MLflow LangChain Integration](/docs/integrations/providers/mlflow_tracking)
SQLDatabase
-----------
You can connect to Databricks SQL using the SQLDatabase wrapper of LangChain.
```
from langchain.sql_database import SQLDatabase
db = SQLDatabase.from_databricks(catalog="samples", schema="nyctaxi")
```
See [Databricks SQL Agent](https://docs.databricks.com/en/large-language-models/langchain.html#databricks-sql-agent) for how to connect Databricks SQL with your LangChain Agent as a powerful querying tool.
To connect to Databricks SQL or query structured data, see the [Databricks structured retriever tool documentation](https://docs.databricks.com/en/generative-ai/agent-framework/structured-retrieval-tools.html#table-query-tool), and to create an agent using a SQL UDF created as above, see [Databricks UC Integration](https://docs.unitycatalog.io/ai/integrations/langchain/).
Open Models
-----------

View File

@@ -0,0 +1,16 @@
# Deeplake
[Deeplake](https://www.deeplake.ai/) is a database optimized for AI and deep learning
applications.
## Installation and Setup
```bash
pip install langchain-deeplake
```
## Vector stores
See details on available vector stores
[here](/docs/integrations/vectorstores/activeloop_deeplake).
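A minimal sketch, assuming the package exposes a `DeeplakeVectorStore` with `dataset_path` and `embedding_function` parameters (check the linked docs for the supported API):
```python
# Sketch: class/parameter names are assumptions; see the linked vector store docs.
from langchain_deeplake import DeeplakeVectorStore
from langchain_openai import OpenAIEmbeddings

store = DeeplakeVectorStore(
    dataset_path="./my_deeplake_dataset",
    embedding_function=OpenAIEmbeddings(),
)
store.add_texts(["Deep Lake stores embeddings and their metadata together."])
hits = store.similarity_search("What does Deep Lake store?", k=1)
```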

View File

@@ -0,0 +1,22 @@
# Dell
Dell is a global technology company that provides a range of hardware, software, and
services, including AI solutions. Their AI portfolio includes purpose-built
infrastructure for AI workloads, including Dell PowerScale storage systems optimized
for AI data management.
## PowerScale
Dell [PowerScale](https://www.dell.com/en-us/shop/powerscale-family/sf/powerscale) is
an enterprise scale-out storage system hosting the industry-leading OneFS filesystem,
which can be run on-prem or deployed in the cloud.
### Installation and Setup
```bash
pip install powerscale-rag-connector
```
### Document loaders
See details on available loaders [here](/docs/integrations/document_loaders/powerscale).

View File

@@ -0,0 +1,65 @@
# Discord
> [Discord](https://discord.com/) is an instant messaging, voice, and video communication platform widely used by communities of all types.
## Installation and Setup
Install the `langchain-discord-shikenso` package:
```bash
pip install langchain-discord-shikenso
```
You must provide a bot token via an environment variable so the tools can authenticate with the Discord API:
```bash
export DISCORD_BOT_TOKEN="your-discord-bot-token"
```
If `DISCORD_BOT_TOKEN` is not set, the tools will raise a `ValueError` when instantiated.
---
## Tools
Below is a snippet showing how you can read and send messages in Discord. For more details, see the [documentation for Discord tools](/docs/integrations/tools/discord).
```python
from langchain_discord.tools.discord_read_messages import DiscordReadMessages
from langchain_discord.tools.discord_send_messages import DiscordSendMessage
# Create tool instances
read_tool = DiscordReadMessages()
send_tool = DiscordSendMessage()
# Example: Read the last 3 messages from channel 1234567890
read_result = read_tool({"channel_id": "1234567890", "limit": 3})
print(read_result)
# Example: Send a message to channel 1234567890
send_result = send_tool({"channel_id": "1234567890", "message": "Hello from Markdown example!"})
print(send_result)
```
---
## Toolkit
`DiscordToolkit` groups multiple Discord-related tools into a single interface. For a usage example, see [the Discord toolkit docs](/docs/integrations/tools/discord).
```python
from langchain_discord.toolkits import DiscordToolkit
toolkit = DiscordToolkit()
tools = toolkit.get_tools()
read_tool = tools[0] # DiscordReadMessages
send_tool = tools[1] # DiscordSendMessage
```
---
## Future Integrations
Additional integrations (e.g., document loaders, chat loaders) could be added for Discord.
Check the [Discord Developer Docs](https://discord.com/developers/docs/intro) for more information, and watch for updates or advanced usage examples in the [langchain_discord GitHub repo](https://github.com/Shikenso-Analytics/langchain-discord).

View File

@@ -1,4 +1,4 @@
# Discord
# Discord (community loader)
>[Discord](https://discord.com/) is a VoIP and instant messaging social platform. Users have the ability to communicate
> with voice calls, video calls, text messaging, media and files in private chats or as part of communities called

View File

@@ -27,5 +27,5 @@ from langchain_community.agent_toolkits.gitlab.toolkit import GitLabToolkit
Tool for interacting with the GitLab API.
```python
from langchain_community.tools.github.tool import GitHubAction
from langchain_community.tools.gitlab.tool import GitLabAction
```

Some files were not shown because too many files have changed in this diff.