Compare commits

...

273 Commits

Author SHA1 Message Date
Chester Curme
a85e0aed5f implement in core 2025-04-24 16:36:59 -04:00
ccurme
a7903280dd openai[patch]: delete redundant tests (#31004)
These are covered by standard tests.
2025-04-24 17:56:32 +00:00
Kyle Jeong
d0f0d1f966 [docs/community]: langchain docs + browserbaseloader fix (#30973)
community: fix browserbase integration
docs: update docs

- **Description:** Updated BrowserbaseLoader to use the new python sdk.
- **Issue:** update browserbase integration with langchain
- **Dependencies:** n/a
- **Twitter handle:** @kylejeong21

2025-04-24 13:38:49 -04:00
ccurme
403fae8eec core: release 0.3.56 (#31000) 2025-04-24 13:22:31 -04:00
Jacob Lee
d6b50ad3f6 docs: Update Google Analytics tag in docs (#31001) 2025-04-24 10:19:10 -07:00
ccurme
10a9c24dae openai: fix streaming reasoning without summaries (#30999)
Following https://github.com/langchain-ai/langchain/pull/30909: need to
retain "empty" reasoning output when streaming, e.g.,
```python
{'id': 'rs_...', 'summary': [], 'type': 'reasoning'}
```
Tested by existing integration tests, which are currently failing.
2025-04-24 16:01:45 +00:00
ccurme
8fc7a723b9 core: release 0.3.56rc1 (#30998) 2025-04-24 15:09:44 +00:00
ccurme
f4863f82e2 core[patch]: fix edge cases for _is_openai_data_block (#30997) 2025-04-24 10:48:52 -04:00
Philipp Schmid
ae4b6380d9 Documentation: Add Google Gemini dropdown (#30995)
This PR adds Google Gemini (via AI Studio and Gemini API). Feel free to
change the ordering, if needed.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 10:00:16 -04:00
Philipp Schmid
ffbc64c72a Documentation: Improve structure of Google integrations page (#30992)
This PR restructures the main Google integrations documentation page
(`docs/docs/integrations/providers/google.mdx`) for better clarity and
updates content.

**Key changes:**

* **Separated Sections:** Divided integrations into distinct `Google
Generative AI (Gemini API & AI Studio)`, `Google Cloud`, and `Other
Google Products` sections.
* **Updated Generative AI:** Refreshed the introduction and the `Google
Generative AI` section with current information and quickstart examples
for the Gemini API via `langchain-google-genai`.
* **Reorganized Content:** Moved non-Cloud Platform specific
integrations (e.g., Drive, GMail, Search tools, ScaNN) to the `Other
Google Products` section.
* **Cleaned Up:** Minor improvements to descriptions and code snippets.

This aims to make it easier for users to find the relevant Google
integrations based on whether they are using the Gemini API directly or
Google Cloud services.

| Before                | After      |
|-----------------------|------------|
| ![Screenshot 2025-04-24 at 14 56
23](https://github.com/user-attachments/assets/ff967ec8-a833-4e8f-8015-61af8a4fac8b)
| ![Screenshot 2025-04-24 at 14 56
15](https://github.com/user-attachments/assets/179163f1-e805-484a-bbf6-99f05e117b36)
|

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 09:58:46 -04:00
Jacob Lee
6b0b317cb5 feat(core): Autogenerate filenames for when converting file content blocks to OpenAI format (#30984)
CC @ccurme
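A hedged sketch of the behavior this enables, assuming the standard file block format from #30746 and `convert_to_openai_messages` from `langchain_core.messages`:

```python
# Hedged sketch: a standard file block without an explicit filename; per this
# change, conversion to OpenAI format generates a placeholder filename rather
# than erroring (the exact generated name is an assumption).
from langchain_core.messages import HumanMessage, convert_to_openai_messages

block = {
    "type": "file",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "application/pdf",
}
oai_messages = convert_to_openai_messages([HumanMessage(content=[block])])
```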

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-24 13:36:31 +00:00
ccurme
21962e2201 docs: temporarily disable milvus in API ref build (#30996) 2025-04-24 09:31:23 -04:00
Behrad Hemati
1eb0bdadfa community: add indexname to other functions in opensearch (#30987)
- [x] **PR title**: "community: add indexname to other functions in
opensearch"



- [x] **PR message**:
- **Description:** Add the ability to override the index name when provided in
the kwargs of sub-functions. When used in a WSGI application, it's crucial to
be able to change parameters dynamically.
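A hedged sketch of the override, assuming `OpenSearchVectorSearch` from `langchain_community` and a local OpenSearch instance; the per-call `index_name` kwarg is per this PR's description:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import OpenSearchVectorSearch

store = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="default-index",
    embedding_function=FakeEmbeddings(size=8),
)
# Sub-functions now accept a per-call index override via kwargs:
docs = store.max_marginal_relevance_search("query", index_name="tenant-b-index")
```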


2025-04-24 08:59:33 -04:00
Nicky Parseghian
7ecdac5240 community: Strip URLs from sitemap. (#30830)
- **Description:** Simply strips the loc value when building the element.
- **Issue:** Fixes #30829
2025-04-23 18:18:42 -04:00
ccurme
faef3e5d50 core, standard-tests: support PDF and audio input in Chat Completions format (#30979)
Chat models currently implement support for:
- images in OpenAI Chat Completions format
- other multimodal types (e.g., PDF and audio) in a cross-provider
[standard
format](https://python.langchain.com/docs/how_to/multimodal_inputs/)

Here we update core to extend support to PDF and audio input in Chat
Completions format. **If an OAI-format PDF or audio content block is
passed into any chat model, it will be transformed to the LangChain
standard format**. We assume that any chat model supporting OAI-format
PDF or audio has implemented support for the standard format.
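For illustration, an OAI-format PDF content block of the kind now accepted (shape per OpenAI's Chat Completions API; the data URL is a placeholder):

```python
# OpenAI Chat Completions-format file block; per this change it is translated
# to the LangChain standard format before being passed to the provider.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Summarize this document."},
        {
            "type": "file",
            "file": {
                "filename": "report.pdf",
                "file_data": "data:application/pdf;base64,<base64 string>",
            },
        },
    ],
}
```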
2025-04-23 18:32:51 +00:00
Bagatur
d4fc734250 core[patch]: update dict prompt template (#30967)
Align with JS changes made in
https://github.com/langchain-ai/langchainjs/pull/8043
2025-04-23 10:04:50 -07:00
ccurme
4bc70766b5 core, openai: support standard multi-modal blocks in convert_to_openai_messages (#30968) 2025-04-23 11:20:44 -04:00
ccurme
e4877e5ef1 fireworks: release 0.3.0 (#30977) 2025-04-23 10:08:38 -04:00
Christophe Bornet
8c5ae108dd text-splitters: Set strict mypy rules (#30900)
* Add strict mypy rules
* Fix mypy violations
* Add error codes to all type ignores
* Add ruff rule PGH003
* Bump mypy version to 1.15
2025-04-22 20:41:24 -07:00
ccurme
eedda164c6 fireworks[minor]: remove default model and temperature (#30965)
`mixtral-8x-7b-instruct` was recently retired from Fireworks Serverless.

Here we remove the default model altogether, so that the model must be
explicitly specified on init:
```python
ChatFireworks(model="accounts/fireworks/models/llama-v3p1-70b-instruct")  # for example
```

We also set a null default for `temperature`, which previously defaulted
to 0.0. This parameter will no longer be included in request payloads
unless it is explicitly provided.
2025-04-22 15:58:58 -04:00
Grant
4be55f7c89 docs: fix typo at 175 (#30966)
**Description:** Corrected pre-buit to pre-built.
**Issue:** Little typo.
2025-04-22 18:13:07 +00:00
CLOVA Studio 개발
577cb53a00 community: update Naver integration to use langchain-naver package and improve documentation (#30956)
## **Description:** 
This PR was requested after the `langchain-naver` partner-managed
packages were completed.
We build our package as requested in [this
comment](https://github.com/langchain-ai/langchain/pull/29243#issuecomment-2595222791)
and the initial version is now uploaded to
[pypi](https://pypi.org/project/langchain-naver/).
So we've updated some of our documents with the changed features and
instructions for downloading our partner-managed package.

## **Dependencies:** 

https://github.com/langchain-ai/langchain/pull/29243#issuecomment-2595222791

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-22 12:00:10 -04:00
ccurme
a7c1bccd6a openai[patch]: remove xfails from image token counting tests (#30963)
These appear to be passing again.
2025-04-22 15:55:33 +00:00
ccurme
25d77aa8b4 community: release 0.3.22 (#30962) 2025-04-22 15:34:47 +00:00
ccurme
59fd4cb4c0 docs: update package registry sort order (#30960) 2025-04-22 15:27:32 +00:00
ccurme
b8c454b42b langchain: release 0.3.24 (#30959) 2025-04-22 11:23:34 -04:00
Dmitrii Rashchenko
a43df006de Support of openai reasoning summary streaming (#30909)
**langchain_openai: Support of reasoning summary streaming**

**Description:**
OpenAI API now supports streaming reasoning summaries for reasoning
models (o1, o3, o3-mini, o4-mini). More info about it:
https://platform.openai.com/docs/guides/reasoning#reasoning-summaries

It is supported only in the Responses API (not the Completions API), so you
need to create the LangChain OpenAI model as follows to enable streaming of
reasoning summaries:

```
llm = ChatOpenAI(
    model="o4-mini", # also o1, o3, o3-mini support reasoning streaming
    use_responses_api=True,  # reasoning streaming works only with responses api, not completion api
    model_kwargs={
        "reasoning": {
            "effort": "high",  # also "low" and "medium" supported
            "summary": "auto"  # some models support "concise" summary, some "detailed", but auto will always work
        }
    }
)
```

Now, if you stream events from llm:

```
async for event in llm.astream_events(prompt, version="v2"):
    print(event)
```

or

```
for chunk in llm.stream(prompt):
    print (chunk)
```

OpenAI API will send you new types of events:
`response.reasoning_summary_text.added`
`response.reasoning_summary_text.delta`
`response.reasoning_summary_text.done`

These events are new, so they were previously ignored. I have added support
for them in `_convert_responses_chunk_to_generation_chunk`, so reasoning
chunks or the full reasoning summary are added to the chunk's
`additional_kwargs`.

Example of how this reasoning summary may be printed:

```
    async for event in llm.astream_events(prompt, version="v2"):
        if event["event"] == "on_chat_model_stream":
            chunk: AIMessageChunk = event["data"]["chunk"]
            if "reasoning_summary_chunk" in chunk.additional_kwargs:
                print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
            elif "reasoning_summary" in chunk.additional_kwargs:
                print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
            elif chunk.content and chunk.content[0]["type"] == "text":
                print(chunk.content[0]["text"], end="")
```

or

```
    for chunk in llm.stream(prompt):
        if "reasoning_summary_chunk" in chunk.additional_kwargs:
            print(chunk.additional_kwargs["reasoning_summary_chunk"], end="")
        elif "reasoning_summary" in chunk.additional_kwargs:
            print("\n\nFull reasoning step summary:", chunk.additional_kwargs["reasoning_summary"])
        elif chunk.content and chunk.content[0]["type"] == "text":
            print(chunk.content[0]["text"], end="")
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-22 14:51:13 +00:00
Alexander Ng
0f6fa34372 Community: Valyu Integration docs (#30926)
PR title:
docs: add Valyu integration documentation
Description:
This PR adds documentation and example notebooks for the Valyu
integration, including retriever and tool usage.
Issue:
N/A
Dependencies:
No new dependencies.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-21 17:43:00 -04:00
Jacob Mansdorfer
e8a84b05a4 Community: Adding tool calling and some new parameters to the langchain-predictionguard docs. (#30953)
- **Description:** Updates the documentation for the
langchain-predictionguard package, adding tool calling functionality and
some new parameters.
2025-04-21 17:01:57 -04:00
ccurme
8574442c57 core[patch]: release 0.3.55 (#30952) 2025-04-21 17:56:24 +00:00
ccurme
920d504e47 fireworks[patch]: update model in LLM integration tests (#30951)
`mixtral-8x7b-instruct` has been retired.
2025-04-21 17:53:27 +00:00
Anton Masalovich
1f3054502e community: fix cost calculations for 4.1 and o4 in OpenAI callback (#30899)
**Issue:** #30898
2025-04-21 10:59:47 -04:00
Ahmed Tammaa
589bc19890 anthropic[patch]: make description optional on AnthropicTool (#30935)
PR Summary

This change adds a fallback in ChatAnthropic.with_structured_output() to
handle Pydantic models that don’t include a docstring. Without it,
calling:
```py
from pydantic import BaseModel
from langchain_anthropic import ChatAnthropic

class SampleModel(BaseModel):
    sample_field: str

llm = ChatAnthropic(
    model="claude-3-7-sonnet-latest"
).with_structured_output(SampleModel.model_json_schema())

llm.invoke("test")
```
will raise a
```
KeyError: 'description'
```
because Pydantic omits the description field when no docstring is
present.

This issue doesn’t occur when using ChatOpenAI or if you add a docstring
to the model:
```py
from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class SampleModel(BaseModel):
    """Schema for sample_field output."""
    sample_field: str

llm = ChatOpenAI(
    model="gpt-4o-mini"
).with_structured_output(SampleModel.model_json_schema())

llm.invoke("test")
```

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 10:44:39 -04:00
Nuno Campos
27296bdb0c core: Make Graph.Node.data optional (#30943)
2025-04-21 07:18:36 -07:00
Pushpa Kumar
0e9d0dbc10 docs: dynamic copyright year in API ref (#30944)
**PR title:** `docs: make footer year dynamic in API reference docs`

**Description:** Update `docs/api_reference/conf.py` to make the copyright
year dynamic (on
[https://python.langchain.com/api_reference/](https://python.langchain.com/api_reference/)).

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 14:10:14 +00:00
Ahmed Tammaa
de56c31672 core: Improve OutputParser error messaging when model output is truncated (max_tokens) (#30936)
Addresses #30158
When using the output parser—either in a chain or standalone—hitting
max_tokens triggers a misleading “missing variable” error instead of
indicating the output was truncated. This subtle bug often surfaces with
Anthropic models.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-21 10:06:18 -04:00
xsai9101
335f089d6a Community: Add bind variable support for oracle adb docloader (#30937)
PR title:
Community: Add bind variable support for oracle adb docloader
Description:
This PR adds support for using bind variables with the Oracle ADB doc loader
class, including a minor documentation change.
Issue:
N/A
Dependencies:
No new dependencies.
2025-04-21 08:47:33 -04:00
Ikko Eltociear Ashimine
9418c0d8a5 docs: update tableau.ipynb (#30938)
Initalize -> Initialize
2025-04-21 08:43:29 -04:00
Aubrey Ford
23f701b08e langchain_community: OpenAIEmbeddings not respecting chunk_size argument (#30946)
This is a follow-on PR to go with the identical changes that were made
in partners/openai.

Previous PR:  https://github.com/langchain-ai/langchain/pull/30757

When calling embed_documents and providing a chunk_size argument, that
argument is ignored when OpenAIEmbeddings is instantiated with its
default configuration (where check_embedding_ctx_length=True).

_get_len_safe_embeddings specifies a chunk_size parameter but it's not
being passed through in embed_documents, which is its only caller. This
appears to be an oversight, especially given that the
_get_len_safe_embeddings docstring states it should respect "the set
embedding context length and chunk size."

Developers typically expect method parameters to take effect (also, take
precedence) when explicitly provided, especially when instantiating
using defaults. I was confused as to why my API calls were being
rejected regardless of the chunk size I provided.
2025-04-21 08:39:07 -04:00
Aubrey Ford
b344f34635 partners/openai: OpenAIEmbeddings not respecting chunk_size argument (#30757)
When calling `embed_documents` and providing a `chunk_size` argument,
that argument is ignored when `OpenAIEmbeddings` is instantiated with
its default configuration (where `check_embedding_ctx_length=True`).

`_get_len_safe_embeddings` specifies a `chunk_size` parameter but it's
not being passed through in `embed_documents`, which is its only caller.
This appears to be an oversight, especially given that the
`_get_len_safe_embeddings` docstring states it should respect "the set
embedding context length and chunk size."

Developers typically expect method parameters to take effect (also, take
precedence) when explicitly provided, especially when instantiating
using defaults. I was confused as to why my API calls were being
rejected regardless of the chunk size I provided.

This bug also exists in the langchain_community package. I can add that to
this PR if requested; otherwise I will create a new one once this passes.
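A minimal sketch of the call this fixes, assuming `langchain_openai` and a configured API key:

```python
# Before this fix, the explicit chunk_size below was silently ignored when
# check_embedding_ctx_length=True (the default); now it is passed through.
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
vectors = embeddings.embed_documents(["doc one", "doc two"], chunk_size=16)
```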
2025-04-18 15:27:27 -04:00
Konsti-s
017c8079e1 partners: ChatAnthropic supports urls (#30809)
**Description:**
partners-anthropic: ChatAnthropic supports b64 and urls in the
part[image_url][url] message variable

**Issue**:
ChatAnthropic right now only supports b64 encoded images in the
part[image_url][url] message variable. This PR enables ChatAnthropic to
also accept image urls in said variable and makes it compatible with
OpenAI messages to make model switching easier.
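A hedged sketch of the OpenAI-style message this enables, assuming `langchain_anthropic` and a configured API key:

```python
# Per this PR, ChatAnthropic accepts a plain URL here, not just base64 data.
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://path.to/image.png"}},
    ],
}
# response = llm.invoke([message])
```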

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-18 15:15:45 -04:00
Volodymyr Tkachuk
d0cd115356 community: Add deprecation decorator to SingleStore community integrations (#30846)
SingleStore integration now has its own package, `langchain-singlestore`, so
the community implementation will no longer be maintained.

Added `deprecated` decorator to `SingleStoreDBChatMessageHistory`,
`SingleStoreDBSemanticCache`, and `SingleStoreDB` classes in the
community package.

**Dependencies:** https://github.com/langchain-ai/langchain/pull/30841

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-18 12:58:39 -04:00
Alejandro Rodríguez
34ddfba76b community: support usage_metadata for litellm streaming calls (#30683)
Support "usage_metadata" for LiteLLM streaming calls.

This is a follow-up to
https://github.com/langchain-ai/langchain/pull/30625, which tackled
non-streaming calls.
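A hedged sketch of reading the newly populated field, assuming `ChatLiteLLM` from `langchain_community` and configured provider credentials:

```python
from langchain_community.chat_models import ChatLiteLLM

llm = ChatLiteLLM(model="gpt-4o-mini")
full = None
for chunk in llm.stream("Hello!"):
    full = chunk if full is None else full + chunk
print(full.usage_metadata)  # populated for streaming calls per this PR
```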

2025-04-18 12:50:32 -04:00
Volodymyr Tkachuk
5ffcd01c41 docs: Register langchain-singlestore integration (#30841)
I created and published the `langchain-singlestore` integration package, which
should replace the SingleStoreDB community implementation.
2025-04-18 12:11:33 -04:00
ccurme
096f0e5966 core[patch]: de-beta usage callback (#30928) 2025-04-18 15:45:09 +00:00
ccurme
46de0866db infra: add langchain-google-genai to monorepo test deps and update notebook cassettes (#30925)
Following https://github.com/langchain-ai/langchain/pull/30880
2025-04-18 11:16:12 -04:00
Behrad Hemati
d624a475e4 community: change metadata in opensearch mmr (#30921)
- **Description:** Including `metadata_field` in
`max_marginal_relevance_search()` would result in an error; changed the logic
to be similar to how it's handled in `similarity_search`, where it can be any
field or simply `"*"` to include every field.
2025-04-18 10:10:23 -04:00
rylativity
dbf9986d44 langchain-ollama (partners) / langchain-core: allow passing ChatMessages to Ollama (including arbitrary roles) (#30411)
Replacement for PR #30191 (@ccurme)

**Description**: currently, ChatOllama [will raise a value error if a
ChatMessage is passed to
it](https://github.com/langchain-ai/langchain/blob/master/libs/partners/ollama/langchain_ollama/chat_models.py#L514),
as described in
https://github.com/langchain-ai/langchain/pull/30147#issuecomment-2708932481.

Furthermore, ollama-python is removing the limitations on valid roles
that can be passed through chat messages to a model in ollama -
https://github.com/ollama/ollama-python/pull/462#event-16917810634.

This PR removes the role limitations imposed by langchain and enables
passing langchain ChatMessages with arbitrary 'role' values through the
langchain ChatOllama class to the underlying ollama-python Client.
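A hedged sketch of what this permits, assuming a local Ollama server and an illustrative role name:

```python
# Previously this raised a ValueError; arbitrary roles now pass through
# to the underlying ollama-python Client.
from langchain_core.messages import ChatMessage
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.1")
# response = llm.invoke([ChatMessage(role="control", content="thinking")])
```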

As this PR relies on [merged but unreleased functionality in
ollama-python](
https://github.com/ollama/ollama-python/pull/462#event-16917810634), I
have temporarily pointed the ollama package source to the main branch of
the ollama-python github repo.

Format, lint, and tests of the new functionality are passing. The issue with
the recently added ChatOllama tests has been resolved.

**Issue**: resolves #30122 (related to ollama issue
https://github.com/ollama/ollama/issues/8955)

**Dependencies**: no new dependencies

[x] PR title
[x] PR message
[x] Lint and test: format, lint, and test all running successfully and
passing

---------

Co-authored-by: Ryan Stewart <ryanstewart@Ryans-MacBook-Pro.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-18 10:07:07 -04:00
Christophe Bornet
0c723af4b0 langchain[lint]: fix mypy type ignores (#30894)
* Remove unused ignores
* Add type ignore codes
* Add mypy rule `warn_unused_ignores`
* Add ruff rule PGH003

NB: some `type: ignore[unused-ignore]` are added because the ignores are
needed when `extended_testing_deps.txt` deps are installed.
2025-04-17 17:54:34 -04:00
ccurme
f14bcee525 docs: update multi-modal docs (#30880)
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-04-17 16:03:05 -04:00
Sydney Runkle
98c357b3d7 core: release 0.3.54 (#30911) 2025-04-17 14:27:06 -04:00
Vadym Barda
d2cbfa379f core[patch]: add retries and better messages to draw_mermaid_png (#30881) 2025-04-17 18:25:37 +00:00
Sydney Runkle
75e50a3efd core[patch]: Raise AttributeError (instead of ModuleNotFoundError) in custom __getattr__ (#30905)
Follow up to https://github.com/langchain-ai/langchain/pull/30769,
fixing the regression reported
[here](https://github.com/langchain-ai/langchain/pull/30769#issuecomment-2807483610),
thanks @krassowski for the report!

Fix inspired by https://github.com/PrefectHQ/prefect/pull/16172/files

Other changes:
* Using tuples for `__all__`, except in `output_parsers` bc of a list
namespace conflict
* Using a helper function for imports due to repeated logic across
`__init__.py` files becoming hard to maintain.
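A minimal sketch of the corrected pattern (module and attribute names are illustrative, not the actual `langchain_core` tables):

```python
# Lazy import via module-level __getattr__ (PEP 562); raising AttributeError
# (not ModuleNotFoundError) for unknown names keeps hasattr() and
# dir()-based tooling working as expected.
from importlib import import_module

_dynamic_imports = {"HumanMessage": "langchain_core.messages"}

def __getattr__(name: str):
    module_name = _dynamic_imports.get(name)
    if module_name is None:
        raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
    return getattr(import_module(module_name), name)
```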

Co-authored-by: Michał Krassowski <5832902+krassowski@users.noreply.github.com>
2025-04-17 14:15:28 -04:00
ccurme
61d2dc011e openai: release 0.3.14 (#30908) 2025-04-17 10:49:14 -04:00
ccurme
f0f90c4d88 anthropic: release 0.3.12 (#30907) 2025-04-17 14:45:12 +00:00
ccurme
f01b89df56 standard-tests: release 0.3.19 (#30906) 2025-04-17 10:37:44 -04:00
ccurme
add6a78f98 standard-tests, openai[patch]: add support standard audio inputs (#30904) 2025-04-17 10:30:57 -04:00
ccurme
2c2db1ab69 core: release 0.3.53 (#30901) 2025-04-17 13:10:32 +00:00
ccurme
86d51f6be6 multiple: permit optional fields on multimodal content blocks (#30887)
Instead of stuffing provider-specific fields in `metadata`, they can go
directly on the content block.
2025-04-17 12:48:46 +00:00
湛露先生
83b66cb916 doc: clean doc word description. (#30895)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-17 08:04:37 -04:00
湛露先生
ff2930c119 partners: bug fix check_imports.py exit code. (#30897)
Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-17 08:02:23 -04:00
ccurme
b36c2bf833 docs: update Bedrock chat model page (#30883)
- document prompt caching
- feature ChatBedrockConverse throughout
2025-04-16 16:55:14 -04:00
ccurme
9e82f1df4e docs: minor clean up in ChatOpenAI docs (#30884) 2025-04-16 16:08:43 -04:00
ccurme
fa362189a1 docs: document OpenAI reasoning summaries (#30882) 2025-04-16 19:21:14 +00:00
Sydney Runkle
88fce67724 core: Removing unnecessary pydantic core schema rebuilds (#30848)
We only need to rebuild model schemas if type annotation information
isn't available during declaration - that shouldn't be the case for
these types corrected here.

Need to do more thorough testing to make sure these structures have
complete schemas, but hopefully this boosts startup / import time.
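A minimal sketch of the kind of call being pruned, assuming Pydantic v2:

```python
from pydantic import BaseModel

class Message(BaseModel):
    content: str

# model_rebuild() is only needed when forward references can't be resolved
# at class definition time; for types like this it is redundant, and such
# calls were removed.
Message.model_rebuild()
```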
2025-04-16 12:00:08 -04:00
rrozanski-smabbler
60d8ade078 Galaxia integration (#30792)
- [ ] **PR title**: "docs: adding Smabbler's Galaxia integration"

- [ ] **PR message**:  **Twitter handle:** @Galaxia_graph

I'm adding docs here + added the package to the packages.yml. I didn't
add a unit test, because this integration is just a thin wrapper on top
of our API. There isn't much left to test if you mock it away.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-16 10:39:04 -04:00
ccurme
ca39680d2a ollama: release 0.3.2 (#30865) 2025-04-16 09:14:57 -04:00
Sydney Runkle
4af3f89a3a docs: enforce newlines when signature exceeds char threshold (#30866)
Below is an example of the single line vs new multiline approach.

Before this PR:

<img width="831" alt="Screenshot 2025-04-15 at 8 56 26 PM"
src="https://github.com/user-attachments/assets/0c0277bd-2441-4b22-a536-e16984fd91b7"
/>

After this PR:

<img width="829" alt="Screenshot 2025-04-15 at 8 56 13 PM"
src="https://github.com/user-attachments/assets/e16bfe38-bb17-48ba-a642-e8ff6b48e841"
/>
2025-04-16 08:45:40 -04:00
milosz-l
4ff576e37d langchain: infer Perplexity provider for sonar model prefix (#30861)
**Description:** This PR adds provider inference logic to
`init_chat_model` for Perplexity models that use the "sonar..." prefix
(`sonar`, `sonar-pro`, `sonar-reasoning`, `sonar-reasoning-pro` or
`sonar-deep-research`).

This allows users to initialize these models by simply passing the model
name, without needing to explicitly set `model_provider="perplexity"`.

The docstring for `init_chat_model` has also been updated to reflect
this new inference rule.
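A minimal sketch of the new shorthand (assumes `langchain-perplexity` is installed and `PPLX_API_KEY` is set):

```python
from langchain.chat_models import init_chat_model

# Provider is now inferred as "perplexity" from the "sonar" prefix:
llm = init_chat_model("sonar-pro")
```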
2025-04-15 18:17:21 -04:00
ccurme
085baef926 ollama[patch]: support standard image format (#30864)
Following https://github.com/langchain-ai/langchain/pull/30746
2025-04-15 22:14:50 +00:00
ccurme
47ded80b64 ollama[patch]: fix generation info (#30863)
https://github.com/langchain-ai/langchain/pull/30778 (not released)
broke all invocation modes of ChatOllama (intent was to remove
`"message"` from `generation_info`, but we turned `generation_info` into
`stream_resp["message"]`), resulting in validation errors.
2025-04-15 19:22:58 +00:00
Sydney Runkle
cf2697ec53 chroma: release 0.2.3 (#30860) 2025-04-15 14:11:23 -04:00
ccurme
8e9569cbc8 perplexity: release 0.1.1 (#30859) 2025-04-15 18:02:15 +00:00
ccurme
dd5f5902e3 openai: release 0.3.13 (#30858) 2025-04-15 17:58:12 +00:00
ccurme
3382ee8f57 anthropic: release 0.3.11 (#30857) 2025-04-15 17:57:00 +00:00
Sydney Runkle
ef5aff3b6c core[fix]: Fix __dir__ in __init__.py for output_parsers module (#30856)
We have a `list.py` file which causes a namespace conflict with `list`
from stdlib, unfortunately.

`__all__` is already a list, so no need to coerce.
2025-04-15 13:09:13 -04:00
Christophe Bornet
a4ca1fe0ed core: Remove some noqa (#30855) 2025-04-15 13:08:40 -04:00
ccurme
6baf5c05a6 standard-tests: release 0.3.18 (#30854) 2025-04-15 16:56:54 +00:00
ccurme
c6a8663afb infra: run old standard-tests on core releases (#30852)
On core releases, we check out the latest published package for
langchain-openai and langchain-anthropic and run their tests against the
candidate version of langchain-core.

Because these packages have a local install of langchain-tests, we also
need to check out the previous version of langchain-tests.
2025-04-15 16:04:08 +00:00
Sydney Runkle
1f5e207379 core[fix]: remove load from dynamic imports dict (#30849) 2025-04-15 12:02:46 -04:00
ccurme
7240458619 core: release 0.3.52 (#30850) 2025-04-15 15:28:31 +00:00
Sydney Runkle
6aa5494a75 Fix from langchain_core.load.load import load import (#30843)
TL;DR: you can't optimize imports with a lazy `__getattr__` if there is
a namespace conflict with a module name and an attribute name. We should
avoid introducing conflicts like this in the future.

This PR fixes a bug introduced by my lazy imports PR:
https://github.com/langchain-ai/langchain/pull/30769.

In `langchain_core`, we have utilities for loading and dumping data.
Unfortunately, one of those utilities is a `load` function, located in
`langchain_core/load/load.py`. To make this function more visible, we
make it accessible at the top level `langchain_core.load` module via
importing the function in `langchain_core/load/__init__.py`.

So, either of these imports should work:

```py
from langchain_core.load import load
from langchain_core.load.load import load
```

As you can tell, this is already a bit confusing. You'd think that the
first import would produce the module `load`, but because of the
`__init__.py` shortcut, both produce the function `load`.

<details> More on why the lazy imports PR broke this support...

All was well, except when the absolute import was run first, see the
last snippet:

```
>>> from langchain_core.load import load
>>> load
<function load at 0x101c320c0>
```

```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x1069360c0>
```

```
>>> from langchain_core.load import load
>>> load
<function load at 0x10692e0c0>
>>> from langchain_core.load.load import load
>>> load
<function load at 0x10692e0c0>
```

```
>>> from langchain_core.load.load import load
>>> load
<function load at 0x101e2e0c0>
>>> from langchain_core.load import load
>>> load
<module 'langchain_core.load.load' from '/Users/sydney_runkle/oss/langchain/libs/core/langchain_core/load/load.py'>
```

In this case, the function `load` wasn't stored in the globals cache for
the `langchain_core.load` module (by the lazy import logic), so Python
defers to a module import.

</details>

New `langchain` tongue twister 😜: we've created a problem for ourselves
because you have to load the load function from the load file in the
load module 😨.
2025-04-15 11:06:13 -04:00
Bagatur
7262de4217 core[patch]: dict chat prompt template support (#25674)
- Support passing dicts as templates to chat prompt template
- Support making *any* attribute on a message a runtime variable
- Significantly simpler than trying to update our existing prompt
template classes

```python
    template = ChatPromptTemplate(
        [
            {
                "role": "assistant",
                "content": [
                    {
                        "type": "text",
                        "text": "{text1}",
                        "cache_control": {"type": "ephemeral"},
                    },
                    {"type": "image_url", "image_url": {"path": "{local_image_path}"}},
                ],
                "name": "{name1}",
                "tool_calls": [
                    {
                        "name": "{tool_name1}",
                        "args": {"arg1": "{tool_arg1}"},
                        "id": "1",
                        "type": "tool_call",
                    }
                ],
            },
            {
                "role": "tool",
                "content": "{tool_content2}",
                "tool_call_id": "1",
                "name": "{tool_name1}",
            },
        ]
    )

```

will likely close #25514 if we like this idea and update to use this
logic

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-15 11:00:49 -04:00
ccurme
9cfe6bcacd multiple: multi-modal content blocks (#30746)
Introduces standard content block format for images, audio, and files.

## Examples

Image from url:
```
{
    "type": "image",
    "source_type": "url",
    "url": "https://path.to.image.png",
}
```


Image, in-line data:
```
{
    "type": "image",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "image/png",
}
```


PDF, in-line data:
```
{
    "type": "file",
    "source_type": "base64",
    "data": "<base64 string>",
    "mime_type": "application/pdf",
}
```


File from ID:
```
{
    "type": "file",
    "source_type": "id",
    "id": "file-abc123",
}
```


Plain-text file:
```
{
    "type": "file",
    "source_type": "text",
    "text": "foo bar",
}
```
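A hedged usage sketch: passing one of these standard blocks to a chat model (`llm` stands in for any model with image support):

```python
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {"type": "image", "source_type": "url", "url": "https://path.to.image.png"},
    ]
)
# response = llm.invoke([message])
```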
2025-04-15 09:48:06 -04:00
湛露先生
09438857e8 docs: fix tools_human.ipynb url 404. (#30831)
Fix the 404 pages.

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-15 09:22:13 -04:00
Sydney Runkle
e3b6cddd5e core: codspeed tweak to make sure it runs on master (#30845) 2025-04-15 13:03:44 +00:00
Sydney Runkle
59f2c9e737 Tinkering with CodSpeed (#30824)
Fix CI to trigger benchmarks on `run-codspeed-benchmarks` label addition

Reduce scope of async benchmark to save time on CI

Waiting to merge this PR until we figure out how to use walltime on
local runners.
2025-04-15 08:49:09 -04:00
William FH
ed5c4805f6 Consistent docstring indentation (#30834)
Should be 4 spaces instead of 3.
2025-04-14 19:04:35 -07:00
Joey Constantino
2282762528 docs: small Tableau docs update (#30827)
Description: small Tableau docs update
Issue: adds required environment variable
Dependencies: tableau-langchain

---------

Co-authored-by: Joe Constantino <joe.constantino@joecons-ltm6v86.internal.salesforce.com>
2025-04-14 15:34:54 -04:00
ccurme
f7c4965fb6 openai[patch]: update imports in test (#30828)
Quick fix to unblock CI, will need to address in core separately.
2025-04-14 19:33:38 +00:00
Sydney Runkle
edb6a23aea core[lint]: fix issue with unused ignore in __init__.py files (#30825)
Fixing a race condition between
https://github.com/langchain-ai/langchain/pull/30769 and
https://github.com/langchain-ai/langchain/pull/30737
2025-04-14 17:57:00 +00:00
湛露先生
3a64c7195f community: redis tool typos fix (#30811) 2025-04-14 09:01:36 -04:00
Sydney Runkle
4f69094b51 core[performance]: use custom __getattr__ in __init__.py files for lazy imports (#30769)
Most easily reviewed with the "hide whitespace" option toggled.

Seeing 10-50% speed ups in import time for common structures 🚀 

The general purpose of this PR is to lazily import structures within
`langchain_core.XXX_module.__init__.py` so that we're not eagerly
importing expensive dependencies (`pydantic`, `requests`, etc).

Analysis of flamegraphs generated with `importtime` motivated these
changes. For example, the one below demonstrates that importing
`HumanMessage` accidentally triggered imports for `importlib.metadata`,
`requests`, etc.

There's still much more to do on this front, and we can start digging
into our own internal code for optimizations now that we're less
concerned about external imports.

<img width="1210" alt="Screenshot 2025-04-11 at 1 10 54 PM"
src="https://github.com/user-attachments/assets/112a3fe7-24a9-4294-92c1-d5ae64df839e"
/>

I've tracked the improvements with some local benchmarks:

## `pytest-benchmark` results

| Name | Before (s) | After (s) | Delta (s) | % Change |
|-----------------------------|------------|-----------|-----------|----------|
| Document | 2.8683 | 1.2775 | -1.5908 | -55.46% |
| HumanMessage | 2.2358 | 1.1673 | -1.0685 | -47.79% |
| ChatPromptTemplate | 5.5235 | 2.9709 | -2.5526 | -46.22% |
| Runnable | 2.9423 | 1.7793 | -1.163 | -39.53% |
| InMemoryVectorStore | 3.1180 | 1.8417 | -1.2763 | -40.93% |
| RunnableLambda | 2.7385 | 1.8745 | -0.864 | -31.55% |
| tool | 5.1231 | 4.0771 | -1.046 | -20.42% |
| CallbackManager | 4.2263 | 3.4099 | -0.8164 | -19.32% |
| LangChainTracer | 3.8394 | 3.3101 | -0.5293 | -13.79% |
| BaseChatModel | 4.3317 | 3.8806 | -0.4511 | -10.41% |
| PydanticOutputParser | 3.2036 | 3.2995 | 0.0959 | 2.99% |
| InMemoryRateLimiter | 0.5311 | 0.5995 | 0.0684 | 12.88% |

Note that the apparent regressions for `InMemoryRateLimiter` and
`PydanticOutputParser` are just random noise; I'm getting comparable
numbers locally.

## Local CodSpeed results

We're still working on configuring CodSpeed on CI. The local usage
produced similar results.
2025-04-14 08:57:54 -04:00
Christophe Bornet
ada740b5b9 community: Add ruff rule PGH003 (#30812)
See https://docs.astral.sh/ruff/rules/blanket-type-ignore/

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-14 02:32:13 +00:00
ccurme
f005988e31 community[patch]: fix cost calculations for o3 in OpenAI callback (#30807)
Resolves https://github.com/langchain-ai/langchain/issues/30795
2025-04-13 15:20:46 +00:00
BoyuHu
446361a0d3 docs: fix typo (#30800)
2025-04-13 10:55:30 -04:00
Marina Gómez
afd457d8e1 perplexity[patch]: Fix #30767: Handle missing citations attribute in ChatPerplexity (#30805)
This PR fixes an issue where ChatPerplexity would raise an
AttributeError when the citations attribute was missing from the model
response (e.g., when using offline models like r1-1776).

The fix checks for the presence of citations, images, and
related_questions before attempting to access them, avoiding crashes in
models that don't provide these fields.

Tested locally with models that omit citations, and the fix works as
expected.
2025-04-13 09:24:05 -04:00
Christophe Bornet
42944f3499 core: Improve mypy config (#30737)
* Cleanup mypy config
* Add mypy `strict` rules except `disallow_any_generics`,
`warn_return_any` and `strict_equality` (TODO)
* Add mypy `strict_byte` rule
* Add mypy support for PEP702 `@deprecated` decorator
* Bump mypy version to 1.15

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-11 16:35:13 -04:00
mpb159753
bb2c2fd885 docs: Add openGauss vector store documentation (#30742)
Hey LangChain community! 👋 Excited to propose official documentation for
our new openGauss integration that brings powerful vector capabilities
to the stack!

### What's Inside 📦  
1. **Full Integration Guide**  
Introducing
[langchain-opengauss](https://pypi.org/project/langchain-opengauss/) on
PyPI - your new toolkit for:
   🔍 Native hybrid search (vectors + metadata)  
   🚀 Production-grade connection pooling  
   🧩 Automatic schema management  

2. **Rigorous Testing Passed**   
![Benchmark
Results](https://github.com/user-attachments/assets/ae3b21f7-aeea-4ae7-a142-f2aec57936a0)
   - 100% non-async test coverage  

P.S. The current implementation resides in my personal repository:
https://github.com/mpb159753/langchain-opengauss. How can I transfer it to
the langchain-ai org? *Keen to hear your thoughts and make this
integration shine!*

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-11 20:31:39 +00:00
Christophe Bornet
913c896598 core: Add ruff rules FBT001 and FBT002 (#30695)
Add ruff rules
[FBT001](https://docs.astral.sh/ruff/rules/boolean-type-hint-positional-argument/)
and
[FBT002](https://docs.astral.sh/ruff/rules/boolean-default-value-positional-argument/).
Mostly `noqa`s so as not to introduce breaking changes; possible
non-breaking fixes have already been made in a [previous
PR](https://github.com/langchain-ai/langchain/pull/29424).
These rules will prevent new violations from being introduced.
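For reference, the kind of signature these rules flag (illustrative function, not from the codebase):

```python
# FBT001: boolean-typed positional parameter; FBT002: boolean default value.
def fetch(url: str, verbose: bool = False) -> str:  # flagged
    ...

# Preferred: make the flag keyword-only.
def fetch_kwonly(url: str, *, verbose: bool = False) -> str:
    ...
```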
2025-04-11 16:26:33 -04:00
William FH
2803a48661 core[patch]: Share executor for async callbacks run in sync context (#30779)
To avoid having to create ephemeral threads, grab the thread lock, etc.
2025-04-11 10:34:43 -07:00
Sydney Runkle
fdc2b4bcac core[lint]: Use 3.9 formatting for docs and tests (#30780)
Looks like `pyupgrade` was already used here but missed some docs and
tests.

This helps to keep our docs looking professional and up to date.
Eventually, we should lint / format our inline docs.
2025-04-11 10:39:25 -04:00
Sydney Runkle
48affc498b langchain[lint]: use pyupgrade to get to 3.9 standards (#30782) 2025-04-11 10:33:26 -04:00
ccurme
d9b628e764 xai: release 0.2.3 (#30790) 2025-04-11 14:05:11 +00:00
ccurme
9cfb95e621 xai[patch]: support reasoning content (#30758)
https://docs.x.ai/docs/guides/reasoning

```python
from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "xai:grok-3-mini-beta",
    reasoning_effort="low"
)
response = llm.invoke("Hello, world!")
```
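Continuing the snippet above, a hedged way to read the surfaced reasoning (the `additional_kwargs` key name is an assumption):

```python
# Hedged: key name "reasoning_content" is an assumption based on xAI's API.
print(response.additional_kwargs.get("reasoning_content"))
```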
2025-04-11 14:00:27 +00:00
Christophe Bornet
89f28a24d3 core[lint]: Fix typing in test_async_callbacks (#30788) 2025-04-11 07:26:38 -04:00
Sydney Runkle
8c6734325b partners[lint]: run pyupgrade to get code in line with 3.9 standards (#30781)
Using `pyupgrade` to get all `partners` code up to 3.9 standards
(mostly, fixing old `typing` imports).
2025-04-11 07:18:44 -04:00
Jacob Lee
e72f3c26a0 fix(ollama): Remove redundant message from response_metadata (#30778) 2025-04-10 23:12:57 -07:00
Jannik Maierhöfer
f3c3ec9aec docs: add langfuse integration to provider list (#30573)
This PR adds the Langfuse integration to the provider list.
2025-04-10 22:25:42 -04:00
Christophe Bornet
dc19d42d37 core: Specify code when ignoring type issue (ruff PGH003) (#30675)
See https://docs.astral.sh/ruff/rules/blanket-type-ignore/
2025-04-10 22:23:52 -04:00
Paul Czarkowski
68d16d8a07 Community: Add Managed Identity support for Azure AI Search (#30730)
Add Managed Identity support for Azure AI Search

---------

Signed-off-by: Paul Czarkowski <username.taken@gmail.com>
2025-04-10 22:22:58 -04:00
CtrlMj
5103594a2c replace the deprecated initialize_agent in playwright.ipynb with create_react_agent (#30734)
**Description:** Replaced the example using the deprecated
`initialize_agent` function with `create_react_agent` from
`langgraph.prebuilt`

**Issue:** #29277 

**Dependencies:** N/A
**Twitter handle:** N/A
2025-04-10 22:12:12 -04:00
Eugene Yurtsev
e42b3d285a langchain: remove langchain-server script (#30755)
Has been replaced by langsmith a long long time ago
2025-04-10 22:11:42 -04:00
Pol de Font-Réaulx
48cf7c838d feat(community): add oauth2 support for Jira toolkit (#30684)
**Description:** add support for oauth2 in Jira tool by adding the
possibility to pass a dictionary with oauth parameters. I also adapted
the documentation to show this new behavior
2025-04-10 22:04:09 -04:00
Oleg Ovcharuk
b6fe7e8c10 docs: YDB Vector Store docs (#30636)
This PR adds docs about how to use YDB as a vector store

[YDB](https://ydb.tech/) is a versatile open-source distributed SQL
database. It supports [vector
search](https://ydb.tech/docs/en/yql/reference/udf/list/knn) which means
it can be used as a vector store with langchain.

The YDB vector store comes with the
[langchain-ydb](https://pypi.org/project/langchain-ydb/) PyPI package.

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-10 21:33:56 -04:00
湛露先生
7a4ae6fbff community[patch]: simplify cache logic (#30760)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-10 19:20:57 -04:00
ccurme
8e053ac9d2 core[patch]: support customization of backoff parameters in with_retries (#30773)
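A hedged sketch of the customization this enables, assuming the new knob is exposed on `Runnable.with_retry` (the `exponential_jitter_params` name and shape are assumptions based on the PR title):

```python
from langchain_core.runnables import RunnableLambda

flaky = RunnableLambda(lambda x: x)
retrying = flaky.with_retry(
    stop_after_attempt=3,
    wait_exponential_jitter=True,
    exponential_jitter_params={"initial": 2.0},  # assumed customization point
)
```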
Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-04-10 19:18:36 -04:00
amohan
e981a9810d docs: update links in cloudflare docs (#30776)
Thanks for reviewing again. I was notified of some
[links](https://python.langchain.com/api_reference/cloudflare/) not
being correct in the default integrations, so I'm updating them in this PR.
2025-04-10 19:08:18 -04:00
William FH
70532a65f8 Async callback benchmark (#30777) 2025-04-10 15:47:19 -07:00
Sydney Runkle
c6172d167a Only run CodSpeed benchmarks with run-codspeed-benchmarks label (#30774) 2025-04-10 15:48:14 -04:00
amohan
f70df01e01 docs: Update ordering of cloudflare integration examples in providers page (#30768)
Updated the ordering of cloudflare integrations and updated import
examples. Follow up from
https://github.com/langchain-ai/langchain/pull/30749
2025-04-10 15:34:58 -04:00
Sydney Runkle
8f8fea2d7e [performance]: Use hard coded langchain-core version to avoid importlib import (#30744)
This PR aims to reduce import time of `langchain-core` tools by removing
the `importlib.metadata` import previously used in `__init__.py`. This
is the first in a sequence of PRs to reduce import time delays for
`langchain-core` features and structures 🚀.

Because we're now hard coding the version, we need to make sure
`version.py` and `pyproject.toml` stay in sync, so I've added a new CI
job that runs whenever either of those files are modified. [This
run](https://github.com/langchain-ai/langchain/actions/runs/14358012706/job/40251952044?pr=30744)
demonstrates the failure that occurs whenever the version gets out of
sync (thus blocking a PR).
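A minimal sketch of the change (version string is illustrative):

```python
# Before: resolving the version via package metadata triggers the
# importlib.metadata machinery at import time.
# import importlib.metadata
# __version__ = importlib.metadata.version("langchain-core")

# After: a hard-coded constant, kept in sync with pyproject.toml
# by the new CI job.
__version__ = "0.3.52"
```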

Before, note the ~15% of time spent on the `importlib.metadata` and related
imports

<img width="1081" alt="Screenshot 2025-04-09 at 9 06 15 AM"
src="https://github.com/user-attachments/assets/59f405ec-ee8d-4473-89ff-45dea5befa31"
/>

After (note the lack of the `importlib.metadata` time sink):

<img width="1245" alt="Screenshot 2025-04-09 at 9 01 23 AM"
src="https://github.com/user-attachments/assets/9c32e77c-27ce-485e-9b88-e365193ed58d"
/>
2025-04-10 14:15:02 -04:00
Sydney Runkle
cd6a83117c Adding more import time benchmarks for langchain-core (#30770)
Plus minor typo fix in `ChatPromptTemplate` case id.
2025-04-10 11:50:12 -04:00
Chamath K.B. Attanayaka
6c45c9efc3 docs: update clickhouse version in notebook example (#30754)
update clickhouse docker version tag in notebook example to avoid
compatibility issues with clickhouse-connect.
2025-04-10 09:51:54 -04:00
amohan
44b83460b2 docs: Add Cloudflare integrations (#30749)
Description:
This PR adds documentation for the langchain-cloudflare integration
package.

Issue:
N/A

Dependencies:
No new dependencies are required.

Tests and Docs:

Added an example notebook demonstrating the usage of the
langchain-cloudflare package, located in docs/docs/integrations.
Added a new package to libs/packages.yml.

Lint and Format:

Successfully ran make format and make lint.

---------

Co-authored-by: Collier King <collier@cloudflare.com>
Co-authored-by: Collier King <collierking99@gmail.com>
2025-04-10 09:27:23 -04:00
湛露先生
c87a270e5f cookbook: Fix docs typos. (#30763)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-10 09:13:24 -04:00
ccurme
63c16f5ca8 community: deprecate AzureCosmosDBNoSqlVectorSearch in favor of langchain-azure-ai implementation (#30756) 2025-04-09 21:04:16 +00:00
Christophe Bornet
4cc7bc6c93 core: Add ruff rules PLR (#30696)
Add ruff rules [PLR](https://docs.astral.sh/ruff/rules/#refactor-plr)
Except PLR09xxx and PLR2004.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-09 15:15:38 -04:00
célina
68361f9c2d partners: (langchain-huggingface) Embeddings - Integrate Inference Providers and remove deprecated code (#30735)
Hi there! This is a complementary PR to #30733.
This PR introduces support for Hugging Face's serverless Inference
Providers (documentation
[here](https://huggingface.co/docs/inference-providers/index)), allowing
users to specify different providers.

This PR also removes the usage of `InferenceClient.post()` method in
`HuggingFaceEndpointEmbeddings`, in favor of the task-specific
`feature_extraction` method. `InferenceClient.post()` is deprecated and
will be removed in `huggingface_hub` v0.31.0.

## Changes made

- bumped the minimum required version of the `huggingface_hub` package
to ensure compatibility with the latest API usage.
- added a provider field to `HuggingFaceEndpointEmbeddings`, enabling
users to select the inference provider.
- replaced the deprecated `InferenceClient.post()` call in
`HuggingFaceEndpointEmbeddings` with the task-specific
`feature_extraction` method for future-proofing, `post()` will be
removed in `huggingface-hub` v0.31.0.

 All changes are backward compatible.

---------

Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-09 19:05:43 +00:00
Christophe Bornet
98f0016fc2 core: Add ruff rules ARG (#30732)
See https://docs.astral.sh/ruff/rules/#flake8-unused-arguments-arg
2025-04-09 14:39:36 -04:00
Sydney Runkle
66758599a9 [ci]: Quick codspeed.yml tweaks to enable comparisons with master (#30752)
* Only run codspeed logic when `libs/core` is changed (for now; we'll
want to add other benchmarks later)
* Also run on `master` so that we can get a reference :)
2025-04-09 13:13:49 -04:00
theosaurus
d47d6ecbc3 docs: Fix typo in get_separators_for_language method section (#30748)
2025-04-09 13:03:01 -04:00
Sydney Runkle
78ec7d886d [performance]: Adding benchmarks for common langchain-core imports (#30747)
The first in a sequence of PRs focusing on improving performance in
core. We're starting with reducing import times for common structures,
hence the benchmarks here.

The benchmark looks a little bit complicated - we have to use a process
so that we don't suffer from Python's import caching system. I tried
doing manual modification of `sys.modules` between runs, but that's
pretty tricky / hacky to get right, hence the subprocess approach.

Motivated by extremely slow baseline for common imports (we're talking
2-5 seconds):

<img width="633" alt="Screenshot 2025-04-09 at 12 48 12 PM"
src="https://github.com/user-attachments/assets/994616fe-1798-404d-bcbe-48ad0eb8a9a0"
/>

Also added a `make benchmark` command to make local runs easy :).
Currently using wall times so that we can track total time despite using
a manual process.
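
A rough sketch of the subprocess approach described above (the real benchmark lives in `libs/core`; the helper name is illustrative):

```python
import subprocess
import sys
import time

def cold_import_walltime(module: str) -> float:
    """Time an import in a fresh interpreter to bypass sys.modules caching."""
    start = time.monotonic()
    subprocess.run([sys.executable, "-c", f"import {module}"], check=True)
    return time.monotonic() - start

print(f"import langchain_core: {cold_import_walltime('langchain_core'):.2f}s")
```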
2025-04-09 13:00:15 -04:00
German Molina
5fb261ce27 community: Google Vertex AI Search now returns the website title as part of the document metadata (#30688)
Google vertex ai search will now return the title of the found website
as part of the document metadata, if available.

Thank you for contributing to LangChain!

- **Description**: Vertex AI Search can be used to index websites and
then develop chatbots that use these websites to answer questions. At
present, the document metadata includes an `id` and `source` (which is
the URL). While the URL is enough to create a link, the ID is not
descriptive enough to show users. Therefore, I propose we return `title`
as well, when available (e.g., it will not be available in `.txt`
documents found during the website indexing).
- **Issue**: No bug in particular, but this makes the metadata more
useful.
- **Dependencies**: None
- I do not use twitter.

Format, Lint and Test seem to be all good.
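
A hedged sketch of reading the new metadata field, assuming the `VertexAISearchRetriever` from `langchain-google-community` (project and data store IDs are placeholders):

```python
from langchain_google_community import VertexAISearchRetriever

retriever = VertexAISearchRetriever(
    project_id="my-project", location_id="global", data_store_id="my-data-store"
)
for doc in retriever.invoke("pricing"):
    # `title` is now included when available, alongside `id` and `source`
    print(doc.metadata.get("title"), doc.metadata["source"])
```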
2025-04-09 08:54:06 -04:00
theosaurus
636d831d27 docs: Fix typo in 'Query re-writing' section (#30736)
giulia_p_lib
deec538335 docs: fix small typo in map_rerank_docs_chain.ipynb (#30738)
- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
    - **Description:** fixed a minor typo in map_rerank_docs_chain.ipynb
2025-04-09 08:49:37 -04:00
Akshay Dongare
164e606cae docs: fix import path and update LiteLLM integration docs (#30685)
- [x] **PR title**: "docs: Update import path and LiteLLM integration
docs"
- Update the old import path for `ChatLiteLLM` to reflect the new export
from
[`__init__.py`](https://github.com/Akshay-Dongare/langchain-litellm/blob/main/langchain_litellm/__init__.py)
in
[`langchain-litellm`](https://github.com/Akshay-Dongare/langchain-litellm)
package

- [x] **PR message**:
    - **Description:** 
    - 🔗 **Follow-up to**: PR #30637
    - 🔧 **Fixes**: #30368
- 💬 **Based on this comment from** @ccurme:
https://github.com/langchain-ai/langchain/pull/30637#discussion_r2029084320
 


- [x] **About me**
🔗 LinkedIn:
[akshay-dongare](https://www.linkedin.com/in/akshay-dongare/)
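
The updated import path in practice (model name is illustrative):

```python
from langchain_litellm import ChatLiteLLM  # was: langchain_community.chat_models

llm = ChatLiteLLM(model="gpt-4o-mini")
print(llm.invoke("Hello!").content)
```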
2025-04-08 13:04:17 -04:00
Ikko Eltociear Ashimine
5686fed40b docs: update yellowbrick.ipynb (#30729)
retreival -> retrieval
2025-04-08 11:56:35 -04:00
Sydney Runkle
4556b81b1d Clean up numpy dependencies and speed up 3.13 CI with numpy>=2.1.0 (#30714)
Generally, this PR is CI performance focused + aims to clean up some
dependencies at the same time.

1. Unpins upper bounds for `numpy` in all `pyproject.toml` files where
`numpy` is specified
2. Requires `numpy >= 2.1.0` for Python 3.13 and `numpy > v1.26.0` for
Python 3.12, plus a `numpy` min version bump for `chroma`
3. Speeds up CI by minutes - linting on Python 3.13, installing `numpy <
2.1.0` was taking [~3
minutes](https://github.com/langchain-ai/langchain/actions/runs/14316342925/job/40123305868?pr=30713),
now the entire env setup takes a few seconds
4. Deleted the `numpy` test dependency from partners where that was not
used, specifically `huggingface`, `voyageai`, `xai`, and `nomic`.

It's a bit unfortunate that `langchain-community` depends on `numpy`, we
might want to try to fix that in the future...

Closes https://github.com/langchain-ai/langchain/issues/26026
Fixes https://github.com/langchain-ai/langchain/issues/30555
2025-04-08 09:45:07 -04:00
ccurme
163730aef4 docs: update SQL QA prompt (#30728)
Resolves https://github.com/langchain-ai/langchain/issues/30724

The [prompt in
langchain-hub](https://smith.langchain.com/hub/langchain-ai/sql-query-system-prompt)
used in this guide was composed of just a system message, but the guide
did not add a human message to it. This was incompatible with some
providers (and is generally not a typical usage pattern).

The prompt in prompt hub has been updated to split the question into a
separate HumanMessage. Here we update the guide to reflect this.
2025-04-08 09:42:49 -04:00
湛露先生
9cbe91896e Fix deepseek release tag, as the name was updated. (#30717)

Signed-off-by: zhanluxianshen <zhanluxianshen@163.com>
2025-04-08 08:43:16 -04:00
Nithish Raghunandanan
893942651b docs: Update couchbase vector store docs (#30710)
-  **Update LangChain-Couchbase documentation**
- Deprecate `CouchbaseVectorStore` in favor of `CouchbaseSearchVectorStore`

- [x] **Lint and test**
2025-04-07 18:45:14 -04:00
Eugene Yurtsev
3ce0587199 ci: remove unused debug action (#30713)
Removing an unused action
2025-04-07 22:32:37 +00:00
ccurme
a2bec5f2e5 ollama: release 0.3.1 (#30716) 2025-04-07 20:31:25 +00:00
ccurme
e3f15f0a47 ollama[patch]: add model_name to response metadata (#30706)
Fixes [this standard
test](https://python.langchain.com/api_reference/standard_tests/integration_tests/langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.html#langchain_tests.integration_tests.chat_models.ChatModelIntegrationTests.test_usage_metadata).
2025-04-07 16:27:58 -04:00
ccurme
e106e9602f groq[patch]: add retries to integration tests (#30707)
Tool-calling tests started intermittently failing with
> groq.APIError: Failed to call a function. Please adjust your prompt.
See 'failed_generation' for more details.
2025-04-07 12:45:53 -04:00
aaronlaitner
4f9f97bd12 docs: replaced initialize_agent with create_react_agent in dalle_image_generator.ipynb (#30697)
## Description:

Replaced deprecated 'initialize_agent' with 'create_react_agent' in
dalle_image_generator.ipynb
## Issue:
#29277

## Dependencies:
None

## Twitter handle:
@Thatopman

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-07 13:33:52 +00:00
Mohammad Mohtashim
e935da0b12 ChatTongyi reasoning_content fix (#30694)
- **Description:** Small fix for `reasoning_content` key
- **Issue:** #30689
2025-04-07 09:27:33 -04:00
Tin Lai
4d03ba4686 langchain_qdrant: fix showing the missing sparse vector name (#30701)
**Description:** The error message was supposed to display the missing
vector name, but instead, it includes only the existing collection
configs.

This simple PR just includes the correct variable name, so that the user
knows the requested vector does not exist in the collection.


Signed-off-by: Tin Lai <tin@tinyiu.com>
2025-04-07 09:19:08 -04:00
Eugene Yurtsev
30af9b8166 Update bug-report.yml (#30680)
Update bug report template!
2025-04-04 16:47:50 -04:00
jessicaou
2712ecffeb Update Contributor Docs (#30679)
Deletes statement on integration marketing
2025-04-04 16:35:11 -04:00
Ninad Sinha
a3671ceb71 docs: Add tools for hyperbrowser (#30606)
Description

This PR updates the docs for the
[langchain-hyperbrowser](https://pypi.org/project/langchain-hyperbrowser/)
package. It adds a few tools
 - Scrape Tool
 - Crawl Tool
 - Extract Tool
 - Browser Agents
   - Claude Computer Use
   - OpenAI CUA
   - Browser Use 

[Hyperbrowser](https://hyperbrowser.ai/) is a platform for running and
scaling headless browsers. It lets you launch and manage browser
sessions at scale and provides easy-to-use solutions for any web scraping
need, such as scraping a single page or crawling an entire site.

Issue
None

Dependencies
None

Twitter Handle
`@hyperbrowser`
2025-04-04 16:02:47 -04:00
Christophe Bornet
6650b94627 core: Add ruff rules PYI (#29335)
See https://docs.astral.sh/ruff/rules/#flake8-pyi-pyi

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-04 19:59:44 +00:00
Philippe PRADOS
d8e3b7667f community[patch]: Fix empty producer in PDF Parsers (#30620)
Fix an issue where a PDF file without a “producer” entry in its metadata raised an exception.
2025-04-04 15:53:49 -04:00
Christophe Bornet
f0159c7125 core: Add ruff rules PGH (except PGH003) (#30656)
Add ruff rules PGH: https://docs.astral.sh/ruff/rules/#pygrep-hooks-pgh
Except PGH003 which will be dealt in a dedicated PR.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-04-04 19:53:27 +00:00
Jorge Ángel Juárez Vázquez
2491237473 docs: Add Google Calendar documentation (#30633)
## Docs: Add Google Calendar Toolkit Documentation

### Description:
This PR adds documentation for the Google Calendar Toolkit as part of
the `langchain-google` repository. Refer to the related PR: [community:
Add Google Calendar
Toolkit](https://github.com/langchain-ai/langchain-google/pull/688).

### Issue:
N/A 

### Twitter handle:
@jorgejrzz
2025-04-04 15:53:03 -04:00
Armaanjeet Singh Sandhu
7c2468f36b core: Fix handler removal in BaseCallbackManager (Fixes #30640) (#30659)
**Description:**  
Fixed a bug in `BaseCallbackManager.remove_handler()` that caused a
`ValueError` when removing a handler added via the constructor's
`handlers` parameter. The issue occurred because handlers passed to the
constructor were added only to the `handlers` list and not automatically
to `inheritable_handlers` unless explicitly specified. However,
`remove_handler()` attempted to remove the handler from both lists
unconditionally, triggering a `ValueError` when it wasn't in
`inheritable_handlers`.

The fix ensures the method checks for the handler’s presence in each
list before attempting removal, making it more robust while preserving
its original behavior.
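
A minimal reproduction of the scenario the fix addresses (a sketch; a bare `BaseCallbackHandler` stands in for a real handler):

```python
from langchain_core.callbacks import BaseCallbackHandler, BaseCallbackManager

handler = BaseCallbackHandler()
# The constructor adds to `handlers` only, not `inheritable_handlers`
manager = BaseCallbackManager(handlers=[handler])

# Previously raised ValueError because the handler was absent from
# `inheritable_handlers`; now each list is checked before removal.
manager.remove_handler(handler)
```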

**Issue:** Fixes #30640

**Dependencies:** None

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-04 15:45:15 -04:00
Mohammad Mohtashim
bff56c5fa6 community[patch]: Redundant Parser checker for Webbaseloader (#30632)
- **Description:** We do not need to set the parser in `scrape` since
that is already done in `_scrape`
- **Issue:** #30629; not directly related, but this makes sure the XML
parser is used
2025-04-04 14:11:26 -04:00
Christophe Bornet
150ac0cb79 core: Add ruff rules DTZ (#30657)
Add ruff rules DTZ:
https://docs.astral.sh/ruff/rules/#flake8-datetimez-dtz
2025-04-04 13:43:47 -04:00
Christophe Bornet
5e418c2666 core: Rework pydantic version checks (#30653)
This pull request includes various changes to the `langchain_core`
library, focusing on improving compatibility with different versions of
Pydantic. The primary change involves replacing checks for Pydantic
major versions with boolean flags, which simplifies the code and
improves readability.
This also solves ruff rule checks for
[RUF048](https://docs.astral.sh/ruff/rules/map-int-version-parsing/) and
[PLR2004](https://docs.astral.sh/ruff/rules/magic-value-comparison/).

Key changes include:

### Compatibility Improvements:
*
[`libs/core/langchain_core/output_parsers/json.py`](diffhunk://#diff-5add0cf7134636ae4198a1e0df49ee332ae0c9123c3a2395101e02687c717646L22-R24):
Replaced `PYDANTIC_MAJOR_VERSION` with `IS_PYDANTIC_V1` to check for
Pydantic version 1.
*
[`libs/core/langchain_core/output_parsers/pydantic.py`](diffhunk://#diff-2364b5b4aee01c462aa5dbda5dc3a877dcd20f29df173ad540dc8adf8b192361L14-R14):
Updated version checks from `PYDANTIC_MAJOR_VERSION` to `IS_PYDANTIC_V2`
in the `PydanticOutputParser` class.
[[1]](diffhunk://#diff-2364b5b4aee01c462aa5dbda5dc3a877dcd20f29df173ad540dc8adf8b192361L14-R14)
[[2]](diffhunk://#diff-2364b5b4aee01c462aa5dbda5dc3a877dcd20f29df173ad540dc8adf8b192361L27-R27)

### Utility Enhancements:
*
[`libs/core/langchain_core/utils/pydantic.py`](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896R23):
Introduced `IS_PYDANTIC_V1` and `IS_PYDANTIC_V2` flags and deprecated
the `get_pydantic_major_version` function. Updated various functions to
use these flags instead of version numbers.
[[1]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896R23)
[[2]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896R42-R78)
[[3]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L90-R89)
[[4]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L104-R101)
[[5]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L120-R122)
[[6]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L135-R132)
[[7]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L149-R151)
[[8]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L164-R161)
[[9]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L248-R250)
[[10]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L330-R335)
[[11]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L356-R357)
[[12]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L393-R390)
[[13]](diffhunk://#diff-ff28020c5f1073a8b63bcd9d8b756a187fd682cb81935295120c63b207071896L403-R400)

### Test Updates:
*
[`libs/core/tests/unit_tests/output_parsers/test_openai_tools.py`](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L19-R22):
Updated tests to use `IS_PYDANTIC_V1` and `IS_PYDANTIC_V2` for version
checks.
[[1]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L19-R22)
[[2]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L532-R535)
[[3]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L567-R570)
[[4]](diffhunk://#diff-694cc0318edbd6bbca34f53304934062ad59ba9f5a788252ce6c5f5452489d67L602-R605)
*
[`libs/core/tests/unit_tests/prompts/test_chat.py`](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84R7):
Replaced version tuple checks with `PYDANTIC_VERSION` comparisons.
[[1]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84R7)
[[2]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84L35-R38)
[[3]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84L924-R927)
[[4]](diffhunk://#diff-3e60e744842086a4f3c4b21bc83e819c3435720eab210078e77e2430fb8c7e84L935-R938)
*
[`libs/core/tests/unit_tests/runnables/test_graph.py`](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dR3):
Simplified version checks using `PYDANTIC_VERSION`.
[[1]](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dR3)
[[2]](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dL15-R18)
[[3]](diffhunk://#diff-99a290330ef40103d0ce02e52e21310d6fadea142bfdea13c94d23fc81c0bb5dL234-L239)
*
[`libs/core/tests/unit_tests/runnables/test_runnable.py`](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L18-R20):
Introduced `PYDANTIC_VERSION_AT_LEAST_29` and
`PYDANTIC_VERSION_AT_LEAST_210` for more readable version checks.
[[1]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L18-R20)
[[2]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L92-R99)
[[3]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L230-R233)
[[4]](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L652-R655)
2025-04-04 13:42:30 -04:00
Christophe Bornet
43b5dc7191 core: Add ruff rules TD and FIX (#30654)
Add ruff rules:
* FIX: https://docs.astral.sh/ruff/rules/#flake8-fixme-fix
* TD: https://docs.astral.sh/ruff/rules/#flake8-todos-td

Code cleanup:

*
[`libs/core/langchain_core/outputs/chat_generation.py`](diffhunk://#diff-a1017ee46f58fa4005b110ffd4f8e1fb08f6a2a11d6ca4c78ff8be641cbb89e5L56-R56):
Removed the "HACK" prefix from a comment in the `set_text` method.

Configuration adjustments:

*
[`libs/core/pyproject.toml`](diffhunk://#diff-06baaee12b22a370fef9f170c9ed13e2727e377d3b32f5018430f4f0a39d3537R85-R93):
Added new rules `FIX002`, `TD002`, and `TD003` to the ignore list.
*
[`libs/core/pyproject.toml`](diffhunk://#diff-06baaee12b22a370fef9f170c9ed13e2727e377d3b32f5018430f4f0a39d3537L102-L108):
Removed the `FIX` and `TD` rules from the ignore list.

Test refinement:

*
[`libs/core/tests/unit_tests/runnables/test_runnable.py`](diffhunk://#diff-06bed920c0dad0cfd41d57a8d9e47a7b56832409649c10151061a791860d5bb5L3231-R3232):
Updated a TODO comment to improve clarity in the `test_map_stream`
function.
2025-04-04 13:40:42 -04:00
ccurme
a007c57285 docs: update package registry sort order (#30677) 2025-04-04 13:12:39 -04:00
Sydney Runkle
33ed7c31da docs: fix perplexity install instructions in ChatPerplexity docstring (#30676)
* `openai` install no longer needs to be done manually
2025-04-04 12:58:18 -04:00
Dhruvajyoti Sarma
f9bb5ec5d0 feature: removed pandas dataframe dependency for similarity_search when using DuckDB as vector store (#30445)
- [ ] **PR title**: "community: Removes pandas dependency for using
DuckDB for similarity search"


- [ ] **PR message**: 
- **Description:** Removes pandas dependency for using DuckDB for
similarity search. The old function still exists as
`similarity_search_pd`, while the new one is at `similarity_search` and
requires no code changes. Return format remains the same.
    - **Issue:** Issue #29933 and update on PR #30435 
    - **Dependencies:** No dependencies
2025-04-04 12:19:18 -04:00
Akshay Dongare
f79473b752 Solved issue Implement langchain-litellm #30368 (#30637)
**PR title**: 
- [x] 1. docs: docs/docs/integrations/providers/LiteLLM.md
- [x] 2. docs: docs/docs/integrations/chat/litellm.ipynb
- [x] 3. libs: libs/packages.yml

- [x] **PR message**:
    - **Description:** Implement langchain-litellm
    - **Issue:** the issue #30368 
    - **Twitter handle:** akshay_d02
    - **LinkedIn Handle** https://linkedin.com/in/akshay-dongare 

- [x] **Add tests and docs**: Done

- [x] **Lint and test**: Done

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-04 16:12:10 +00:00
Yiğit Bekir Kaya, PhD
87e82fe1e8 Added langchain-qwq package documentation (Alibaba Cloud) (#30628)
LangChain QwQ allows non-Tongyi users to access thinking models with
extra capabilities, serving as an extension to Alibaba Cloud.

Hi @ccurme I'm back with the updated PR this time with documentation and
a finished package.




- **Description:** adds documentation of `langchain-qwq` integration
package. Also adds it to Alibaba Cloud provider
- **Issue:** #30580 #30317 #30579
- **Dependencies:** openai, json-repair
- **Twitter handle:** YigitBekir


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2025-04-04 11:47:14 -04:00
Andrew Benton
4e7a9a7014 community: Add support for custom runtimes to Riza tools (#30664)
**Description:**
Adds support for Riza custom runtimes to the two Riza code interpreter
tools, allowing users to run LLM-generated code that depends on
libraries outside stdlib.
**Issue:** N/A
**Dependencies:** None
**Twitter handle:** @rizaio
2025-04-04 11:03:14 -04:00
diego dupin
aa37893c00 MariaDB vector store documentation addition (#30229)
### New Feature

Since version 11.7.1, MariaDB supports vectors. This is a very fast
implementation (see [this performance
blog](https://smalldatum.blogspot.com/2025/01/evaluating-vector-indexes-in-mariadb.html)).
The goal is to support MariaDB with LangChain.

Implementation is done in
https://github.com/mariadb-corporation/langchain-mariadb, published in
https://pypi.org/project/langchain-mariadb/

This PR covers the documentation addition.
 

(initial PR https://github.com/langchain-ai/langchain/pull/29989)

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Oskar Stark <oskarstark@googlemail.com>
2025-04-04 14:56:25 +00:00
Sydney Runkle
1cdea6ab07 langchain-community: release 0.3.21 (#30673) 2025-04-04 14:14:50 +00:00
Sydney Runkle
901dffe06b langchain: release 0.3.23 (#30670)
* Bump `text-splitters` min version
* Bump `langchain-core` min version
* Bump `langchain` version 🚀
2025-04-04 10:06:29 -04:00
ccurme
0c2c8c36c1 text-splitters: release 0.3.8 (#30671) 2025-04-04 09:58:45 -04:00
ccurme
59d508a2ee openai[patch]: make computer test more reliable (#30672) 2025-04-04 13:53:59 +00:00
Sydney Runkle
c235328b39 Revert "update langchain version and bump min core v"
This reverts commit d0f154dbaa.
2025-04-04 09:31:51 -04:00
Sydney Runkle
d0f154dbaa update langchain version and bump min core v 2025-04-04 09:27:49 -04:00
Sydney Runkle
32cd70d7d2 release: bump core to v0.3.51 (#30668) 2025-04-04 13:23:09 +00:00
Max Forsey
18cf457eec langchain-runpod integration (#30648)
## Description:

This PR adds the necessary documentation for the `langchain-runpod`
partner package integration. It includes:

* A provider page (`docs/docs/integrations/providers/runpod.ipynb`)
explaining the overall setup.
* An LLM component page (`docs/docs/integrations/llms/runpod.ipynb`)
detailing the `RunPod` class usage.
* A Chat Model component page
(`docs/docs/integrations/chat/runpod.ipynb`) detailing the `ChatRunPod`
class usage, including a feature support table.

These documentation files reflect the latest features of the
`langchain-runpod` package (v0.2.0+) such as async support and API
polling logic.

This work also addresses the review feedback provided on the previous
attempt in PR #30246 by:
*   Removing all TODOs from documentation.
*   Adding the required links between provider and component pages.
*   Completing the feature support table in the chat documentation.
*   Linking to the source code on GitHub for API reference.

Finally, it registers the `langchain-runpod` package in
`libs/packages.yml`.

## Dependencies:

None added to the core LangChain repository by these documentation
changes. The required dependency (`langchain-runpod`) is managed as a
separate package.

## Twitter handle:

@runpod_io

---------

Co-authored-by: Max Forsey <maxpod@maxpod.local>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 23:57:06 +00:00
NikeHop
9c03cd5775 Fix tool description in serpapi.ipynb (#30660)
Thank you for contributing to LangChain!

- [x] Fix Tool description of SerpAPI tool: "docs: Fix SerpAPI tool
description"



- [ ] Fix SerpAPI tool description: 
- The tool description and name in the example initialization of the
SerpAPI tool were still those of the Python REPL tool.
    - @RLHoeppi

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 23:36:29 +00:00
Sydney Runkle
af66ab098e Adding Perplexity extra and deprecating the community version of ChatPerplexity (#30649)
Plus, some accompanying docs updates

Some compelling usage:

```py
from langchain_perplexity import ChatPerplexity


chat = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")
response = chat.invoke(
    "What were the most significant newsworthy events that occurred in the US recently?",
    extra_body={"search_recency_filter": "week"},
)
print(response.content)
# > Here are the top significant newsworthy events in the US recently: ...
```

Also, some confirmation of structured outputs:

```py
from langchain_perplexity import ChatPerplexity
from pydantic import BaseModel


class AnswerFormat(BaseModel):
    first_name: str
    last_name: str
    year_of_birth: int
    num_seasons_in_nba: int


messages = [
    {"role": "system", "content": "Be precise and concise."},
    {
        "role": "user",
        "content": (
            "Tell me about Michael Jordan. "
            "Please output a JSON object containing the following fields: "
            "first_name, last_name, year_of_birth, num_seasons_in_nba. "
        ),
    },
]

llm = ChatPerplexity(model="llama-3.1-sonar-small-128k-online")
structured_llm = llm.with_structured_output(AnswerFormat)
response = structured_llm.invoke(messages)
print(repr(response))
#> AnswerFormat(first_name='Michael', last_name='Jordan', year_of_birth=1963, num_seasons_in_nba=15)
```
2025-04-03 14:29:17 -04:00
ccurme
b8929e3d5f docs: add image generation example to Google genai docs (#30650) 2025-04-03 14:21:54 -04:00
ccurme
374769e8fe core[patch]: log information from certain errors (#30626)
Some exceptions raised by SDKs include information in httpx responses
(see for example
[OpenAI](https://github.com/openai/openai-python/blob/main/src/openai/_exceptions.py)).
Here we trace information from those exceptions.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2025-04-03 16:45:19 +00:00
Sydney Runkle
17a9cd61e9 Bump langchain-core version in perplexity's pyproject.toml (#30647)
Blocking v0.1.0 release of `langchain-perplexity`
2025-04-03 16:19:10 +00:00
Sydney Runkle
3814bd1ea7 partners: Add Perplexity Chat Integration (#30618)
Perplexity's importance in the space has been growing, so we think it's
time to add an official integration!

Note: following the release of `langchain-perplexity` to `pypi`, we
should be able to add `perplexity` as an extra in
`libs/langchain/pyproject.toml`, but we're blocked by a circular import
for now.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-04-03 16:09:14 +00:00
vgrfl
87c02a1aff docs: Fixed a typo in 'Google AI vs Google Cloud Vertex AI' section (#30642)
**Description:** Corrected 'encription' spelling to 'encryption'
2025-04-03 09:04:29 -04:00
Alejandro Rodríguez
884125e129 community: support usage_metadata for litellm (#30625)
Support "usage_metadata" for LiteLLM. 

2025-04-02 19:45:15 -04:00
Jacob Lee
01d0cfe450 docs: Remove TODO from Ollama docs page (#30627) 2025-04-02 22:59:15 +00:00
Christophe Bornet
f241fd5c11 core: Add ruff rules RET (#29384)
See https://docs.astral.sh/ruff/rules/#flake8-return-ret
All auto-fixes
2025-04-02 16:59:56 -04:00
Eugene Yurtsev
9ae792f56c core: 0.3.50 release (#30623)
0.3.50 release
2025-04-02 14:46:23 -04:00
Christophe Bornet
ccc3d32ec8 core: Add ruff rules for Pylint PLC (Convention) and PLE (Errors) (#29286)
See https://docs.astral.sh/ruff/rules/#pylint-pl
2025-04-02 10:58:03 -04:00
ccurme
fe0fd9dd70 openai[patch]: upgrade tiktoken and fix test (#30621)
Related to https://github.com/langchain-ai/langchain/issues/30344

https://github.com/langchain-ai/langchain/pull/30542 introduced an
erroneous test for token counts for o-series models. tiktoken==0.8 does
not support o-series models in
`tiktoken.encoding_for_model(model_name)`, and this is the version of
tiktoken we had in the lock file. So we would default to `cl100k_base`
for o-series, which is the wrong encoding model. The test tested against
this wrong encoding (so it passed with tiktoken 0.8).

Here we update tiktoken to 0.9 in the lock file, and fix the expected
counts in the test. Verified that we are pulling
[o200k_base](https://github.com/openai/tiktoken/blob/main/tiktoken/model.py#L8),
as expected.
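
For illustration, with tiktoken 0.9 the o-series models resolve to the expected encoding:

```python
import tiktoken

enc = tiktoken.encoding_for_model("o1")
print(enc.name)  # "o200k_base"; tiktoken 0.8 did not recognize o-series models here
```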
2025-04-02 10:44:48 -04:00
oxy-tg
38807871ec docs: Add Oxylabs integration (#30591)
Description:
This PR adds documentation for the langchain-oxylabs integration
package.

The documentation includes instructions for configuring Oxylabs
credentials and provides example code demonstrating how to use the
package.

Issue:
N/A

Dependencies:
No new dependencies are required.

Tests and Docs:

Added an example notebook demonstrating the usage of the
Langchain-Oxylabs package, located in docs/docs/integrations.
Added a provider page in docs/docs/providers.
Added a new package to libs/packages.yml.

Lint and Test:

Successfully ran make format, make lint, and make test.
2025-04-02 14:40:32 +00:00
ccurme
816492e1d3 openai: release 0.3.12 (#30616) 2025-04-02 13:20:15 +00:00
Bagatur
111dd90a46 openai[patch]: support structured output and tools (#30581)
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-02 09:14:02 -04:00
Karol Zmorski
32f7695809 docs: Little update in sample notebook with WatsonxToolkit (#30614)
**Description:**

- Updated sample notebook with valid tools.
2025-04-02 09:08:29 -04:00
Mahir Shah
9d3262c7aa core: Propagate config_factories in RunnableBinding (#30603)
- **Description:** Propagates config_factories when calling decoration
methods for RunnableBinding--e.g. bind, with_config, with_types,
with_retry, and with_listeners. This ensures that configs attached to
the original RunnableBinding are kept when creating the new
RunnableBinding and the configs are merged during invocation. Picks up
where #30551 left off.
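
A small sketch of the behavior this enables (listener and tag values are illustrative):

```python
from langchain_core.runnables import RunnableLambda

# with_listeners attaches its listeners via a config factory internally
base = RunnableLambda(lambda x: x + 1).with_listeners(
    on_end=lambda run: print("done")
)
# Decoration methods like with_config()/bind() now carry that factory along,
# so the listener still fires on the derived runnable.
derived = base.with_config(tags=["demo"])
derived.invoke(1)
```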
  - **Issue:** #30531

Co-authored-by: ccurme <chester.curme@gmail.com>
2025-04-01 18:03:58 -04:00
ccurme
8a69de5c24 openai[patch]: ignore file blocks when counting tokens (#30601)
OpenAI does not appear to document how it transforms PDF pages to
images, which determines how tokens are counted:
https://platform.openai.com/docs/guides/pdf-files?api-mode=chat#usage-considerations

Currently these block types raise ValueError inside
`get_num_tokens_from_messages`. Here we update to generate a warning and
continue.
2025-04-01 15:29:33 -04:00
Christophe Bornet
558191198f core: Add ruff rule FBT003 (boolean-trap) (#29424)
See
https://docs.astral.sh/ruff/rules/boolean-positional-value-in-call/#boolean-positional-value-in-call-fbt003
This PR also fixes some FBT001/002 in private methods but does not
enforce these rules globally atm.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-01 17:40:12 +00:00
Christophe Bornet
4f8ea13cea core: Add ruff rules PERF (#29375)
See https://docs.astral.sh/ruff/rules/#perflint-perf
2025-04-01 13:34:56 -04:00
Christophe Bornet
8a33402016 core: Add ruff rules PT (pytest) (#29381)
See https://docs.astral.sh/ruff/rules/#flake8-pytest-style-pt
2025-04-01 13:31:07 -04:00
Ben Faircloth
6896c863e8 docs: add seekrflow chat model integration docs (#30596)
### **PR title**  
`docs: add SeekrFlow integration notebook`

---

### 💬 **PR message**  
- **Description:**  
This PR adds an integration notebook for
[`ChatSeekrFlow`](https://pypi.org/project/langchain-seekrflow/)
under `docs/docs/integrations/chat/`. Per LangChain’s guidance,
SeekrFlow has been published as a standalone OSS package
(`langchain-seekrflow`) rather than as a direct community integration.
This notebook ensures discoverability, demonstration, and testability of
the integration within LangChain’s documentation structure.

- **Issue:**  
N/A – this is a new integration contribution aligned with LangChain’s
external package policy.

- **Dependencies:**  
- [`langchain-seekrflow`](https://pypi.org/project/langchain-seekrflow) (published to PyPI)
- [`seekrai`](https://pypi.org/project/seekrai/) (SeekrFlow client SDK)

- **Twitter handle (optional):**  
  @seekrtechnology

---------

Co-authored-by: Ben Faircloth <bfaircloth@seekr.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-04-01 13:18:01 -04:00
Christophe Bornet
768e4f695a core: Add ruff rules S110 and S112 (#30599) 2025-04-01 13:17:22 -04:00
Christophe Bornet
88b4233fa1 core: Add ruff rules D (docstring) (#29406)
This ensures that the code is properly documented:
https://docs.astral.sh/ruff/rules/#pydocstyle-d

Related to #21983
2025-04-01 13:15:45 -04:00
Andras L Ferenczi
64df60e690 community[minor]: Add custom sitemap URL parameter to GitbookLoader (#30549)
## Description
This PR adds a new `sitemap_url` parameter to the `GitbookLoader` class
that allows users to specify a custom sitemap URL when loading content
from a GitBook site. This is particularly useful for GitBook sites that
use non-standard sitemap file names like `sitemap-pages.xml` instead of
the default `sitemap.xml`.
The standard `GitbookLoader` assumes that the sitemap is located at
`/sitemap.xml`, but some GitBook instances (including GitBook's own
documentation) use different paths for their sitemaps. This parameter
makes the loader more flexible and helps users extract content from a
wider range of GitBook sites.
## Issue
Fixes bug
[30473](https://github.com/langchain-ai/langchain/issues/30473) where
the `GitbookLoader` would fail to find pages on GitBook sites that use
custom sitemap URLs.
## Dependencies
No new dependencies required.
*I've added*:
* Unit tests to verify the parameter works correctly
* Integration tests to confirm the parameter is properly used with real
GitBook sites
* Updated docstrings with parameter documentation
The changes are fully backward compatible, as the parameter is optional
with a sensible default.
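
A minimal sketch of the new parameter (URLs are illustrative):

```python
from langchain_community.document_loaders import GitbookLoader

loader = GitbookLoader(
    "https://docs.gitbook.com",
    load_all_paths=True,
    # Override the default `/sitemap.xml` for sites with custom sitemap paths
    sitemap_url="https://docs.gitbook.com/sitemap-pages.xml",
)
docs = loader.load()
```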

---------

Co-authored-by: andrasfe <andrasf94@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2025-04-01 16:17:21 +00:00
Christophe Bornet
fdda1aaea1 core: Accept ALL ruff rules with exclusions (#30595)
This pull request updates the `pyproject.toml` configuration file to
modify the linting rules and ignored warnings for the project. The most
important changes include switching to a more comprehensive selection of
linting rules and updating the list of ignored rules to better align
with the project's requirements.

Linting rules update:

* Changed the `select` option to include all available linting rules by
setting it to `["ALL"]`.

Ignored rules update:

* Updated the `ignore` option to include specific rules that interfere
with the formatter, are incompatible with Pydantic, or are temporarily
excluded due to project constraints.
2025-04-01 11:17:51 -04:00
Kacper Włodarczyk
26a3256fc6 community[major]: DynamoDBChatMessageHistory bulk add messages, raise errors (#30572)
This PR addresses two key issues:

- **Prevent history errors from failing silently**: Previously, errors
in message history were only logged and not raised, which can lead to
inconsistent state and downstream failures (e.g., ValidationError from
Bedrock due to malformed message history). This change ensures that such
errors are raised explicitly, making them easier to detect and debug.
(Side note: I’m using AWS Lambda Powertools Logger but hadn’t configured
it properly with the standard Python logger—my bad. If the error had
been raised, I would’ve seen it in the logs 😄) This is a **BREAKING
CHANGE**

- **Add messages in bulk instead of iteratively**: This introduces a
custom add_messages method to add all messages at once. The previous
approach failed silently when individual messages were too large,
resulting in partial history updates and inconsistent state. With this
change, either all messages are added successfully, or none are—helping
avoid obscure history-related errors from Bedrock.
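
A short sketch of the bulk path (table and session names are illustrative):

```python
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="user-123")
# All messages are written in one call: either the full batch is persisted
# or an error is raised, instead of a silent partial update.
history.add_messages([HumanMessage(content="Hi"), AIMessage(content="Hello!")])
```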

---------

Co-authored-by: Kacper Wlodarczyk <kacper.wlodarczyk@chaosgears.com>
2025-04-01 11:13:32 -04:00
Olexandr88
8c8bca68b2 docs: edited the badge to an acceptable size (#30586)
2025-04-01 07:17:12 -04:00
Armaanjeet Singh Sandhu
4bbc249b13 community: Fix attribute access for transcript text in YoutubeLoader (Fixes #30309) (#30582)
**Description:** 
Fixes a bug in the YoutubeLoader where FetchedTranscript objects were
not properly processed. The loader was only extracting the 'text'
attribute from FetchedTranscriptSnippet objects while ignoring 'start'
and 'duration' attributes. This would cause a TypeError when the code
later tried to access these missing keys, particularly when using the
CHUNKS format or any code path that needed timestamp information.

This PR modifies the conversion of FetchedTranscriptSnippet objects to
include all necessary attributes, ensuring that the loader works
correctly with all transcript formats.

**Issue:** Fixes #30309

**Dependencies:** None

**Testing:**
- Tested the fix with multiple YouTube videos to confirm it resolves the
issue
- Verified that both regular loading and CHUNKS format work correctly
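
For reference, a sketch of the CHUNKS code path that exercised the bug (video URL is illustrative):

```python
from langchain_community.document_loaders import YoutubeLoader
from langchain_community.document_loaders.youtube import TranscriptFormat

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=QsYGlZkevEg",
    transcript_format=TranscriptFormat.CHUNKS,  # needs `start`/`duration` per snippet
    chunk_size_seconds=60,
)
docs = loader.load()  # previously raised TypeError on the missing keys
```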
2025-04-01 07:13:06 -04:00
Ivan Brko
ecff055096 community[minor]: Improve Brave Search Tool, allow api key in env var (#30364)
- **Description:** 

- Make Brave Search Tool consistent with other tools and allow reading
its api key from `BRAVE_SEARCH_API_KEY` instead of having to pass the
api key manually (no breaking changes)
- Improve Brave Search Tool by storing api key in `SecretStr` instead of
plain `str`.
    - Add unit test for `BraveSearchWrapper`
    - Reflect the changes in the documentation
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** ivan_brko
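
A sketch of the environment-variable flow, assuming the tool can now be constructed without passing the key explicitly (exact constructor usage may differ):

```python
import os
from langchain_community.tools import BraveSearch

os.environ["BRAVE_SEARCH_API_KEY"] = "<your-key>"  # read automatically after this change
tool = BraveSearch()  # previously required BraveSearch.from_api_key(api_key=...)
print(tool.run("LangChain"))
```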
2025-03-31 14:48:52 -04:00
ccurme
0c623045b5 core[patch]: pydantic 2.11 compat (#30554)
Release notes: https://pydantic.dev/articles/pydantic-v2-11-release

Covered here:

- We no longer access `model_fields` on class instances (that is now
deprecated);
- Update schema normalization for Pydantic version testing to reflect
changes to generated JSON schema (addition of `"additionalProperties":
True` for dict types with value Any or object).

## Considerations:

### Changes to JSON schema generation

#### Tool-calling / structured outputs

This may impact tool-calling + structured outputs for some providers,
but schema generation only changes if you have parameters of the form
`dict`, `dict[str, Any]`, `dict[str, object]`, etc. If dict parameters
are typed my understanding is there are no changes.

For OpenAI for example, untyped dicts work for structured outputs with
default settings before and after updating Pydantic, and error both
before/after if `strict=True`.

### Use of `model_fields`

There is one spot where we previously accessed `super(cls,
self).model_fields`, where `cls` is an object in the MRO. This was done
for the purpose of tracking aliases in secrets. I've updated this to
always be `type(self).model_fields`-- see comment in-line for detail.
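
The pattern in question, as a standalone sketch:

```python
from pydantic import BaseModel

class Settings(BaseModel):
    api_key: str = ""

settings = Settings()
# Instance access (`settings.model_fields`) is deprecated in Pydantic 2.11;
# access the attribute on the class instead.
fields = type(settings).model_fields
print(list(fields))  # ['api_key']
```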

---------

Co-authored-by: Sydney Runkle <54324534+sydney-runkle@users.noreply.github.com>
2025-03-31 14:22:57 -04:00
keshavshrikant
e8be3cca5c fix huggingface tokenizer default length function (#30185)
#30184
2025-03-31 11:54:30 -04:00
Fai LAW
4419340039 docs: add pre_filter usage in similarity_search_with_score (Azure Cosmos DB No SQL) (#30508)
`pre_filter` should be passed in the `Hybrid Search with filtering`
example. Otherwise, it is just an unused variable.
2025-03-31 11:33:00 -04:00
Wenqi Li
64f97e707e ollama[patch]: Support seed param for OllamaLLM (#30553)
**Description:** add the `seed` param to the `OllamaLLM` client for reproducibility

**Issue:** follow-up to a similar issue:
https://github.com/langchain-ai/langchain/issues/24703
see also https://github.com/langchain-ai/langchain/pull/24782

**Dependencies:** n/a
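
A minimal sketch of the new parameter (model name is illustrative; requires a local Ollama server):

```python
from langchain_ollama import OllamaLLM

llm = OllamaLLM(model="llama3.1", seed=42)  # fixed seed for reproducible outputs
print(llm.invoke("Pick a number between 1 and 100."))
```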
2025-03-31 11:28:49 -04:00
Christophe Bornet
8395abbb42 core: Fix test_stream_error_callback (#30228)
Fixes #29436

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-31 10:37:22 -04:00
Jorge Piedrahita Ortiz
b9e19c5f97 Docs: Add sambanova cloud embeddings docs (#30525)
- **Description:** Add SambaNova Cloud embeddings docs. Previously only
SambaStudio embeddings were supported; in the latest release of
`langchain_sambanova`, SambaNova Cloud embeddings are also available.
2025-03-31 10:16:15 -04:00
Augusto César Perin
f4d1df1b2d docs: add missing with_config method to Runnable templates API reference (#30560)
Broken source/docs links for Runnable methods

### What was changed
Added the `with_config` method to the method lists in both Runnable
template files:
- docs/api_reference/templates/runnable_non_pydantic.rst
- docs/api_reference/templates/runnable_pydantic.rst
2025-03-31 10:08:02 -04:00
Christophe Bornet
026de908eb core: Add ruff rules G, FA, INP, AIR and ISC (#29334)
Fixes mostly for rules G. See
https://docs.astral.sh/ruff/rules/#flake8-logging-format-g
2025-03-31 10:05:23 -04:00
Brayden Zhong
e4515f308f community: update RankLLM integration and fix LangChain deprecation (#29931)
# Community: update RankLLM integration and fix LangChain deprecation

- [x] **Description:**  
- Removed `ModelType` enum (`VICUNA`, `ZEPHYR`, `GPT`) to align with
RankLLM's latest implementation.
- Updated `chain({query})` to `chain.invoke({query})` to resolve
LangChain 0.1.0 deprecation warnings from
https://github.com/langchain-ai/langchain/pull/29840.

- [x] **Dependencies:** No new dependencies added.  

- [x] **Tests and Docs:**  
- Updated RankLLM documentation
(`docs/docs/integrations/document_transformers/rankllm-reranker.ipynb`).
  - Fixed LangChain usage in related code examples.  

- [x] **Lint and Test:**  
- Ran `make format`, `make lint`, and verified functionality after
updates.
  - No breaking changes introduced.  


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2025-03-31 09:50:00 -04:00
ccurme
b4fe1f1ec0 groq: release 0.3.2 (#30570) 2025-03-31 13:29:45 +00:00
Karol Zmorski
c1acf6f756 docs: Add docs for WatsonxToolkit from langchain-ibm (#30340)
**Description:**

Added docs for `WatsonxToolkit` from `langchain-ibm`:
- Sample notebook

Updated provider file: `ibm.mdx`.
2025-03-31 09:18:37 -04:00
ccurme
9213d94057 docs: update cassettes for chat token usage tracking guide (#30558) 2025-03-30 14:57:15 -04:00
ccurme
9c682af8f3 langchain: release 0.3.22 (#30557)
Closes https://github.com/langchain-ai/langchain/issues/30536
2025-03-30 14:48:22 -04:00
ccurme
08796802ca docs: keep tutorial runnable in CI (#30556) 2025-03-30 18:34:05 +00:00
William FH
b075eab3e0 Include delayed inputs in langchain tracer (#30546) 2025-03-28 16:07:22 -07:00
Thommy257
372dc7f991 core[patch]: fix loss of partially initialized variables during prompt composition (#30096)
**Description:**
This PR addresses the loss of partially initialised variables when
composing different prompts. I.e. it allows the following snippet to
run:

```python
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_messages([('system', 'Prompt {x} {y}')]).partial(x='1')
appendix = ChatPromptTemplate.from_messages([('system', 'Appendix {z}')])

(prompt + appendix).invoke({'y': '2', 'z': '3'})
```

Previously, this would have raised a `KeyError`, stating that variable
`x` remains undefined.

**Issue**
References issue #30049

**Todo**
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 20:41:57 +00:00
Koshik Debanath
e7883d5b9f langchain-openai: Support token counting for o-series models in ChatOpenAI (#30542)
Related to #30344

Add support for token counting for o-series models in
`test_token_counts.py`.

* **Update `_MODELS` and `_CHAT_MODELS` dictionaries**
- Add "o1", "o3", and "gpt-4o" to `_MODELS` and `_CHAT_MODELS`
dictionaries.

* **Update token counts**
  - Add token counts for "o1", "o3", and "gpt-4o" models.

---

For more details, open the [Copilot Workspace
session](https://copilot-workspace.githubnext.com/langchain-ai/langchain/pull/30542?shareId=ab208bf7-80a3-4b8d-80c4-2287486fedae).
2025-03-28 16:02:09 -04:00
Eugene Yurtsev
d075ad21a0 core[patch]: specify default event loop scope in pyproject.toml (#30543)
Specify default event loop scope
2025-03-28 19:51:19 +00:00
Ahmed Tammaa
f23c3e2444 text-splitters[patch]: Refactor HTMLHeaderTextSplitter for Enhanced Maintainability and Readability (#29397)
Please see PR #27678 for context

## Overview

This pull request presents a refactor of the `HTMLHeaderTextSplitter`
class aimed at improving its maintainability and readability. The
primary enhancements include simplifying the internal structure by
consolidating multiple private helper functions into a single private
method, thereby reducing complexity and making the codebase easier to
understand and extend. Importantly, all existing functionalities and
public interfaces remain unchanged.

## PR Goals

1. **Simplify Internal Logic**:
- **Consolidation of Private Methods**: The original implementation
utilized multiple private helper functions (`_header_level`,
`_dom_depth`, `_get_elements`) to manage different aspects of HTML
parsing and document generation. This fragmentation increased cognitive
load and potential maintenance overhead.
- **Streamlined Processing**: By merging these functionalities into a
single private method (`_generate_documents`), the class now offers a
more straightforward flow, making it easier for developers to trace and
understand the processing steps. (Thanks to @eyurtsev)

2. **Enhance Readability**:
- **Clearer Method Responsibilities**: With fewer private methods, each
method now has a more focused responsibility. The primary logic resides
within `_generate_documents`, which handles both HTML traversal and
document creation in a cohesive manner.
- **Reduced Redundancy**: Eliminating redundant checks and consolidating
logic reduces the code's verbosity, making it more concise without
sacrificing clarity.

3. **Improve Maintainability**:
- **Easier Debugging and Extension**: A simplified internal structure
allows for quicker identification of issues and easier implementation of
future enhancements or feature additions.
- **Consistent Header Management**: The new implementation ensures that
headers are managed consistently within a single context, reducing the
likelihood of bugs related to header scope and hierarchy.

4. **Maintain Backward Compatibility**:
- **Unchanged Public Interface**: All public methods (`split_text`,
`split_text_from_url`, `split_text_from_file`) and their signatures
remain unchanged, ensuring that existing integrations and usage patterns
are unaffected.
- **Preserved Docstrings**: Comprehensive docstrings are retained,
providing clear documentation for users and developers alike.

## Detailed Changes

1. **Removed Redundant Private Methods**:
- **Eliminated `_header_level`, `_dom_depth`, and `_get_elements`**:
These methods were merged into the `_generate_documents` method,
centralizing the logic for HTML parsing and document generation.

2. **Consolidated Document Generation Logic**:
- **Single Private Method `_generate_documents`**: This method now
handles the entire process of parsing HTML, tracking active headers,
managing document chunks, and yielding `Document` instances. This
consolidation reduces the number of moving parts and simplifies the
overall processing flow.

3. **Simplified Header Management**:
- **Immediate Header Scope Handling**: Headers are now managed within
the traversal loop of `_generate_documents`, ensuring that headers are
added or removed from the active headers dictionary in real-time based
on their DOM depth and hierarchy.
- **Removed `chunk_dom_depth` Attribute**: The need to track chunk DOM
depth separately has been eliminated, as header scopes are now directly
managed within the traversal logic.

4. **Streamlined Chunk Finalization**:
- **Enhanced `finalize_chunk` Function**: The chunk finalization process
has been simplified to directly yield a single `Document` when needed,
without maintaining an intermediate list. This change reduces
unnecessary list operations and makes the logic more straightforward.

5. **Improved Variable Naming and Flow**:
- **Descriptive Variable Names**: Variables such as `current_chunk` and
`node_text` provide clear insights into their roles within the
processing logic.
- **Direct Header Removal Logic**: Headers that are out of scope are
removed immediately during traversal, ensuring that the active headers
dictionary remains accurate and up-to-date.

6. **Preserved Comprehensive Docstrings**:
- **Unchanged Documentation**: All existing docstrings, including
class-level and method-level documentation, remain intact. This ensures
that users and developers continue to have access to detailed usage
instructions and method explanations.

## Testing

All existing test cases from `test_html_header_text_splitter.py` have
been executed against the refactored code. The results confirm that:

- **Functionality Remains Intact**: The splitter continues to accurately
parse HTML content, respect header hierarchies, and produce the expected
`Document` objects with correct metadata.
- **Backward Compatibility is Maintained**: No changes were required in
the test cases, and all tests pass without modifications, demonstrating
that the refactor does not introduce any regressions or alter existing
behaviors.


This example remains fully operational and behaves as before, returning
a list of `Document` objects with the expected metadata and content
splits.
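
(A minimal sketch of that unchanged public interface, for reference; the HTML string is illustrative:)

```python
from langchain_text_splitters import HTMLHeaderTextSplitter

splitter = HTMLHeaderTextSplitter(
    headers_to_split_on=[("h1", "Header 1"), ("h2", "Header 2")]
)
docs = splitter.split_text("<html><body><h1>Intro</h1><p>Hello</p></body></html>")
for doc in docs:
    print(doc.metadata, doc.page_content)
```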

## Conclusion

This refactor achieves a more maintainable and readable codebase by
simplifying the internal structure of the `HTMLHeaderTextSplitter`
class. By consolidating multiple private methods into a single, cohesive
private method, the class becomes easier to understand, debug, and
extend. All existing functionalities are preserved, and comprehensive
tests confirm that the refactor maintains the expected behavior. These
changes align with LangChain’s standards for clean, maintainable, and
efficient code.

---

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 15:36:00 -04:00
Christophe Bornet
86beb64b50 docs: Add doc for Vectorize provider (#30436)
This pull request adds documentation and a tutorial for integrating the
[Vectorize](https://vectorize.io/) service with LangChain. The most
important changes include adding a new documentation page for Vectorize
and creating a Jupyter notebook that demonstrates how to use the
Vectorize retriever.

The source code for the langchain-vectorize package can be found
[here](https://github.com/vectorize-io/integrations-python/tree/main/langchain).

Previews:
*
https://langchain-git-fork-cbornet-vectorize-langchain.vercel.app/docs/integrations/providers/vectorize/
*
https://langchain-git-fork-cbornet-vectorize-langchain.vercel.app/docs/integrations/retrievers/vectorize/

Documentation updates:

*
[`docs/docs/integrations/providers/vectorize.mdx`](diffhunk://#diff-7e00d4ce4768f73b4d381a7c7b1f94d138f1b27ebd08e3666b942630a0285606R1-R40):
Added a new documentation page for Vectorize, including an overview of
its features, installation instructions, and a basic usage example.

Tutorial updates:

*
[`docs/docs/integrations/retrievers/vectorize.ipynb`](diffhunk://#diff-ba5bb9a1b4586db7740944b001bcfeadc88be357640ded0c82a329b11d8d6e29R1-R294):
Created a Jupyter notebook tutorial that shows how to set up the
Vectorize environment, create a RAG pipeline, and use the LangChain
Vectorize retriever. The notebook includes steps for account creation,
token generation, environment setup, and pipeline deployment.
2025-03-28 15:25:21 -04:00
omahs
6f8735592b docs,langchain-community: Fix typos in docs and code (#30541)
Fix typos
2025-03-28 19:21:16 +00:00
Agus
47d50f49d9 docs: Add GOAT integration to docs (#30478)
This PR adds:
1. Docs for the GOAT integration 
2. An "Agentic Finance" table to the Tools page that includes GOAT

**Twitter handle**: @0xaguspunk
2025-03-28 15:19:37 -04:00
Shixian Sheng
94a7fd2497 docs: fix broken hyperlinks in fireworks integration package README (#30538)
Fix two broken hyperlinks
2025-03-28 15:18:44 -04:00
Oskar Stark
0d2cea747c docs: streamline LangSmith teasing (#30302)
This can only be reviewed by [hiding
whitespaces](https://github.com/langchain-ai/langchain/pull/30302/files?diff=unified&w=1).

The motivation behind this PR is to get my hands on the docs and make
the LangSmith teasing short and clear.

Right now I don't know how to do it, but this could be an include in the
future.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-28 15:13:22 -04:00
Eugene Yurtsev
dd0faab07e fix types 2025-03-28 14:23:50 -04:00
Eugene Yurtsev
21ab1dc675 Merge branch 'master' of github.com:xzq-xu/langchain into xzq-xu/master 2025-03-28 13:56:49 -04:00
Eugene Yurtsev
22cee5d983 x 2025-03-28 13:56:10 -04:00
Eugene Yurtsev
a14d8b103b Merge branch 'master' into master 2025-03-28 13:53:58 -04:00
Eugene Yurtsev
6d22f40a0b x 2025-03-28 13:51:06 -04:00
Philippe PRADOS
92189c8b31 community[patch]: Handle gray scale images in ImageBlobParser (Fixes 30261 and 29586) (#30493)
Fix [29586](https://github.com/langchain-ai/langchain/issues/29586) and
[30261](https://github.com/langchain-ai/langchain/pull/30261)
2025-03-28 10:15:40 -04:00
小豆豆学长
1f0686db80 community: add netmind integration (#30149)
Co-authored-by: yanrujing <rujing.yan@protagonist-ai.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2025-03-27 15:27:04 -04:00
Kyungho Byoun
e6b6c07395 community: add HANA dialect to SQLDatabase (#30475)
This PR adds support for the HANA dialect in SQLDatabase, a wrapper
class around SQLAlchemy.

Currently, it is not possible to set a schema name when using HANA DB
with LangChain, and no message is shown to the user, which makes it
hard to figure out why the SQL does not work as expected.

Here is the reference document for HANA DB to set schema for the
session.

- [SET SCHEMA Statement (Session
Management)](https://help.sap.com/docs/SAP_HANA_PLATFORM/4fe29514fd584807ac9f2a04f6754767/20fd550375191014b886a338afb4cd5f.html)
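
A hedged usage sketch (connection string, credentials, and schema are
placeholders; assumes the SQLAlchemy HANA dialect packages, e.g.
`hdbcli` and `sqlalchemy-hana`, are installed):

```python
from langchain_community.utilities import SQLDatabase

# With this change, the schema is set for the HANA session instead of
# being silently ignored.
db = SQLDatabase.from_uri(
    "hana://DBADMIN:Password1@localhost:39015",  # placeholder credentials
    schema="MY_SCHEMA",
)
print(db.get_usable_table_names())
```
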
2025-03-27 15:19:50 -04:00
Eugene Yurtsev
1cf91a2386 docs: fix llms-txt (#30528)
* Fix trailing slashes
* Fix chat model integration links
2025-03-27 19:02:44 +00:00
Christophe Bornet
e181d43214 core: Bump ruff version to 0.11 (#30519)
Changes are from the new TC006 rule:
https://docs.astral.sh/ruff/rules/runtime-cast-value/
TC006 is auto-fixed.
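
A before/after sketch of what TC006 changes (illustrative, not taken
from the diff):

```python
from typing import cast

x: object = [1, 2, 3]

y = cast(list[int], x)    # flagged by TC006: type expression evaluated at runtime
z = cast("list[int]", x)  # auto-fix: the type expression is quoted instead
```
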
2025-03-27 13:01:49 -04:00
ccurme
59908f04d4 fireworks: release 0.2.9 (#30527) 2025-03-27 16:04:20 +00:00
ccurme
05482877be mistralai: release 0.2.10 (#30526) 2025-03-27 16:01:40 +00:00
Andras L Ferenczi
63673b765b Fix: Enable max_retries Parameter in ChatMistralAI Class (#30448)
**partners: Enable max_retries in ChatMistralAI**

**Description**

- This pull request reactivates the retry logic in the
completion_with_retry method of the ChatMistralAI class, restoring the
intended functionality of the previously ineffective max_retries
parameter. It adds a new unit test that mocks failed/successful retry
calls and an integration test to confirm end-to-end functionality.
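
A hedged example of the restored behavior (model name is a placeholder;
assumes `MISTRAL_API_KEY` is set):

```python
from langchain_mistralai import ChatMistralAI

# With the retry logic reactivated, transient API failures are retried
# up to `max_retries` times before an exception is raised.
llm = ChatMistralAI(model="mistral-large-latest", max_retries=3)
print(llm.invoke("Hello").content)
```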

**Issue**
- Closes #30362

**Dependencies**
- No additional dependencies required

Co-authored-by: andrasfe <andrasf94@gmail.com>
2025-03-27 11:53:44 -04:00
Lakindu Boteju
3aa080c2a8 Fix typos in pdfminer and pymupdf documentations (#30513)
This pull request fixes the PDF loader documentation, correcting the
loader names and installation instructions in the Jupyter notebooks.

Documentation fixes:

*
[`docs/docs/integrations/document_loaders/pdfminer.ipynb`](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL34-R34):
Changed references from `PyMuPDFLoader` to `PDFMinerLoader` and updated
the installation instructions to replace `pymupdf` with `pdfminer`.
[[1]](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL34-R34)
[[2]](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL63-R63)
[[3]](diffhunk://#diff-a4a0561cd4a6e876ea34b7182de64a452060b921bb32d37b02e6a7980a41729bL330-R330)

*
[`docs/docs/integrations/document_loaders/pymupdf.ipynb`](diffhunk://#diff-8487995f457e33daa2a08fdcff3b42e144eca069eeadfad5651c7c08cce7a5cdL292-R292):
Corrected the loader name from `PDFPlumberLoader` to `PyMuPDFLoader`.
2025-03-27 11:29:11 -04:00
Miguel Grinberg
14b7d790c1 docs: Restore accidentally deleted docs on Elasticsearch strategies (#30521)
- **Description:** Adding back a section of the Elasticsearch
vectorstore documentation that was deleted in commit a72fddbf8d. The
only change I've made is to update the example RRF request, which was
out of date.


2025-03-27 11:27:20 -04:00
ccurme
0b2244ea88 Revert "docs: restore some content to Elasticsearch integration page" (#30523)
Reverts langchain-ai/langchain#30522 in favor of
https://github.com/langchain-ai/langchain/pull/30521.
2025-03-27 15:12:36 +00:00
ccurme
80064893c1 docs: restore some content to Elasticsearch integration page (#30522)
https://github.com/langchain-ai/langchain/pull/24858 standardized vector
store integration pages, but deleted some content.

Here we merge some of the old content back in. We use this version as a
reference:
`docs/docs/integrations/vectorstores/elasticsearch.ipynb` at commit
2c798622cd.
2025-03-27 11:07:19 -04:00
Keiichi Hirobe
956b09f468 core[patch]: stop deleting records with "scoped_full" when doc is empty (#30520)
Fix a bug that causes `scoped_full` in index to delete records when there are no input docs.
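
A minimal sketch of the affected call path, using in-memory stand-ins
(the `demo` namespace and embedding size are arbitrary):

```python
from langchain_core.embeddings import DeterministicFakeEmbedding
from langchain_core.indexing import InMemoryRecordManager, index
from langchain_core.vectorstores import InMemoryVectorStore

record_manager = InMemoryRecordManager(namespace="demo")
record_manager.create_schema()
vector_store = InMemoryVectorStore(DeterministicFakeEmbedding(size=8))

# Before this fix, an empty input with cleanup="scoped_full" could wipe
# previously indexed records; now nothing should be deleted.
result = index(
    [], record_manager, vector_store, cleanup="scoped_full", source_id_key="source"
)
print(result)  # num_deleted should be 0 for empty input
```
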
2025-03-27 11:04:34 -04:00
Christophe Bornet
b28a474e79 core[patch]: Add ruff rules for PLW (Pylint Warnings) (#29288)
See https://docs.astral.sh/ruff/rules/#warning-w_1

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-27 10:26:12 +00:00
xzq.xu
92dc3f7341 format test lint passed 2025-03-27 13:44:59 +08:00
xzq.xu
d0a9808148 modify test name 2025-03-27 13:34:51 +08:00
xzq.xu
ed2428f902 add a unit test 2025-03-27 12:43:16 +08:00
David Sánchez Sánchez
75823d580b community: fix perplexity response parameters not being included in model response (#30440)
This pull request includes enhancements to the `perplexity.py` file in
the `chat_models` module, focusing on improving the handling of
additional keyword arguments (`additional_kwargs`) in message processing
methods. Additionally, new unit tests have been added to ensure the
correct inclusion of citations, images, and related questions in the
`additional_kwargs`.

Issue: resolves https://github.com/langchain-ai/langchain/issues/30439

Enhancements to `perplexity.py`:

*
[`libs/community/langchain_community/chat_models/perplexity.py`](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL208-L212):
Modified the `_convert_delta_to_message_chunk`, `_stream`, and
`_generate` methods to handle `additional_kwargs`, which include
citations, images, and related questions.
[[1]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL208-L212)
[[2]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fL277-L286)
[[3]](diffhunk://#diff-d3e4d7b277608683913b53dcfdbd006f0f4a94d110d8b9ac7acf855f1f22207fR324-R331)
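
For instance (hedged sketch; the model name and API key are
placeholders):

```python
from langchain_community.chat_models import ChatPerplexity

chat = ChatPerplexity(model="sonar", pplx_api_key="...")  # placeholders
for chunk in chat.stream("What is LangChain?"):
    # After this change, Perplexity-specific fields such as citations,
    # images, and related questions surface in additional_kwargs.
    if "citations" in chunk.additional_kwargs:
        print(chunk.additional_kwargs["citations"])
```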

New unit tests:

*
[`libs/community/tests/unit_tests/chat_models/test_perplexity.py`](diffhunk://#diff-dab956d79bd7d17a0f5dea3f38ceab0d583b43b63eb1b29138ee9b6b271ba1d9R119-R275):
Added new tests `test_perplexity_stream_includes_citations_and_images`
and `test_perplexity_stream_includes_citations_and_related_questions` to
verify that the `stream` method correctly includes citations, images,
and related questions in the `additional_kwargs`.
2025-03-26 22:28:08 -04:00
Eugene Yurtsev
7664874a0d docs: llms-txt (#30506)
First just verifying it's included in the manifest
2025-03-26 22:21:59 -04:00
Adeel Ehsan
d7d0bca2bc docs: add vectara to libs package yml (#30504) 2025-03-26 16:47:53 -04:00
ccurme
3781144710 docs: update doc on token usage tracking (#30505) 2025-03-26 16:13:45 -04:00
ccurme
a9b1e1b177 openai: release 0.3.11 (#30503) 2025-03-26 19:24:37 +00:00
ccurme
8119a7bc5c openai[patch]: support streaming token counts in AzureChatOpenAI (#30494)
When OpenAI originally released `stream_options` to enable token usage
during streaming, it was not supported in AzureOpenAI. It is now
supported.

Like the OpenAI SDK (`src/openai/resources/completions.py` at commit
f66d2e6fdc, L68),
ChatOpenAI does not return usage metadata during streaming by default
(which adds an extra chunk to the stream). The OpenAI SDK requires users
to pass `stream_options={"include_usage": True}`. ChatOpenAI implements
a convenience argument `stream_usage: Optional[bool]`, and an attribute
`stream_usage: bool = False`.

Here we extend this to AzureChatOpenAI by moving the `stream_usage`
attribute and `stream_usage` kwarg (on `_(a)stream`) from ChatOpenAI to
BaseChatOpenAI.
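
A hedged sketch of the resulting usage (deployment and API version are
placeholders; assumes the Azure endpoint and key are set via
environment variables):

```python
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="my-deployment",  # placeholder
    api_version="2024-10-21",          # placeholder
    stream_usage=True,  # now honored by Azure, not just ChatOpenAI
)

full = None
for chunk in llm.stream("hi"):
    full = chunk if full is None else full + chunk
print(full.usage_metadata)  # token counts arrive in the final extra chunk
```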

---

Additional consideration: we must be sensitive to the number of users
using BaseChatOpenAI to interact with other APIs that do not support the
`stream_options` parameter.

Suppose OpenAI in the future updates the default behavior to stream
token usage. Currently, BaseChatOpenAI only passes `stream_options` if
`stream_usage` is True, so there would be no way to disable this new
default behavior.

To address this, we could update the `stream_usage` attribute to
`Optional[bool] = None`, but this is technically a breaking change (as
currently values of False are not passed to the client). IMO: if / when
this change happens, we could accompany it with this update in a minor
bump.

--- 

Related previous PRs:
- https://github.com/langchain-ai/langchain/pull/22628
- https://github.com/langchain-ai/langchain/pull/22854
- https://github.com/langchain-ai/langchain/pull/23552

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2025-03-26 15:16:37 -04:00
Adeel Ehsan
56629ed87b docs: updated the docs for vectara (#30398)
**PR title**: Docs update for Vectara
**Description:** Vectara has moved to a LangChain partner package, and
the docs are updated accordingly.
2025-03-26 15:02:21 -04:00
ccurme
f68eaab44f tests: release 0.3.17 (#30502) 2025-03-26 18:56:54 +00:00
Louis Auneau
0b532a4ed0 community: Azure Document Intelligence parser features not available fixed (#30370)
- **Description:** The Azure Document Intelligence OCR solution has a
*features* parameter that enables capabilities such as high-resolution
document analysis and key-value pair extraction. In the LangChain
parser, it could be supplied as an `analysis_feature` parameter to the
constructor, which passed it on to the `DocumentIntelligenceClient`.
However, according to the `DocumentIntelligenceClient` [API
Reference](https://learn.microsoft.com/en-us/python/api/azure-ai-documentintelligence/azure.ai.documentintelligence.documentintelligenceclient?view=azure-python),
this is not a valid constructor parameter. It was therefore removed and
is instead stored as a parser property that is used in
`begin_analyze_document`'s `features` parameter (see [API
Reference](https://learn.microsoft.com/en-us/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.documentanalysisclient?view=azure-python#azure-ai-formrecognizer-documentanalysisclient-begin-analyze-document)).
I also removed the check for "supported features" since all features are
supported out of the box, and I did not validate the provided `str`
against the Azure package's feature enumeration, since the `ValueError`
raised when constructing the enumeration object is explicit enough. One
last caveat: some features are not supported for certain kinds of
documents. This is documented in the Microsoft documentation, and the
exceptions are also explicit.
- **Issue:** N/A
- **Dependencies:** No
- **Twitter handle:** @Louis___A
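
A hedged sketch of passing features through the loader (endpoint, key,
and file path are placeholders):

```python
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader

loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint="https://<resource>.cognitiveservices.azure.com/",  # placeholder
    api_key="...",                                                   # placeholder
    file_path="report.pdf",
    api_model="prebuilt-layout",
    analysis_features=["ocrHighResolution"],  # stored on the parser and
    # forwarded to begin_analyze_document, not to the client constructor
)
docs = loader.load()
```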

---------

Co-authored-by: Louis Auneau <louis@handshakehealth.co>
2025-03-26 14:40:14 -04:00
Really Him
fbd2e10703 docs: hide jsx in llm chain tutorial (#30187)
## **Description:** 
The Jupyter notebooks in the docs section are extremely useful and
critical for widespread adoption of LangChain amongst new developers.
However, because they are also converted to MDX and used to build the
HTML for the Docusaurus site, they contain JSX code that degrades
readability when opened in a "notebook" setting (local notebook server,
google colab, etc.). For instance, here we see the website, with a nice
React tab component for installation instructions (`pip` vs `conda`):

![Screenshot 2025-03-07 at 2 07
15 PM](https://github.com/user-attachments/assets/a528d618-f5a0-4d2e-9aed-16d4b8148b5a)

Now, here is the same notebook viewed in colab:

![Screenshot 2025-03-07 at 2 08
41 PM](https://github.com/user-attachments/assets/87acf5b7-a3e0-46ac-8126-6cac6eb93586)

Note that the text following "To install LangChain run:" contains
snippets of JSX code that is (i) confusing, (ii) bad for readability,
(iii) potentially misleading for a novice developer, who might take it
literally to mean that "to install LangChain I should run `import Tabs
from...`" and then an ill-formed command which mixes the `pip` and
`conda` installation instructions.

Ideally, we would like to have a system that presents a
similar/equivalent UI when viewing the notebooks on the documentation
site, or when interacting with them in a notebook setting - or, at a
minimum, we should not present ill-formed JSX snippets to someone trying
to execute the notebooks. As the documentation itself states, running
the notebooks yourself is a great way to learn the tools. Therefore,
these distracting and ill-formed snippets are contrary to that goal.

## **Fixes:**
* Comment out the JSX code inside the notebook
`docs/tutorials/llm_chain` with a special directive `<!-- HIDE_IN_NB`
(closed with `HIDE_IN_NB -->`). This makes the JSX code "invisible" when
viewed in a notebook setting.
* Add a custom preprocessor that runs `process_cell` and simply erases
these comment strings, ensuring the JSX is rendered when converted to
MDX (a minimal sketch follows this list).
* Minor tweak: Refactor some of the Markdown instructions into an
executable codeblock for better experience when running as a notebook.
* Minor tweak: Optionally try to get the environment variables from a
`.env` file in the repo so the user doesn't have to enter it every time.
Depends on the user installing `python-dotenv` and adding their own
`.env` file.
* Add an environment variable for "LANGSMITH_PROJECT"
(default="default"), per the LangSmith docs, so a local user can target
a specific project in their LangSmith account.
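
A minimal sketch of such a preprocessor (class name and marker handling
are illustrative, not the exact implementation in this PR):

```python
from nbconvert.preprocessors import Preprocessor


class UnhideJsxPreprocessor(Preprocessor):
    """Erase the HIDE_IN_NB comment markers so the JSX renders in MDX."""

    def preprocess_cell(self, cell, resources, index):
        cell.source = cell.source.replace("<!-- HIDE_IN_NB", "")
        cell.source = cell.source.replace("HIDE_IN_NB -->", "")
        return cell, resources
```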

**NOTE:** If this PR is approved, and the maintainers agree with the
general goal of aligning the notebook execution experience and the doc
site UI, I would plan to implement this on the rest of the JSX snippets
that are littered in the notebooks.

**NOTE:** I wasn't able to/don't know how to run the linkcheck Makefile
commands.

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Really Him <hesereallyhim@proton.me>
2025-03-26 14:22:33 -04:00
Philippe PRADOS
8e5d2a44ce community[patch]: update PyPDFParser to take into account filters returned as arrays (#30489)
Image parsing was triggering a bug because the extracted /Filter object
is sometimes returned as an array and sometimes as a string.

Fix [Issue
30098](https://github.com/langchain-ai/langchain/issues/30098)
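
A hedged sketch of the kind of normalization involved (helper name is
hypothetical):

```python
def _as_filter_list(filter_value) -> list[str]:
    # PDF /Filter entries may come back as a single name or an array of
    # names; normalize both shapes to a list of strings.
    if isinstance(filter_value, list):
        return [str(f) for f in filter_value]
    return [str(filter_value)]
```
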
2025-03-26 14:16:54 -04:00
ccurme
422ba4cde5 infra: handle flaky tests (#30501) 2025-03-26 13:28:56 -04:00
xzq.xu
913c8b71d9 format import 2025-03-26 23:34:38 +08:00
xzq.xu
7e3dea5db8 add a new-line 2025-03-26 23:32:07 +08:00
xzq.xu
d602141ab1 remove unused e 2025-03-26 23:10:41 +08:00
xzq.xu
dd9031fc82 _prep_run_args,tool_input copy, Exception 2025-03-26 23:06:43 +08:00
xzq.xu
3382b0d8ea _prep_run_args,tool_input copy 2025-03-26 22:56:32 +08:00
xzq.xu
e90abce577 Merge remote-tracking branch 'origin/master' 2025-03-26 22:42:15 +08:00
xzq.xu
c127ae9d26 fix the format 2025-03-26 22:41:58 +08:00
xzq.xu
65ecc22606 # Fix: Prevent run_manager from being added to state object 2025-03-26 22:36:31 +08:00
1059 changed files with 54502 additions and 33204 deletions


@@ -29,14 +29,14 @@ body:
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I searched the LangChain documentation with the integrated search.
required: true
- label: I used the GitHub search to find a similar question and didn't find it.
required: true
- label: I am sure that this is a bug in LangChain rather than my code.
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- label: I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.
required: true
- type: textarea
id: reproduction
validations:


@@ -76,6 +76,7 @@ jobs:
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: |
make integration_tests


@@ -327,6 +327,7 @@ jobs:
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
XAI_API_KEY: ${{ secrets.XAI_API_KEY }}
DEEPSEEK_API_KEY: ${{ secrets.DEEPSEEK_API_KEY }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
@@ -394,8 +395,11 @@ jobs:
# Checkout the latest package files
rm -rf $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}/*
cd $GITHUB_WORKSPACE/libs/partners/${{ matrix.partner }}
git checkout "$LATEST_PACKAGE_TAG" -- .
rm -rf $GITHUB_WORKSPACE/libs/standard-tests/*
cd $GITHUB_WORKSPACE/libs/
git checkout "$LATEST_PACKAGE_TAG" -- standard-tests/
git checkout "$LATEST_PACKAGE_TAG" -- partners/${{ matrix.partner }}/
cd partners/${{ matrix.partner }}
# Print as a sanity check
echo "Version number from pyproject.toml: "


@@ -0,0 +1,29 @@
name: Check `langchain-core` version equality

on:
  pull_request:
    paths:
      - 'libs/core/pyproject.toml'
      - 'libs/core/langchain_core/version.py'

jobs:
  check_version_equality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check version equality
        run: |
          PYPROJECT_VERSION=$(grep -Po '(?<=^version = ")[^"]*' libs/core/pyproject.toml)
          VERSION_PY_VERSION=$(grep -Po '(?<=^VERSION = ")[^"]*' libs/core/langchain_core/version.py)

          # Compare the two versions
          if [ "$PYPROJECT_VERSION" != "$VERSION_PY_VERSION" ]; then
            echo "langchain-core versions in pyproject.toml and version.py do not match!"
            echo "pyproject.toml version: $PYPROJECT_VERSION"
            echo "version.py version: $VERSION_PY_VERSION"
            exit 1
          else
            echo "Versions match: $PYPROJECT_VERSION"
          fi

.github/workflows/codspeed.yml (new file)

@@ -0,0 +1,44 @@
name: CodSpeed

on:
  push:
    branches:
      - master
  pull_request:
    paths:
      - 'libs/core/**'
  # `workflow_dispatch` allows CodSpeed to trigger backtest
  # performance analysis in order to generate initial data.
  workflow_dispatch:

jobs:
  codspeed:
    name: Run benchmarks
    if: (github.event_name == 'pull_request' && contains(github.event.pull_request.labels.*.name, 'run-codspeed-benchmarks')) || github.event_name == 'workflow_dispatch' || github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # We have to use 3.12, 3.13 is not yet supported
      - name: Install uv
        uses: astral-sh/setup-uv@v5
        with:
          python-version: "3.12"

      # Using this action is still necessary for CodSpeed to work
      - uses: actions/setup-python@v3
        with:
          python-version: "3.12"

      - name: install deps
        run: uv sync --group test
        working-directory: ./libs/core

      - name: Run benchmarks
        uses: CodSpeedHQ/action@v3
        with:
          token: ${{ secrets.CODSPEED_TOKEN }}
          run: |
            cd libs/core
            uv run --no-sync pytest ./tests/benchmarks --codspeed
          mode: walltime


@@ -6,11 +6,6 @@ on:
push:
branches: [jacob/people]
workflow_dispatch:
inputs:
debug_enabled:
description: 'Run the build with tmate debugging enabled (https://github.com/marketplace/actions/debugging-with-tmate)'
required: false
default: 'false'
jobs:
langchain-people:
@@ -26,12 +21,6 @@ jobs:
# Ref: https://github.com/actions/runner/issues/2033
- name: Fix git safe.directory in container
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
# Allow debugging with tmate
- name: Setup tmate session
uses: mxschmitt/action-tmate@v3
if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.debug_enabled == 'true' }}
with:
limit-access-to-actor: true
- uses: ./.github/actions/people
with:
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}


@@ -61,6 +61,7 @@ jobs:
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}


@@ -145,6 +145,7 @@ jobs:
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
PPLX_API_KEY: ${{ secrets.PPLX_API_KEY }}
run: |
cd langchain/${{ matrix.working-directory }}
make integration_tests

.gitignore

@@ -59,6 +59,7 @@ coverage.xml
*.py,cover
.hypothesis/
.pytest_cache/
.codspeed/
# Translations
*.mo


@@ -15,8 +15,9 @@
[![GitHub star chart](https://img.shields.io/github/stars/langchain-ai/langchain?style=flat-square)](https://star-history.com/#langchain-ai/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/langchain-ai/langchain?style=flat-square)](https://github.com/langchain-ai/langchain/issues)
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode&style=flat-square)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/langchain-ai/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/langchain-ai/langchain)
[<img src="https://github.com/codespaces/badge.svg" title="Open in Github Codespace" width="150" height="20">](https://codespaces.new/langchain-ai/langchain)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![CodSpeed Badge](https://img.shields.io/endpoint?url=https://codspeed.io/badge.json)](https://codspeed.io/langchain-ai/langchain)
> [!NOTE]
> Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langchain-ai/langchainjs).


@@ -30,7 +30,7 @@
"outputs": [],
"source": [
"# lock to 0.10.19 due to a persistent bug in more recent versions\n",
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml pillow matplotlib tiktoken open_clip_torch torch"
"! pip install \"unstructured[all-docs]==0.10.19\" pillow pydantic lxml matplotlib tiktoken open_clip_torch torch"
]
},
{
@@ -409,7 +409,7 @@
" table_summaries,\n",
" tables,\n",
" image_summaries,\n",
" image_summaries,\n",
" img_base64_list,\n",
")"
]
},


@@ -358,7 +358,7 @@
"id": "6e5cd014-db86-4d6b-8399-25cae3da5570",
"metadata": {},
"source": [
"## Helper function to plot retrived similar images"
"## Helper function to plot retrieved similar images"
]
},
{


@@ -11,6 +11,7 @@
import json
import os
import sys
from datetime import datetime
from pathlib import Path
import toml
@@ -104,7 +105,7 @@ def skip_private_members(app, what, name, obj, skip, options):
# -- Project information -----------------------------------------------------
project = "🦜🔗 LangChain"
copyright = "2023, LangChain Inc"
copyright = f"{datetime.now().year}, LangChain Inc"
author = "LangChain, Inc"
html_favicon = "_static/img/brand/favicon.png"
@@ -275,3 +276,7 @@ if os.environ.get("READTHEDOCS", "") == "True":
html_context["READTHEDOCS"] = True
master_doc = "index"
# If a signatures length in characters exceeds 60,
# each parameter within the signature will be displayed on an individual logical line
maximum_signature_line_length = 60


@@ -7,7 +7,7 @@
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_config <langchain_core.runnables.base.Runnable.with_config>`, :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
{% block attributes %}
{% if attributes %}


@@ -19,6 +19,6 @@
.. NOTE:: {{objname}} implements the standard :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>`. 🏃
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
The :py:class:`Runnable Interface <langchain_core.runnables.base.Runnable>` has additional methods that are available on runnables, such as :py:meth:`with_config <langchain_core.runnables.base.Runnable.with_config>`, :py:meth:`with_types <langchain_core.runnables.base.Runnable.with_types>`, :py:meth:`with_retry <langchain_core.runnables.base.Runnable.with_retry>`, :py:meth:`assign <langchain_core.runnables.base.Runnable.assign>`, :py:meth:`bind <langchain_core.runnables.base.Runnable.bind>`, :py:meth:`get_graph <langchain_core.runnables.base.Runnable.get_graph>`, and more.
.. example_links:: {{ objname }}


View File

@@ -1 +0,0 @@
eNrtWctu20YUbbdZdVOgS5boqtDIpEQ9DaOwLTsxEkeOH0icthBGM0NxbJJDzwwtyYEXTfsDXHXd1JEKw01bJOg7XXfRH3AX/Yh+QS8lObKR9KF1qYUgzty5j3MfhyIfDg+ZVFyEr5/xUDOJiYYLlTwcSnYQM6U/GQRMe4KebDS3tj+PJT9/19M6UvW5ORzxvIhYiHmeiGDu0J4jHtZz8Dvy2UjNSVvQ/vmnD8yAKYU7TJn19x+YRIClUJt1c5v5vhEwAxt7Yp+ZOVMKn8F6rJg0jz/MmYGgzIeFTqSRI1DAQw5SSkuGA7OuZcwurloiGpk06w9MHhI/pqwVpzbHYsc5U7MgggB1LGHNylvHQ49hCtH/8dobJ55QOnlyNaKvMCEMDLOQCMrDTvJl54hHOYMy18eanUIcIRvhlZzuMxYh7PNDNhifSr7GUeRzgtP9uT0lwrNJ3Ej3I/by9mkaNAKHQ508a4ITi2tzG32APjTsfNnO21/3kNKYhz5giXwM/gyi0f5PlzciTPZBCZqkNRmMDz+5LCNU8ngdk+bWFZVYEi95jGVQdp5eXpdxqHnAkuHyxsvmJptTc8W8XchXv7miWPVDkjx2sa/Yd1cOMy37iAjQkXxmDYgQ+5wl53+2WsRttYOFWyvW9SZv7hxsWiv3bzv7ivQr7dZmU8Xxmhs1mmt78Z2jUN2QXCC7UqxWqjXLqiI7b+UhZFSq7Vc92ryxHK/1Vq3GzaXm+kF1Ua5Xt9rhvYbtdqs7tUiuNMqhtR1ul+93JA706qFDaZnuVNfvx/n8dV+t+mJH7O3mC7sbrVsFqztvgHfxIacL1mJt96DX2L5Z2/TKmPb2G5ux192xSH59qVfxFuXG6uL9zuZeIepccq9sV5A18bBsOVUr/Ty5qA2fhR3tJZ/bJfsLyVQEFc0+HgBkOlYPT6AO2W+/Difd9Kh5c1rCb540oCaT53cZzRlWyVhlbaNgFUrwVS+V6yXbuL6+fbY8MbOdluC5oVlPz7HDdGXcRfMGtLBUTC/E2kXVb7YlDpULdbly0QND4sXhPqOny6+s/udp9UNq03igmRHrRUIxNHEzObuHNsdzBa01no5bDQnZwSE/GrVC8nzUBt2jXpeSmFLvsBtYtSOnyNssJu6zyZFIitQMOIQClZzYVs1+Mtm6qMRTiN5CtoUs+8cegr5nPg84IDz6nkw3OFsC+L9/WUDDQII5OHRG+bF+uSwhWQAlnBqfqnFqtdrPrxa6UFUEkVql9uNVKQD7khq7EKjvXxaYqHhkqbPehTTiNDl/By5aNsYVWnZsWrHLAAUp1+y2XXAYcZyS7VScHyC5nICWNJuRkJBtRmCU635yngtwL506C0W7VCxDpPPGZHxuxe2GSGNQ80YkmS8w/Yq4iGDiMTSuyGTY2L29uL62/O09dLm0UHM8jJNhKFTIXXewxSQkJjklvogpjE/JBsuraHNxN3lWs2iJuEWnVGtXqy520VJza4h9cPKQJE+94oJZd5yiOW8EeKFahnyMWOWjQRpU2Pn9rVOKNa4bMPYpMEVKQQQICC32Onv9u/3Npe7OjTtyxbWPogOvqA5uNSv9AEhEtPegfCcn8lPSyo8KHAQINIRmoPOid23rlYSEoNAcZFWQXU25CQLlhLU0Bwqrm8AVOPZ1utFXwEAtF3xmMgLXU9tu1KoUGK3gdskhqU1PwOExTfKQsh4wVQ6U+BqnxDahRwyVDwkIU7VTMk3pk7nAeeBfGPs+UJ4vOtApbTVeyJlgnCuvBYEB4UykgGUnPDm6vHbt/wPnFLu7Xt/M8PrveBkUos0AmwEw7bEMsFkAUwRIIoNsBsiIFN0MsFlqrMvDDLBZAMMZXrPh1cUyI8pZIHvvg/CDrMhmQWyJEQx/mjPMZmnM7G5sRqbEKgNsFsBErNO/6Okzswy4WYDLbslmnGQ868yZAHM587N7slkgeztD69/Q+neATKVFZP5PIJpG+cAEUIJIt8avMMAfO31ofeHvdLmaM7XQ2H+xUqjlrp5tUaYx90eveEdvIegLWYgNx5SL6cLxK6xcVjBOC4TzDzpgYfQuFwxFklFOrnhspc/b0yz8zfbx8Yvsvt9o3l758Nq1vwCNNtHn

View File

@@ -0,0 +1 @@
eNrtVwlwFGUWTggEXA8iECDRNU0LEiE96Z4rMxOC5ObIZJJJQgImTvV0/zPTyUx3092TY0IQEUEOgw0IK8dqIMmEkEMqIEQuWURURCGehMMrK7hZtNQFBHWz/0wmkBS65W5h1bpl1xzd/b//ve+997/3f/8ibykQRIZjg5sYVgICSUnwQVy9yCuAeW4gSovrXUBycHRtliknd6tbYE5NckgSLxpiY0meUXA8YElGQXGu2FIilnKQUiy8553Ar6bWytEVncEPVaIuIIqkHYio4eFKlOKgKVZCDWg+nDBRRCQHQMoACf8EhGGRJE6UOPYhNAYVOCeAYm4RCGhVUQzq4mjghC/svISpOczFsAyUEiUBkC7UYCOdIohBJY5z9hqSKnjfdJub9bsFRa/fGipRlnT5Ru1AsgSMQwEaiJTA8L0yaDqQBoAjJYREnBxF+sYVUJwnBagFBk70aeQFGA9BYoD/qU/Odx9AApEyrB2tqoKuwfgyAqAh0BuS0MWAJGctBpQEJauKqrwOQNLQxKpaB4yM3DIw8K0kRQEYD8BSHA21y812D8PHIDSwOUkJNMJos8DvtNxYAgCPkU6mFNT3zpJfIHneyfSajy0WObYpkB3MB+Tm4UZfLjCYSlaSd5ogiMQZsVkVcIWwCKHQahXKF8oxUSIZ1gkzjjlJiKee94/v7T/Ak1QJVIIFVp9c3zu5pb8MJ8p1RpIy5QxQSQqUQ64jBZdW3db/veBmJcYFZG9y1s3mAoPXzXlVCoKAnx0DNIsVLCXX+VfR7gGzgSRUYBQHlcg1eEtfgJyAtUsOeauK0DcIQOThggeP18NpkltcVAuTAd58zRtY+FtMs/qyeC5oTG0KTIy8P8fNxiAqHDGSAqLElRqE0BnUaoNKj6Qbc5uSA2ZyfzIPO3IFkhVtMBepfXn3Ug43WwLoxuSfzPh+X8ahNz74sK4wUM5zIsACqOSmAszcW/LYjJS23uWFcYKdZBmP36y835/6Mk95GU25adpRWubC9R61irECN2XbGZgCa8BnBgLCXKK8VaPXtQRG+oLfCH3FMQLHcOKlckyAoXAyLgbG0/8b6DuiXKvBcXzPzQISVwJYUfaqcf91oL+EAFwwaT7bN9So9Xr9vp8W6lOl0vsu/KWBUiLoj4ZQusQ9NwsEVGzBxabyPmmMoeVT4+GDhaQ0NqtSqwE2FanV2wgdjhO0Xq206uk4XBunbPc1BApq8SWT5wQJEwEFm6xUIZ+KcZHlvkJLUBEalRZ6Gg97I+V00yDHbU3hfD6I8QgvACdH0q2UDaNIygGw3vUne1PmZCYaZyQ35kCQyRxXwoDVncFjLRbKZrG6EoRZlnkzmelpeL5DbVeydDGtmIlP98xzp8zO5fNmauelFDt06WVUko3CiDi1CgJQxukxQoErYN1gCi6lgDEKuU5nmpkn5qiy+PJsTQmfkaFyZJakaEoEzoOrSpMFyTpd48iIU5lNiQ6FB49LTU2zMyCNMrqEXIciL0WT606zOfJna+ela92qfLFMn8lWgIwMZUoS60y3qKczRugi7L0JsfEIXLCwYYoJgbLBYNlgvqLRG4i+oolHaH9gEhQDe2Q8Mh1uWibWWRGP5PgiDOA/bNw5jAQSMjkWnFoLA+MuZegEj0Ufl6bMBiaiQGHibXPnEBB4rqKcMLkV+mQ9bbSraE06MzsxQ9cvMjpci+GB4Ghxtc6/NG9A/y9RvViA9e8CmMm/L8HkspzIMjZbfQ4QYFXJjZSTc9Ow3QugPjkNMyfOkXfqlTo9pYzDNbReo8c1OJYEG2mftus9o9a3V3hJJ1x4pZTc5lAloLAFqdB4xEUm6LSwxvx7+GP1vTvXkUG3Ra0YFuS/QuC3p2dlTtGK03jY/CutYd/PV7zlvedg0fPGmoSOruhRGw5Hf9N5Mm/d4vEWxeIfrhxeTsW3T353eNrfHaV2e/7RhWEfR4wf/GG1YXbx3W+NsUW9ve6jA3N/+DEyyrZi6V+nRB4fM/nbH08eKjqeddpOf/mXa6/kbhtiaOmKsIVuO2zoRu5blfaOY+zbU7Ia70g1Tt55+9FJMe1dn+CR1Yej7xr63D2XM+ctGT3im4nI4qe39nxgAEtan376wpcjHjw5dy4y/uKEpJejs9MXzj3WvPWbpvbgZ1ffBbCi2Un/CPKunR4RvmXy/H9W7xfto8PR/a0LVj7anH9s15+bFrQcOGO6vKp7/dLzqa3n9JcSKxMrJswpMQ/nPnvdSCRN6+gQb5e+uG/b6Zr2mpGD3h2Vt6zjvY5P9gTXjG0eySKnXi6ckx9l4lPjh0Rtfs5TPfbNybOOVG5vWPvG1UHntntD9buouxtPpLku3hkuKNrP1oVuu3T2snG4Imnf4IaWz//Q2WDe++62gxdWZG7440L2koHHmWnee8OL5nuWWcAslbT2+x27N74afZp8n9j71Na8V/OGD/NsPjjm2+wvxSuFo3vWHAgZm3wRn7Xr0Mq27iV7S0K7l15tLWk++gheoI7dRP5Nt8ETGfH5zhHHR+4e2v1Chqi4Xd3J99iQD74oHWVqm7/3+ANhmxbF+fMdErTE8LwicXBQ0K2kiIOMt4YixvSbybqdzuvDJNyRYGeE73v5oYUinT9LEhlIylCfgCXRnaHOM2ebzLwmOys7NVFTlmMuNRan/xImSQp2twsigVbQysLrHK8QNSCFaC/+QrQK9RG8/rDRGT53RTfLVihuuOfD3B+65Rdg/J0wnwsK+50y/w9Q5nrKT0DkzuChv0UC8itQg5sOERrdf3iIGP3vDxFq/P/oEKEilL+RQ4ROe+sPEVZSAwirmtbGaa1KZZyGtuo1gKK0ekKr0ZBW8OsfIm4BD1XbAKG7dTw0uPkGD11pTmQhBd3XHT747dkff1j9zHeOly/URC8P3b8zFcnqmtm57tMRXY2ZYd99NWTCUXPHqAupSzdvPvkm/5xq3PjI9pHm3cvd510HesKqnzj8eXlP3UNTFkw9E7Xl2oGzUVM121d91IUSDddu22ApyN9g4B6TzwzrDlo+btsjYzZe8uwtDC8LOSEHb3Q8dX5+csXSu759T9j1+vrUBVWVtt2LI7ecWRb1bFjSlOEfdb0TUb42fkfhvgzDk9kapGz9hHE0svEB85En5Weo9j9NM1YfuvP9gy8i1c/veQI5sSwh9YGJ+PIP469IGvuQhu8bVosfPBwfslZ8ZdKEqB+mbpLfOVJT9+rQxk/vaWw6STqPBU1aPYtF16wP8byxuHzshguWJ9ZMzTrfTl8bcvir5RuzMzN/NH18NoKyAu/XF21fVxcwC8M7dZPNxXUP4yPF4qJLoUwo3fbao5UVpsqVT7Vcjbp624nIKY8/WBX0+DX3OQabZnjr0eaiSffdcWzYJ3HF0RHjau141vZz1RPv/yyHvvBGZFHIoaH3flV/GexSvFjfMkm8vy3n8tBeroh3rGNT4UHhX2E+8rk=

View File

@@ -0,0 +1 @@
eNqVVnlwE9cZtzkSh1JKmmYgmTGoGwYS4pV2dUuOGx/CxgRbxhb4Koin3Sdppb28h2TZ9UyBkskE0mYHSBNKpg22ZXANBkw54zQUCMz0dtzO2FCTpslASupp66QZmkD6Vge2Y//THY32Pb3v+H3f+33fp+09MSjJjMDn9jG8AiVAKWgja9t7JNiiQln5QZKDSligu2q8db5OVWJGVocVRZTdJhMQGaMgQh4wRkrgTDHSRIWBYkJrkYUpM10BgU6MzilsxzgoyyAEZczd3I5RAnLFK6m1khAh5sYU2KpgBemXG/NAmZKYADQoYWiIQ4BekoHh0ZaRDQyHDLmxjoL7yqlf/KrEIguTa3c7lnphWbyqyAqANsaZKMNBmgFGQQqZ9J2o73TcHMJsUsIqFzDRJpo2VQRFPM7ICK/M8DgHaEYWeByhwXmgqBLEAwKQ6Dhgo8aIGDKZbXZCbMX/Py2so2NzASYJrB6JKkMJ0/ecQEMde0hUcKuRRIHJigQBh7mDgJUhypQgsPLUDAZVPnV3SPT+EqWAB5x+mkmiX1dDEnQqwWJaaNZ0IyERSEgZkULWDYkSumtJYWBql5VCS8irCFYzJqs8n0BqFCuotL6QAIN+QMFkEKIIGF6PFx0hcjESpHW9rKlJQSEQgZSCBDs2d/SEIaARhLGcxV1hQVa0o9Np1w8oCqIkQZ4SaGReOxJqY8QCAw2DLFBgL7oFHqayofVGIRRxwDIxmExraceAKLIMBfRzUwRdU1+Gm7gOZeZxr35BOGIYr2gnvQhESaWpJoHqgzeQRgdhJI614rKC4mYR33EWIDxJMXV+fuqBCKgoMoJnak9LppWPTpURZK27ClDeumkmgUSFtW4gcXbrwNTfJZVXEKu1nrKame4yh/fd9ViMJIk+x6dZlhM8pXWn6HV6mjZUpAROCciI9gaRpAQhykBtNPdBv58K+gNcUZ3fEvNbAhFPzMFyld6AnWcb1Von79vokNpCzFpbiTGyaf26al+8CicdVqvLaXcRZpw0EkaEArdXcrF4bcOGhMjzmzzlDRTXsq6lPFASLfFz3tKm0rWkTfFtiK6vMEf9PlaqdoQ50OJItFRG1pWysKKkwZMI0eIGX4uDV8iGoLVGoBWy1Ot0rPfVVjdWEy22RiHibamzSaFCA4Ksxhi6iKvdaC8JcYlgIF6/NlaxRoow6yo3EZsIm9PjquQCpRba72+CSn1TaApmh43AiSxswuok9OdoljIs5ENKWOu0OchDEpRFVP9wRxIlUlHl7V2InvA3V3syjfCg97lJZj/W5UFU1QbLJabAQDoNJaJkMBNmm4G0ui02N2EzVFT5+soybnyzMvO4TwK8HETsXJOthB4qrPJRSPeWzVoDg3oNoPvV4aP2g8NWUZAhnkGl9TXgtekRgFd6BtIFh6OOCXimLeVWG0wVQ7ytNU5TKk2HY3GOcLVZLaiTqFTwZEYF9Q3dDQKEc7LWZbbbzEczR1k+9qJgCZxEuSXfasUllAuW4RiU0NQ3w4sq+tb7OioJm57yczOlMuMKedAlzswUUIQoRIMtmb60EWyqhAQ5xHYd4nRnVhd63ppdctKhLvXm7EJZpw6Xy2y2fsWSDL8SXCfJyedmykx6Is2cfGamQMZLl83JyX2tWXmcobWRFWjjt9sdpJ12WkiHy0zazRQBbAGCouwOGyRsdpfjrN6ZKWRHp5AoSAouQwqNeiWhjRRwoFVveEUW0maxo8wVokFMsSoN69SAR9ADlQsNogT12dpPBXEKUGjSpVmv9Xgaq0uqKstONeBT6Yt7xfTfjB5ekHkmGEzWQQmxQetNDQ/UuSWYLCvHa0satZMui9kKzRDaKcJqpUknXop6YtbafbJ36W2/B7AIe4zSBsKWIsxttVqwQgMHipx2K0Gk/oxsS6an0OXcL5bvystJPXN311UJ14iFg5/X5z2zaueecy3vN7ePLmo+PWeQfmrhmiRnO7HiFrV58Qe//MYrj1yrtGz9ztLty5bf+WTtMqz0+0tefnirTe37Y+C/nwxuudS/8e6/N+7/jDv+zs3lQ6tuDw+c9YIh8/B/8l8rHBh/NXKy+Mq+j7cGX3azA78bWtGIL/m586HizgXfwo8M8fuv/zY50XqPeGnPRwUuMnr5saarV/tfWFy66K83Invn39u2+9Keiaodfx5a8O3bO/JyOz3tc4eb+Px9i1bOvbpXebJ5ZOhr5JwDtVjI98Ib4mf55TcvPLnas+WZic//8rcrv/59/b+4R5/t45g/jT93uOLTc8Pt+fP2bTkx9rOHx26E6i/n5Vb+5O27xX+PWBNPT7zozaupu/NAd/QXF4H8w7HiU7m3u68L7+Ut6aKU21/YzZe2foQ/+tPXD9aurmi+cKH39CPSx+cxX/dLo6NPr977Xan55gOHv/6j/PZlr++c99B7ptEFo4fKz18/nP/8wYGRwFj3TjBe/s/357k/PLT03aEPPyh/583iisSt7z3L7j7x+K5thQvzP51f/4Th8cF7AweazxY+eKbv3qq7OcTzo0CL7pgf+rF32a8Wnmq/039s7KZhacH+GwfAuHP/qSfmvzK8i6feHj9ycaXpbN8fwhcvjhYfnvjH3W++u2Js5VOeK7dM6Jq//HJujrk3ea14Xk7O/wDBk1ep

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
eNrdVn1sG2cZT5sxJg1NXVWl0iTWkzexUvKe787n80eWjcRJvCRNnMZJ1sCQeX332nfxffXeO9txSBEZ7B9GywlEN5WKZkvsYkLarqXNOjIYowiham2lDS376KZOY6qoJm1ITEMp5T3HIYnaf/gX/3F3z/t8/Z73+b3P66lKHllYMfRNc4puIwuKNhGwO1Wx0D4HYfv7ZQ3ZsiHNDCSSQ887lrK0S7ZtE0f9fmgqtGEiHSq0aGj+POsXZWj7ybepolqYmbQhjb+1OT7h0xDGMIuwL0p9c8InGiSXbq8I9riJyJfPRkXb11x/ExlaiLJlhMmzYFCK5rl7CxSGGnrUN9lMrfnWtCnHUr0Aa0KUmKy8faugHVM1oEQXlJyiIUmBtGFl/Z5kepIHXiPA/bbsaGm/5JckfzxjgoKCCWas6ECDkoINHRAgQIe2YyGQNqAlFaCao8fMrJ8LCoxZBP+bl2/y/62cbxHklqHWynEwsny1Fc2QUA1/1rQBb3jl6URkyRvbFoIaETJQxajGA80khPSik1WGDk1WZAQlQtcrDVtmZAPb7vxGCh6HoohIYKSLhqToWffX2ZJiNlMSyqjQRlWCXkc1grvVHEImgKqSR+UVL/cENE1VEaGn94+R8ubqNAVeW25VV72yAOmNbrunEwREW7d/YJycFZ1i6SBHMyeKANtQ0VXCfaBCgqds1vQvrVeYUMyRIKB+Dt3yivP8ehsDu7N9UEwkN4SElii7s9DSBP7U+nXL0W3CBrcSG7g1XV25li5AswzNn9wQGI/rojtba8TZDc7ItsaBaJAY7jRTFg0jpyB36dNUSsyk0lqrkYh7GyyoMnQgsvhOmAoPjPfE5K6BXEc8m97XGeO0hJTePVAAbIiLBAOBIMcBlmZolmYBcvql3GBAlNsH421DhfDI7t6SmAwybK/V7bAdOb3EBfYGeh0uI/eMFYdjI85jQ5ZhyomOflW3u5VMHg2hIh4Z3tsed5jQiP1YtygXWiiCzskrUqusBVA6nuzXVCFt8bkIEvQunBrtpEfp4fZSfJ80UOht0zvbpGTfOng8HwRMHaHA8GHG+82vckNFetaW3eeD4cAxC2GTHBD0ZJlsme3gqRnCQ3Thz5X69Hsu0btG4aaZDsJJd7HP0JspjqUSok1xDMdTbCgajEQZgYr3Dc3F6mmGbkvBk0MW1HGG0LBzlfIVUXb0HJKqsduSfdEjO+mkB58cT4CKpoERqKNy5/aCwZW5D7o7Tq2cLEBGCtSVUi2tu1hjfaFULEiiI0lyvqAxkRIfUNLIETOn6y6mZXhpCCCgYXeGZyLMfF21SrwqKZYBLAMY9lwRkHOOVEVTyIbWnvXbh/gGyW4v3GpgGzlE7qnySjteXm9gIY0Q1su9FoWPRCK/vb3RaqRQJExafW6jEUbrsbCchhduNahHmGVZQcNzxVUHoEju0oNESDHhtBQgzOE5nuEy4SBMh3hOFNMRGBTFgMC+SIafIpJAXjtNw7IBRiK5a+1xd6lZg0VvyrQG2GBAILW2UIouqo6Ekk66w/CqwC2UaSHvIjge6wIxKJKxnKwx0K10jPa39XXHzuwF66kEEubKPV/RDawrmUw5iSzSGbcqqoYjkXFpoTKJNdg26p4OS0KAC0REQQoJvChxoJ0MotVo/yXejDdrK1Al2POie0oOtPqiPB/wtVAabA0LPMPU/g18r+zVqmfPb8I7fnhXQ+3XqO55pf8dZsviR1/reuShnX27vzoRo6Y3bx32z7bv5F5Tr14tPp09s+3Sv1t6hOv6F7bvuHL4s78tfpK78+gTv2o6qhp/vOfy9ocOHbz40XevXV9+4cae/Y+2vlRaeHHh4/2t1w5t21/9ztClGztOfzofPcF3HLy6U5UPX7rrwHzqQPWDV8898nrJd+fY5YMv0D3Tvzz2l57tH559bjl694TVNAreFxobikeusGn5k1984+vP/u6BRu7wPW+kW+44uvXze/c8tXTyGWrXe+7Ppr79zODNrtfaSneM+8wnr/34lQd/TgmN91//+1b6cfVPb8IPbzx9/Gzbww57ufXiclPq3PsLxtb7ygyNpo99JX9k0zt9P3j8pjPAXt4ye2/p98x9I8OzoafGCh98efJh3J5efrk6/4/Elw43nQ9fnAgl//rm/W8fOXPhn29NPvGvZ8+Xp+9+94uH1Pzg8oWp2Cl7+tVtP3l7+Q+cUN71o66534wsRScaGxpu3mxsCP705MyBzQ0N/wEq8t3d
eNrdVn1sE2UYL0MBJ4ZgAEEEmgNBYG/v+rGuLRJXxpgT18HWgJtBfHv3tr2tvTvug22QQWQoCQh4fMSA8jHWrdJMWBk4BwwVI98BMUCsCyAQGQaMkQXFBJ3vdd3G3PjwX5vLte/7fP2e5/k9T7osvBCJEstzfWpZTkYipGV8kNRlYREtUJAkL68JItnPM6FZufnuKkVkYy/6ZVmQHCQJBdYAOdkv8gJLG2g+SC40kkEkSdCHpJCHZ8p+SEpbTARh6XyZL0acRDiMlMmSQnToEI43FxMiH0CEg1AkJBIpBM1jFJwcl8hlgiaRUamMJfEvB+EUkV72Iwm/S3g9G9T8aBd6CQbRK0R5SqddXIYNJV4RaXzRKVDEAL7W3g6iIxdFCPCQMZSwxWwQMSw08KKP1E6CdiJxckFcFlL2K0EPyZAMQ2Z5BVDCShiwxHIgCBlW4jmAkQAOyoqIgIeHIlMCA8WGIsFHmlKtlFAK/psVUf6/y2de+TxMAJ5BGlo6ABUGATNIBdiMQzIIQBmTjigP+xFkMDMv6QaH/Lwkq9EebNsNaRoJMkAczTMs51M/9S1ihRQ9g7yalwiteYzTWY0UIyQAGGAXovpSIMmQ5QKYg0DGxeEVWd3pynXPz8qek+mqaXeq1kFBCLA01MzJIgyuNsFMoBW9pzii8RfgBnGy2uDsgEnOKsOzw+kpg8VuoOruDx2AGHGNEJcfuF8gQLoY+wGJuVRr2o133a/DS2p1DqRz87u5hCLtV6uhGLRaumUpKpyWqBrOmNUzXELYGS5sNhiN+Il28yyVcbRa7YUBCUU7m9BpEzFRJjOgrIAyNnTzjWSxDNA8DqFWUrs6KhhAnE/2q1Wp1rRPRCQJmDuoogabyYq0LISbiU4dCyd2xI7cmV1UGB6ajhurNs0Q2RS90aZ3CqIeh07VGy0OM35S9Vk57tqMRBh3r42KukXISV7crMwO3oRpv8IVIyaS0StjYkRXxiKOH2CDrAwS6xE3UjuqIQtFUbHxD9UUURBXRosYMtvt9kf4xZVBsrpXyw9QFmC0uRNZmgp7j8NygoLpGd+0CVQ1GiqMa9Ij9buwddiMfwybByA0F8Ym9GaNR60HxGpbPNrkR+t3QUzYTHgcmwdATC2M6Xsz/1f52gONe4jm/YVr19Y/VPuBJYskOg9YRj2If8+njMYMV5bLLOVbTJkSnFZiQaKL87oau/zjpQ45dlGc3ZpdbJzZbjWnMmYPQB4vAyx2Wxqw201G4DGZbIzFZkyzMNaqhSxUI3jI9T6e9wXQbtoLaEjjnd0+g2p4eoHLmZOdUfsGyOM9PCajG2LScjyHavKRiMdejdABXmHwohVRTcYMkOcsUPfazSYLMpkpK4PsNq/NA6bhBdUxjZ3TFtK2dPzfwTt45kV89U2fPWNWDdDFP32Z2c1FI50D/xpa2aLUbZl58tDQJBAc87z/6eDEGVPGevufWWMdm3PhyN3LJ7kXXr2xPj2rhfh5+Vl6VcV7Fxe1Xjn69ZWWXz86unbLkqYv2m6nb7sZ/uNynXPKmpdzo5OePNJvVajRGZm9eg+sqCh1fb9jpnvq7l1zGyZv/Dg2p/nO7976S1zUOSFmG/nZUhe548iKTYc2DFsatb49dXJFTtKC7V8mu4ffTT/93YJ+TvbHyXcvtQ52REfv/fCpAeMHnxnhqLw64bXFwvmU67oLz+2s7F+1QT/k+sDKDyadu9Woc8713Hhmv3uEvHT5xMJB+ed+G2DIvrP/c+eOW+fPHku/dsw95PXt12Kbg/2cyQPde7eNzW+tpd8tuDKx6NazGadOJo9qLH8pr37FprCJeiJ7a+uJ7HWHR2+11amZx/Vnxq3eSs5+a974RaeHjaouaD5OrTyy6Xz65oaVdsuqoz/dnVv6d/P2ze8fOBxaF808vcS3jzmoO7F2Yz0dkHJ2frX/XtNUeIret+X41nrP4qbGpuJv1ZtlLclkQQmfdYEshbfb6LxflJaLF/+8N0ina2vrqyu2XU1an6TT/QMhrxTS

View File

@@ -0,0 +1 @@
eNqNVQ9QFGUUB01TEZNySkvh3EFNYbm/4t2pJB5IJhwEV4AE9N3ud+zK3u66u4cCYolUBpYtCIikph53chFCMpGjNlqTWqOOjaVcWua/STNrpnIsNem74wBN/HNzf3a/937v/d7vvbdX7i6CgkhzbHALzUpQAISEbkS53C3AxQ4oShUuO5QojnSmpWZYtjoE2juJkiReNCqVgKdjACtRAsfTRAzB2ZVFaqUdiiIogKLTypHF3oulmB0szZe4QsiKmFGt0uiisV4XzJhTigkcAzEj5hChgEVjBIdIsJLfIhXzPosEl0rI4v8xYglQJATaChUSBRUkRzjsyN2IlUX3+fceIozIOQQCnfXZHAKDjn3fRqy3Cp60cdZFkJD8FaA7pQjsPANj0CVWVpZblosYcyT0YQgGOEiIa/HpuMixLJRwBkhIJKzMTUFAIiV/DApzUpwoye13qbMdEATkJRyyBEfSbIH8UUEJzUcrSGjzRfEQvoh++WVPIYQ8Dhi6CO5YiosSoFkGiYZLtB1yDkluNqda8pPmv5xodvUEldsAzzM0AXxw5SJEriUgJe4r/W6zxyc4jrrASnJnfC9NZVox6jWrUMXoDDGqtttTMwAxdvF++67bDTwgClEcPDBHsqsH3Hq7DyfKTSmASM24IyQQCEpuAoI9VndHlYKD9RUqu01pd6cLGPvSubUxajV6t98RWSxmCbnJBhgRtvc1oQ/j0ag0WlwVi6vUnXfEhpJQjBMcSiFvVrX2KshAtkCi5K0arWabAEUe7Qdc6UIwySGWO1Ez4aGD7sBQb0ld0D8KY50JqLHynnkCHa1Q6xXxvKBAqacr1DqjVmfUxSqSUiwtpkAay4CNarcIgBVtqFmJvXPjJigHWwhJj2nAifFi/RULKD9D22kJD6wzaqTvVnbqVCqVd/J9PQVoR8r4Mjq1BoPhAXGRMlCSO3z14SodrtZbAlWqFw6ch2Z5BxpP/6MhwMrlY4V4TXugfz+3Xszkh8Dcg6FmoXfKQGi0andRbNL7s0U92L+fYgAz5WEw96AYu9CrGAj+P/l6EkXex/N24Xq8Fff1vqdknkDncZqUd6PrfJVabTInmXXZMxLjs6hkQJloW+JC8/Sd/fE5oQCwdIl/un04b6TWEKudTmqtOLTaSFxn0M/ADQaNGrdqNHpSp1fP0JGxW4toIHvQkisKOK6AgdsJG04AgoJ4zw7K7oRsc3zKfFNLFp7OWTk0jBaAhpblWOjKgAJae9lDMJyDRA9aAbpM8/D0+Gy5w6DV6KAmVg/1ep3eNgPic9EDqncb+7bN6XtK+//NVqCdF9DRl4NgRNWwIP9rMPp0dzMZC7iTqpG3oiqOmN+zlQXFjUpc4t1qKvXWTXw1ev9xLMPz/OGQylvX4ouqi66MWhFcfT2PDH9nhyt2T8flL721y3/89c0/xc5dN67+e6x02d9PLzkdt+7ri3HpDVgY85Vl8pD8Ec+eTZC/s7geq9vdnHJOnbN+T2PuoUhVzZG6qgv/eK07dbX0oKj29dcPnAuzXEotjlu+SDxtCF4tq4ZTr4cGl78aMXvflauhXUfPb8wC/Pf7JlOPXpikyJkz9ASxQPz094vF6WNqvv4rKH5DSXntzpmgdSSz8hpQhDTOppJ+mlDf8GLyN4PPmkd8cuTYeFbzRdH1w5m2yOTIEWDmoPcPVgEpWvFX4+Mlq9a1pYydOXVHhGlL+CtrT62JmzbWeEJ6o2N4WUTX7xNDsleFP/NLeefa/ftOhO9SHTwY3fXEeOvQAx+YXrc/v2fvyLQR5qkbsZqMRYnmcQkvffyouvpkXcKEzp/GlJyqLxvTmt7wuXL/4nWj13QMKWZG57xAnumsWX2rLfP8m+8afsg7fShz91mrrYL/Y24FMYeaNTFzdEZX9aqQ1eGzfjPFnZpzyf1zFqgZtkRzQTuuamJzyE3M+XYluJh2Oq8Ry0l+sqI6qmH8dtw1euqKBKay++zHB7411hmWjeu8/GH1scK9oZWFe9MjsnRntk1yduc56spvlniHNK96bp5IbrKua3Rvqg0LXbxh2KS25tD23EIzMyp081vJp95rNpTmN8y6teZy0jm9LbewK6yqub4+M7OWbrtE/RD5dFNr94Ts8TfePvrZlTdalFEF9U+lpr4W7JulwUFReZXHyUeCgv4Dl57hHA==

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1 @@
eNqVVQlwE1UYTkG5VMQqtMoIIRTQmpfsJmmbZCi2tBwFmnSgaCmF8rL70myz2V336AHWoYjjiBZdULAql6RJibUHVqkn4gUiakcULSA6g6Iz1dGRDohcvk3TAymHmUw27/3/9//ff+6qcBkSJYbn4hoYTkYipGR8kNRVYRE9pCBJXh0KINnH08E894L87YrIdEzyybIgOc1mKDAmyMk+kRcYykTxAXMZaQ4gSYIlSAp6eLrycJy6whCAFcUy70ecZHCShMVmNPToGJyLVxhEnkUGp0GRkGgwGiges+DkqESuFDSJjCpkLIk+nIZsJFEi40F62Yf05Qjih6hnOHxkJD0TwGadhipjLzh6g9ESr4gUvugVKCKLr7Vfp6EnIEVgeUibyhk/E0A0A028WGLWToJ2MuMIAzg3ZtmnBDxm2kzT5lleAZQzEmYtMRwIQJqReA5gToCDsiIi4OGhSJdD1m8qFUrMlpRUQqgA/w+FwzFQkPKhYi05OF39okCCDwVw1VhDVdWSqiU4tTyNtJAoFio0AlaQArBtDsmAhTIup6Eq7EOQxjU/prst6OMlWW25rI5NkKKQIAPEUTzNcCXqqyXLGcGop5FXsxKhNIvRRlEjfoQEAFmmDL1WASQZMhyLqwtknEFekdUdLnd+8aycB2a4Qt1G1WYoCCxDQQ1uLsXkGmI1B1pMl4sjWmcAXEVOVndl9tA051XiruT0hMnmMBHN/V2zEDMOCVH52/0FAqT82A6Idbwa6gY39tfhJbUuF1LuBZeYhCLlU+ugGEi1XRKlqHBaoGo4K+9ydzFhr7uw1USS+NtyiWWpkqPUOi9kJdTSW4ReTMRCWKyASAUEuesS20gWKwHFYxfqNqKxJ4Ms4kpkn7rdRpD1IpIE3GDo0RCGyYq0KoiLiQ7sC8em72X33L5WSAxm48Kq784UGaOetOszBVGPXafoSZvTanOmWPWzcvMbsmJu8gcsVEu+CDnJi4s1o6dvwpRP4fyIjmQN2DEdhr6IReyfZQKMDGKLBxdSO6pBG0EQHZOvqimiAM6M5jFodTgc17CLM4NktVWLDxA2QNrzu6O0OQoH9sNwgoLbM7rDYqxCGivMK/ma+n3cejCTrwMzMMMUorBjykBoPGqXUayzR73dd239PooxzJTrwVyBorWwQz8Q/D/p63aUdBXN/onr1tZfVfuKKYvEKg8YWn0H/y8mSDLLNcuVkuaF4hx/djZX5vJYCly+N/vs480POWZ5tLs1XEeS1ZFqTaGtHoA8XhrYHPY04HBYSOCxWOy0zU6m2ejU7WUMVCN4yPUlPF/CoibKC6KrG3TPoBrOXuTKzM3JaigA83kPj5sxH+Km5XgOhRYgEY+9GqFYXqHxohVRKGsmmJ+5SG11WC02ZHE4aA9B270OCkzHC6pnGnunLaht6eh7txrPvIivPh40afyTw3TRz+Ci+Z9xR4mbz42evnHpO5k7c9sPjW+/o6pozdE1nsy5RS8W1IrZpXdP3Lc7YWfasK/W69d9WDvhzG/2tkM1h8c38uPeOvf32b0bz0y5MOKH3WUHz39//sKRjpQvNjxROnqHx934kOPjuzIWPpBFfttG3rimUZnmGd56tu3tI0z8hCO/N53elfbt2Xf9L7k9SScO3/9lU27N5uSHqwpPE6MOrE7NGzKmuXp0xUebdeEniPsKVq3VbZto/GRv/NjnicVF7+/N/2BQXMLzHUO2TnGT6xJfWvTm8X1vFGWi9oODGlYKJ0zbE+WW9iEvLArfOPWXIrmzfuuGT89lxLX/EhzBLt4yd5x1z9Tm27aufjLPvmCpXrz/u7vSudCGIa1d6Ttaun4k195SDxN+YM8mVdes/3XmzlP13yQsq7u4ddScxsyxw2aOnBAfPP7PTW22FzeamroKs4eOGbS25Smjr+n9abcP77pz4YOqNKf6hrdWOw4kul9ZOW/8HUlxc5c9vbAA7d83vy48NfUZyz2b7t1d+13c2I+edY1Q3Urx7T9PrPQv+ynx2IbWXJ93+Ij0LTnLZp+ccTS58/vDnS2PRR6ZV/jg0tDnnmMFgYzqtaV7MipOtGUkHDSuXJJc9vPSC6kukPv4vPqcos7ZH+yZM6Y85570MfHuxufiqeP7l//5+2tU7Zq7jZtmf/36rpqTvw51/j2vc/bIP06mlW853TSplji38T2uKPnUyOaLXX/9datOd/HiYJ11p+NMzWCd7l+O/AW4

View File

@@ -0,0 +1 @@
eNqVVQtMFFcUhWBNSesPatQaZTtQCHVn/+AutFVYkaDlIyBVROHtzNvdkdmZceYtHxFT0aZRInGMn0YiFllYu0GEQksVq1YtNRVNtaaKMbSptpjaRq1S06TVvlkW0IqfbiY7+969597z7rn3bZW3BIoSw3PBzQyHoAgohBeSXOUV4So3lNCGJhdETp72ZGXm5Da4Rab3dSdCgpSg1QKB0QAOOUVeYCgNxbu0JXqtC0oScEDJY+Pp8svBbAXhAmWFiC+GnEQk6HUGk5oY8iESllUQIs9CIoFwS1Ak1ATFYxYc8ltQuaBYECxD2OJ/JRDzoESJjA2qkBOqSiHAL1HFcHjJSCrGhcMmEJXqYbB/B6Ml3i1SeGPY4BZZvK18JxBDB3ILLA9oTSlTzLggzQANLzq0ykpQVlp8QheujRY53S6bltbStDbVLpCljIRZSwxHugDNSDxHYk4kB5BbhKSNByJdCthizUrBoTXExeuEMvL/oYjKyuWVy3HVeBoqbCkWuGlIGsk4EsM4iEgWIKwUUel1QkBjOfuCJnmcvITktsckOgAoCgqIhBzF0wznkPc7VjOCWkVDuxLFRykR/T0g+4ohFEjAMiWwvYyUEGA4FgtHIlwc3o3kjzMycwtT0/JSMpoGg8qtQBBYhgIKXLsSk2sOyEkqRX/c7FNEJ7FAHJI7k4ZoarPKccNxKp3GZNHoWh9OzQLMuEnw27seNgiAKsZxyEAzy02D4JaHfXhJbkwHVGbOIyGBSDnlRiC64k2PnFJ0c8pBZa816/F0AeNwOq9Ro9fjp+2RyFI5R8mNdsBKsG1YhGGMz6AzGEldPKnTdz4SGyKxnKR4nEKu17UMVZCFnAM55QZjvGmfCCUB9w5c34RhyC1VebCYsOeUNzBYezMXjrTCVM88LKz8xXyRUav0ZlWSIKpw6jiV3pRgxI9BlZqe22wNpMkdVai2XBFwkh2LlTLUN17K6eaKIe2zjtoxvcTIiUWcn2VcDCIDdwoWUlnKHpNOp+uNfqqnCF24MkpGj9FisTwjLq4MRHKHcj5SZyL15tzBUxos+aPnYTjBjdvTfz0FWDUprDCvN57pP8JtCBP9HJjRGRp1+b0xo6HxqD1GsdHszzbr2f4jFAOYmOfBPIGiIb9XNRr8P+UbTBT1FM+HCzforXqq9xNL5gsoTzK0fBj/LtTp9daM1AwjvXgRYpKd4hJudn4GPd92cCQ+vtQBx6z2d7eC640yWuKNcbTRRkKbnSZNFvNs0mIx6EmbwWCmTWb9bBMd31DCANmHh1zl4HkHCw9QdpICFL6zB2dQ9s5bmpGUnmZtXkJm8zYeN2MuwE3L8RxsyoEiHnvZR7G8m8YXrQibrPPJ7KSlcofFaDBBg95mMtvN+LGRyfiCGprG4WnzKLe0/y91HZ55EW99FXwjovrFIP8nhM7+sua7uS//M7lYU3B+8bKSSd9M/lyduiM8eXvi1ei8mLrm33vDfPcTFxxKaTZvvnbv7772t/qPxXaAVq6zb80Dw5XSZRFH1lZMe/vG1J9v3773S46zKjPyjNpwoiY5vOPawffsbM1P9aHWtG3W6bbuXc6BtQOtnxDJRy63dN6rHSiZvmtitXnjB39+/Vf0r3eLucK6+DcHxsQk30xb98JhR8jM1shjq6JyQ7StMXMvhef1W8+MO/nOvi1CVXjFhEhff6hNE5delBrdtd5bty1o+48n4zMmLv308na6qGVD6o7xupX62oHtXX1gz6x1M7wpwX8U1DJ5XTU94y79MM3jk85V7MoxT7hwbElP2Z6YU9fHlAnZsSkfXZ2y9f3VyTNevb4gI+pWgb49M2v8S59NTToYlpjSVSOc7uo5d0etDlXvTYxtHLskLLWAmLll4KaGkk+eTfntUGRs0ZWNZ9ZuUme90lM9fnZEz4mj6e3151/LQQsunp74oXT2IlpYI+vO19vnRmzafLq7sL93ygV0fM2Y6ig0RTOntmhF9+6wfPnbsVs77vjCVgQ39C8Ii+3Ilrjk5fdXVIxXUwO7d0ZufL/kzq3b1nCiu65kUWj/4a7ObQPj3j1+d06e7+ikAl67cw5W9cGDkKD9dd/vnhoSFPQvid/bWw==

View File

@@ -0,0 +1 @@
eNqVVntsHEcZd5UqIEJBEVFLkVqm25AIenPvc+8OFOE6TpqaOIltWhqIktndudvJ7e4sM7M+X4wFsSukCvHYBglERKsmZ5teQxuLKlVUqKo2EqUU2qrQyJUKfaj9J0X8gQQSUjDf7J19NnEeWOe9m5nv8ft9r9mpuTEqJOP+daeYr6ggloKFjKbmBP1mSKV6YNajyuF2c++ekdGToWALn3OUCmQ5lSIBSxJfOYIHzEpa3EuNZVIelZJUqWya3G68ef2hCcMj4wcVr1FfGuVMOptPGEsyRvnrE4bgLjXKRiipMBKGxQGFr+IT1Qj0iaLjCk7ir7KxnUpLMJMi5VBUpwS+BGI+LJlEzAOzZWMysawc74C25KGwYGP5IBQubOtn2VgiFAYuJ3ayzmrMozYjSS6qKb0K9CoFDD2ITUo5oWem7JRtp3ZWAlxnElBL5mOP2ExyHwMm7BMVCopNToRdJ24teTioprKF3nQwjv8/LaBjWMRy6EEdHAjXChY0cKgHWXONyckDmnYnmkRKJhUkZ2VIjdHLhQyRIKBESKQ4gtASeJBQsUroJlBAhHIbyHJ5aDeQTRqozpSD6LhFXRfsojEmmclcphpJpD3IWgOBYQIHpgAIyHRD2lYC0gGYYkKEsm1RIqkEVcDORsQSXAIi1bbTOScClB2mAJZvd4AiKNoGclnVUfFupU2qkUAyrEJhKeZXkUVcDzYUMd0ubQiGzeISbzuJbWjpjqJ2v1UiGwodNnWNecy2wQCvxCvgn2hzMUXsXoa+dJhPEXPd0GOQwCXFqqDUhyfkIgZZ59yGjW5uNQBBwZvPIR02s4gGpj0JwnwNnQtgwAWSFAh3SWAAiVzOaxLw13S+Aioq1FIJQOva3SS1I1QH4xJV2Rh4d3i9Ay6GtSrtumBsjd7jUqc8DNoURdurcgigp1rAoWSMgYTp8rqPiMlDlTRWlN+1NvNI6HlEsCO6m4nSNVmAMAlbgrUDkwdgUkDIdIdaLgltinO4gKFVfKqwSxSky5icAyw2jLC/9GxsOoA7mr9kLD1JLIsGClPf4ppf9MvqERYkkE0r2krL0hbjuRe1apQGmLgQq1+NYwgI810YVljBQACS0WNDe0YP7tx178DQbNtodBpi6HZSlzoM4E51WGPN+NLjlo4Nhp7zVfR03xLM1N4GDFkfpZP5UjJ9eqVrlwDi2SA+f2blQUCsGtjBnQEezbaVn1gpw2U0s5tYe0ZWmSTCcqIZIrze/CqWIvQ10Wiuf++l7jqHy+7mcslMBj7zqyzLhm9FMxXiSjq/nIRlnVY2nc3hdC9OZ55eZZsq0cAWBxfRo+knliII46WqnKiZyWTTvxBUBtC2dHoW9FQop5qQTfryi3Od2+TEnsFuLXy6uR0yG/1mh2AJlCmivkAg8F1AmXw5ly8XCmjn7tFT/R0/o2tman4UppesQLYGlgpnznJCv0btVv+aJbNgdClD01CXeUzhzkUKmdTLqJlPp9MLW64oKagHodEem7lSqXQVuxAZqqKnND+czuNMcbTDMrd/bT/MD0Koz/hO7qCa1agA1xeuKt/FtqSz5Rp0LoMwv39h61ra0GuXQJwpxt7uuLp8F2JHZ+u16FwGYmH/AlpL/X/C13a0+QqSKwPXlkZXlL5syFqdzGNmR7+G3wfTmUz/0M6hwogpB746zO7KkcFhNnTk7rNd+/AmQ3x2JK5urbewOVfqzRXsnImpWbFxvlS8E5dK2Qw2s9minS9m7szbvSfHGIla0OWoynnVpU9aFRy/iuB2D0Zz2+8f6tu9q//U1/AwNzkU4yiBovW5T2dHqIC+j1rxPQ6TVtDZ/h14uO/+6KlSLpunWbNgmul0sVKy8F0woZa6cbnbmnpMx++RR6Hn9YV8bvGz3/toT/y3Dv4XF7/xg77BF7686YHFe376jx0P5QcWTn7klRMDG6PTb33p8MsfeD/714033LZ4YP+Hjz68/oPjN1f6zj9v/qh/5MfP3HJh8J4/v/7eCw//8OLfLpy/WBmfPvrH238+vPndT1w//fnvHntne3nfbz82fduZTYfemR7E7x//1vFBe8sfHj/R+v2J1qG3b79j/vlnUz/ZN/PIyMXEja882Ohd2Lb5pebf7/7KP9/6XbG67dyfXrrhjU+ue63Y+6nSgx/f8O6Z1/KZ/3zn1czGian3N031lb99ZkP51mO9n5kwj+ILbz73xcVd33/kRXbT2bNDm27aFtUW58XEfTefa54/duHfGzTLdT1/Xb9+377renr+C7bLfqw=

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +0,0 @@
eNqVVX1QFOcZPyVNIBPUpGDbyUe3ZzpUw97t3u19Qc4OHHhBQJCvYhql7+2+x663X+zH3QFSjXEytVh1zTRJaeq0ctxZQsAERdHaZExUEtO0TTK1YEpnNE2b6EzaVBtRE/vucRQY/ad7M/fus+/z/p7f8z6/93m3pqJQUTlJXDDAiRpUAK0hQzW2phTYpkNV25YUoMZKTKK2pr6hV1e48RWspslqkd0OZM4myVAEnI2WBHuUtNMs0OzoXeZhGiYRkpj2iQWbOq0CVFXQClVrEfb9TistoViiNm1o7TJEb1YNxjVrYWZENgNVWuFCENNYiMUgQIOCcSIyORXjBIRm7SrEZtenP7XoCm+CzBpFyGV6tM4Q12VeAowtxkU4ATIcsElKq920ZNMyExAQebvG6kLIztgZxh4My3iMUxFvlRNxATCcKok4YoSLQNMViIckoDAxwEdsG+VWu8PlJuQ4/v+tsnZ1rUfMFYlPp6OrULGmvwgSA9P8W2UNpyQzPRGZJBpVTYFAQEYY8CpMb54goyqa6OgrYfN0pVgIGFTjScuSBCupmjE4v25DgKYhAoYiLTGc2Gq81NrByYUYA8M80GA/Yi/CtCqM/giEMg54LgqT06uMA0CWeY4G5rx9I0pvIFNb3CzLrdP9Zlo4qo2oGQdrEImSCnttOxKYiJE2l8NGHIjjqgY4kUeCwXmA+CTl9PyxuRMyoCMIBM+I10hOLx6c6yOpRl81oGvq50EChWaNPqAIbmp47ndFFzWkBiMVqL01XGZyNpzTRhI26uV5wGq7SBt96UIcnrcYako7TksIw/gVkaQlKcJBY/yzlhY63BIS/FJN0NxgN88CHUCFKgct3tr21QF2VW2kLNgaaisPOIQaJlRVG8NJj8PncjpdDgdO2ggbaSNxqK9hInVOmi2tC5Y0xLxNVZUddL2LICuVCp0si4gdDmezs1J3hNnVG+ONgSb9sQZFktmasjW8qFVw4ShsgHG1qbG5NKgTnibtsQqajRVjiJ0e5Rg/KzhhKFi/RuDdIYWK+KBbXKW2rCu3rbM1lnYE25jaWGWJWF7C1FfPoUdRLpzIMHQTlJcwn8EZbfBQbNVYo9fpJfYrUJXRAYFPJdGWabq6NYF0CN8eS2Vaxr6aylkJL02UIU0ax6slsRBzkFgNrWEOwkFhpKfI5SsiSCxY3TAQyIRpuK0EX25QgKiGkQzLZySfolldjECmP3BbsR83xY4qadJHxxOHcVlSIZ5hZQw043XTzRKvKBuePlk4ailA5DrSYY3jadXHOuIxhtYZho3GBMLXQTlRd9Pp8MHMElmRzDCIEC4g7ZI+kiIGM3MzyutH2RI4SeAEeTSOo4MOeU7g0I6m/zM9WzUSLrTdR2510KQIRN09OV2P3851UKCAFGsGn0WhfD7fb27vNIPk8fkchO/ofCcVzuVCOgT1yK0OGYSEyyeoA/EZf5xjjPGHkdHi8RJunxOQTreTIhkv6SF9XsIXIimHz+FyAdcoan4cjXDMcsqSouEqpNEFpbUb44UCiJtdxu8kXU43SrUY3Rw0rzOwXg+VSWYSajEmK9C8CIYCq/AAoFFbrk8r0EiVrVtTUl0RGGnG50oJr5GnL8eUKKkiFw4n66GCCmP007ykM6hdKjCJsOpK1hkHvYzb6UC/cJhyUDTjwEtRI5pB+5/wEmavTQEecY/SxjDr9FuLKMppLcYE4Pe6KYJIX6FPJs1cxdaTC+/8Zne2Jf1k8XXV0jninuPXv7fk6KXzeVP8xAdP/+Jp8NqW3O8cANns9pFDY/seebd6eN/Nrt3PuL8x0b3o03/593RsWmHZEQtnnRrZP3Lx9Q8vPHf99OlznT2TXwxc+vzKtXPrb1yroDbtXfzl0PLhrS9+l7lSrsPRi1m//ns8IVgb3hocvnIjeqh54KMXjX/8ee3YK8mzb+4p+PfHm3w/m6xKni0AZyb3PJ97Y4XFsveUZ9fXH/T35Fad+Vpi2b38GL+h21J1fve9T1k35PX2fFb1+M7VbS9cC1YeKHj/2fuuZk/cd3XR/QsvdZ54+Jf35D+Zuy2HKNr9rVOBnM4hHGt40/NSaf6dC8cfPHX3tp7/YEsblxtZ98dl5SujjWN/W+nGAu/t3r/l7OKF2V1Y3o67Fy3+9h2v9spb/MnBOz7Nznr/hS136W/n/KF/8z8/+fhkfu9Xu/0lP5jMefTqCjj0aPUzPwrlrCyS2vj1F7IKBk60t53549QDPTuEnjc6hSHb5sK1/uyPXju/5ETO5h/+BJPfKxj88I1Xzx/atWFCsObtnDpWPupZdv0Jy42Heh/6svL3l7cf23D4reIfj8vun//lgdzfTR195/LGNutP81/v4/2P/LXtg6K6yyW2d042H8RHh6OPPzHw+cjFzqWn87v4ySN5BQXL7lr7ycihXbl9K6cuLPjTF9HRV87ufPZwd9/yvd2bUcFv3syyNDGD727Pslj+C1CB3ug=

File diff suppressed because one or more lines are too long

View File

@@ -1 +0,0 @@
eNqVVn1sU9cVd4DR0Q2BNm10Slk9M5UK5dnvy3acNGsSO5gQEpskREmqLFzfd5334vfF+4jtMG8t7R8VbJ1epMLUqa1IHbvLXKcosDIYdO3E1mls1dBWmjGBVHUaYk3VMlVlzSp237PdJIJ/5j9877n3nHN/55zfPfcdKk4gTRcUua4kyAbSADSwoFuHiho6YCLdeLIgIYNXuHw81tf/oqkJCzt4w1D1Jp8PqIJXUZEMBC9UJN8E5YM8MHx4rorIcZNPKFz2b2smD3okpOtgDOmeJvejBz1QwWfJRkUwsirCM4+BMoanoTpimUM61IQEchs8cqcRwIPmFmQsCrpbkLA3T67BvWzvLI2ammg7WRaasEpl9NSAm6qoAM6bFlKChDgBeBVtzGdLqi3ZAUgYvM/gTSnh43wc54smVSIt6Bi3LsiEBDhBV2QCIyJkYJgaIhIK0Lg0EFPecXXMR/sDpJoh/j8rTy43gpFriuiEY+pI8zgrksIhB/+YahCsYocnY5HCo25oCEhYSAJRR07yJBVX0faOV0lv0F5TFFFfneukKTuFtn19PrczJQPJUajme9S2tZUqxVCrep7IXWpjq6lAww4wj3THm6phfmiGgCpiTdGeI9m0cT/q0U1Zztq2UFRMzplpQMBLdug1vDhOQbYzZO9iYgoa4hzrmsuVykpiHEEDK+dGckUeAQ7DueranOcV3bDKq1k7ByBEOK1IhgqHj7BeHpsU1AY3h5IiMNAsrp2MnPRYsymEVAKIwgQqVKysV4CqigIE9r5vHBe3VGU2YWO5c3vWLiqBmSkb1skYBtHW6Ytn8fWS3ZTXT3vJVzKEbuDoRXxdCBFgPAXV2T+7ckMFMIWdENWraxUqxuWVOopuzXQDGOtb5RJokLdmgCYF2PmV65opG/guWMVw/M7jqpvLxzFeivSyJ1Y51rMytGYcGr66yhgZWpaACvZhHScLUFFSArIWbo6OwuRoQmpRYlE7wQGRByZAGtsBRhvj2d1hfmc8FYmOJQ50hGkpxiX2xNMEFaRDfobx0zRBeUkv5aUIZPZwqV4G8u290bb+dOPAnq5J2OcnqS6t06QiKXmSZgaZLpNO8rvHM/vCA+aufkxKPhbpEWWjU0hOoH6U0Qf2DbZHTTI4YOzqhHy62Y3RmRMC18JLDEpE+3okMZDQ2FQIBeSd+uhQh3fIu699MnqAi6e72uSONq6vewU8lvUTZBVhgGQbSftXrnFDRPKYwVsvBhj2JQ3pKm4P6IkCTplh6ofymIfo4pvFasOcjnUtU/jr+QjmpHWuW5Eb3DTljkHDTZM066aCTf5QE0W6o939pXD1mP67UvBEvwZkPYlp2FGjfBHyppxC3Gz4rmQ/Z5MdV9KGj5sTgTKqoiOiisoqDRK9laeC6IzMV24WgRsqkIVJ51jrnMP69GQmzUGT4/iJtESGJlkG9w8TJk9WTXCvsI/BgAhJt/IsSTPl6laNeLM4WJKgSIKkfml3BIjvmR2NqmgGoSOIXycjay00SCBjX7IWhvIzAZz5ZvxsQNHkUJ+ZiCgSpqbe7FY1ZL8CZzIEbpdIFCQBV8b5r758GIQfG5++U8FQUgi/kYVKXc+vVNCQ7d4OYtkLGwqFfnV3pZqnYChEk6Ezq5V0tBILRUv66TsVqh7y/pCklzI1fULgrIVvY2G0EaJgMgQDITZBBgAHko0sSNI0pGCSoZgQnAvvJMIA4jepzyGgVYwM9bR1d4Z/MUisZBIRUytfBkVZ0WUhmSz0IQ0Xxpp1+jbulhoqYF+9bUPWyUYuwNBMIpgASchCjibacR+qefucd3m71RaBiGs3Aa15nmnxNLEs42l2S6ClMcCSpPP98Hih0v0v1P30gSNfdDm/teLekdSV1s3fuzw3+MHQvVMNzSeufTrVOvzQk633zjz1sUCMaGfOXi2rez78gW/qka6b2iYmOM+cfWDDY/SPhh+/tPXwB0/88Xzu9tmlj25dOn/s9ZaB2PzsO5fmlq68ENv01ie+HVu/9v3Pdg/3zNYvLu3v/GH6nuFo+/E/nzpNHPv3gvhdYvv0JxsX9r69/vc7XngmO4T+M3LU9/Qbf6rvLg182FznuvWTq9R06vrIy+5YbFfs4jtTw889UvfS9cWnHuICDdcjM1t6D25/r/CXt7a+fjTedHj//bui+yfFDeS6N950f/PtdQ83vtoFUvm166QN20qLrb9Wmlsvu5aef8x/OJ5ff/Lj8G+Cl8PbpPW/PTFfX2p5d0n/0r+ubP7s2R//9xvX7hs7/fNt3zoYfDD5UVv3+9+pu8VIqVtK3Y7ItXt+duE1q37jkcTe+7/813+0bjnzsPXuFxZPvV/e85paqv8Ke3QuUGZvPP3skUCsfCP3/KfPXDg+fWBx+7H3ruczv2vPnaPKM/dNh/6ufvXGeHnLxn1/GJqZyt3c5HLdvr3WtebiqX9eXONy/Q8Vr/3y

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
eNrVVk9v3EQUByQuHPkEI4sTWm+8/+Lsoh5Cqcq/0KKkVVFbWWP72R5iz7gz42y20R4oPSOZT9CSKKmiliJAXKASRw58gXDgs/BmvJu027RFVEIQJbLz3rz3fu/fb3zrYAukYoK/ep9xDZJGGv9RX986kHCjAqVv7xegMxHvnj+3sVtJdvRWpnWpRktLtGRtVTCdtXPK0yijjLcjUSwxnoi9UMSTXw8yoDG6v314SYF0V1Pguv7RnLZ2bjlZ8tqddqfvP1yNIii1e45HImY8rR+kN1nZIjEkOdWw36jr72hZ5iyiBuPS50rww7OCc7CY68NNgNKlOduCexJUiWnAl/tKU12pW3voF37/7aAApWgK31z4aA7uz1fe3L8oaVrQ+h4XbkSjDH4y8ZRy0buWInfX6LZBX+8ue97PC7rVPBdj94JkKeP1nV9O1b7fhKrvvn26fs2W2Oi/v+K+RzWNRepuYDPA/SCuj8gwXg7BH/hd6PdpJxx2BivDOFnxw0HY8f2QHpFT3Z6VEGPJGc1VvadlBT9ssAKru4D54RXXmOfuue2SYeXqu97B/PXRRla1iNchH1JOOkPfI543sr/k/NrGfRMPI7gbkxKe7s4epgL1o3WqW6TbJWtUkq7XHZDO8qg3GPV94+LbuYuPgac6wxL73b3LVE7q/Qbfo4XUEJlQ8FhJD1Zz7a5vRfVRO+udcUb9fs95hxT0THcw7Hqe18p6bnd4imL3MqP1Ic4fSYVIc3hw1vR+Hqj+vqxCTKeFFtsuzsyZZc+O9Rc4UxKr+Mdrd3ac2fY4I8drD9u+77QcxnHmeAQBjm6qnNGOE+YiDJQWOGUQAKdhDrEzMg1pLeowYUBn6z10FOMgKNABbNOizEEFRZVrVlKpF528+ASuHi63hoCyAPdaThYPCJkGkQTbvCBmaqZMcHZQW9JJgU1aNCoxe8FpHqC1etpK9Z6VtQIqo+wpaSbGgdZ5ULG5SJslCDQDGcSVnKGjE1vWXPDUbDva97HNxlzqmaDTn7acsZCbqjQOVCRKMCgDxreYBnWM8abScYC0VeK0m04+iQmdhFQjUuw3kiEe5AlLTfBKwRPVjkuBBHqcSURzCKoyuKHYTcSPk5yCRFieBTrXcp1hyWMV5LiXaNxZnitjMeYBh6LUkxPrPmqNu/lp6+tYEIQTm1jXG/qdQdebTt94Nouvv4jF8Q+laulY6lK2pG7krp0eV02UhsItJVZOLxmOVvp/RPpf3Y9mtKNPZa4H9hpwoxkV/Acuhr9L8ftYdCSl+qDaYpGQ/HTKX2TdvU6v5+3FL6brf4+Nn+Ta17s7TjOSQUZVhhQ5iJejrgdDn3q0F0fLyXAw8DtD2lkZQLgyXPHjYRLBCvX6EHXixPd6/X6SDLv9oQ9hlIRIsAXlLMHBtWQS4UIh3+A4oHNsPAauImQvw+jIBled401ASTP3Ct9QovFxFh8XrXADl9aMr3O95WyOqWyugNkI4vvVl461bndvrfH4vKCN5ctkN/PQcp4XhvGy0sEWlcwwp0nRiXEgcUPRUIsy2DRPowvMt6G9ItHCONEzh0EiZIHJjZzEbTrunChReh73mxP8BLGWxHIa7mqL2CsLCCVqwrX5eEX2zCcEJ19ifLIzAzIllraIFkRW3DwyyEuSMB4TnaE9V2OQbXKJYwLKipDgJVElRCxhoDAwyZg6jmwiNrqI8KoI8axIyPweNg4mZMxUZkKJUGNdW4TmY7y5iCVrMhGVPAFFNSmEQry2XFOCVIZ3i2qTz0RFIsxbSNwvi2umIuEEMUjIYYtyjQnnVWETk6Arye1R69J+1hvQPD3Bx5oD5gsjxG+M9jV+jX8CyBszSNgOhJvbQ41rRRIcgcfTth1tEcGx3lRtNjbWIoHxIjJFUttCo57X0Ea9SNFaGy4yVUX8WPfG50lswmlhi0pt3WxBFMA8C4XEXFC8NlQkWWk9k3fRlkpIqtz45EI/ltgckfUXN1rYZkq3yWquRIuUi5jGGYuyORhmy9eIbA1sHhcMZIPdFkAYtjUVtwfU6BrfOVmAqTn/6awII7JjZ3rqTPHneuufb9N0evIhiFbXp38B114n3w==
eNrVV91u3MYVTnLRi9zlovcDIoDRgFxxydXfGgaqOIbTH8cupAQuYoOYHR6SE5Ez9MxQ67Wgizp5AeYJklqQAsNuirRIUbQBcpmLvoB60WfpmeFSstcbK4ETIBEkcDVnzjnf+fsO9/7xHijNpXj5ERcGFGUG/9Ef3z9WcKcBbT46qsAUMn1w9crOg0bxk9cLY2o9XlmhNR/oiptiUFKRs4JyMWCyWuEik4cTmc6+Pi6Apmj+o4fvalDBVg7CtH+3t51eUM9WwsFwMBytf77FGNQmuCKYTLnI28f5PV77JIWspAaOOnH7V1rXJWfUYlz5QEvx8LIUAhzm9uEuQB3Qku/BZwp0jWHAh0faUNPo+4doF/7zzXEFWtMc/nz9dz24/730y+Mrd2uOKu1XO0Xjk3BIfksFGW6uhyQMx+6XXL228/nNwOIog/76p+GD9zhtH2IIJJcyL+GLm8Fb1NBU5sEO5hKC36TtCYmGaQjra/Eai9jqBMMNV0dsLU43hmtDlqXrX1mzWgcYjFHS2Zcagrc7gO2nb3x508kwfcHOrIbgeu2q1B4LqQXPsr/00t+DyE3RPlhbj75cMHqN3rUVQFkY/nPbKM6MxSgwUcoE28CwuGbWnvgVXsQUXYqHqzHeDS8SLljZpLDdTN6SFZZZXyS1glLS9PA9qmbt0XXFcy5OyILLrbKU0+CyghSxcVrq9tCoBv619Fpno/3k30ul11wT2lz8o4+1hxzckNgTiPxKpmgFARWob6TS5IKGMrtA+oZd0qwXiZx8gP0TaMXIBSEFXDi6oWhe0fYzIQNGWQHHWyU622PtyaCIL3nj0Sj2LpKKXopWNyPMj1/EQbS5RLA8krOqHmKrAPZcA9hzG2SrViQKo1UyjMfRxjhyPffoybo/2/+PL1uEvYf2i7qZoNwnfQ3Xwr/t8AoHaiHJbj7/hMOhUPbfVz7Z9+Y04I09HMlwEHm+h4U2NpkJzmCuvfG+NynlJLG5RdsJCDopIfXGtqb+ogzdABrbjtFQiiOhwSRwl1Z1CTqpmtLwmiqzaOT8G8ghyFIGEsoTJCg1W7wgVZ4wBS5HScr1XJhh+6G0prMK07moVGP0UtAyQW39rJaOvy1qDVSx4pnTQk4TY8qk4f2RsXSQGA4qSRs1R0dnLq2lFLmlLdQfYT9ZdWXmB8PRge9NpdrVtTWgmazBoky42OMG9CnGe9qkCbY0Tqa2lXwaExqZUINIsd44HXhRZDy3zhsNT2U7rSVugtNIGC0haerkjub3ED/2Tw4KYYUOaC8VpsCUpzopsdtQebjWC1M5FYmAqjazM+0RSq25/razdXqQTGYusCjcXB+uRuHBwavfvo62z1tH+IeneuX0NKB8Rd8pA9c9gZ5pA1VQK8ycWbHLRpuf1fZ67bzl8F1peXGXPWJzq2Yp9fxYLP6r5Sy+hKkXt97hENfV89beEdYRea49bvY4k0ocps8j4OGmJeDl/PnYbYaAzXn3xXfFua8A5y2TH26lP7UafvHrfa+boKSgukBGRwM0hM00zMJsNcWPsLkxDGmEn2jMNlMWpxAPJzHFO6thtB6NopDFMYw2YA3iyO6DimJfYnUd9zGcf6RH7DE0jglFxw3DwtsFhOT1vnc6uHjSjanGT3hi8HEZHzfc4Q5yjJ0277bv7U6p6jbWfGLw8/sv7GvbUcW1zuLznHaaLxLd3ILvPc8NF3Vjkj2quCV6G6KX4jjjkKCikXWya59Wlth3cqts5oaSTKoKgxp7WdBV2jsT4ulVpCFB8CXY+SCOenHwfeI2KxBK9EwY+2UBSb6cEZwmhX7J/hzAAXHsSowkqhH2UUBZk4yLlJgC9YWeghqQdwUC1+4I95AiugbGMw4aHZOC61PP1mMnY0Q01QTvyoz0rwvWwIxMuS6sKzkxmE+f0HKKC5a4nUJmslFnoKghldSI16XpgCDj4grUA/JH2RCGcUuFc+VwzUVkMkMMOCKwR4XBgMumcoEpMI0S7qoz6b5GWdAiP8PHuwv2RWiCr0KDW+KWeAeQi+aQsBwIt3SXOtOaZFj6J8N2lfSJFJhvqnc7HaeRwXQRmSa5K6EV9zl0Xm9Q1DaWNm1WET/mvbN55psIZGCbVOry5hKiAfooNBJdRXG7aaa4WzYD8ibqUgVZU1qbQponAusROXtpJ4W7XJsB2Sq19Em9iGlacFb0YLhLX3fkcuDiuG4hW+wuAdKyoc24u6DHt8T+WeMfeAf447/w/L/dIHH9pMffHX2fOf/DvDXGZN/pdqm67X9vbvF735jo07d41Lp98H8ZDhN8

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -1 +1 @@
eNrVV09v3MYVb3voIcd+ggGRQ1MsVyT3n3YLHwRLcJxIlptdpwligxiSj+RkyRl6Zqj12thD3ZwLbD9BUgtSYNhNkQa5tAZ67KFfwD30s/TNcFd/1kpt5FAkggSS8+a9+b03v/eb0ePTI5CKCf7TZ4xrkDTW+KH++PhUwv0alP7spASdi+TJjb3Jk1qyl2/nWldqtLVFK9ZWJdN5u6A8i3PKeDsW5RbjqTiORDL/x2kONMHwnz29o0C6OxlwvfzGzLZ+bjXf8tp+2+8OvtqJY6i0u8djkTCeLZ9nD1nVIgmkBdVw0piXf6FVVbCYGoxbnyrBn14XnIPFvHw6BahcWrAj+FKCqjAN+P2J0lTX6vExxoV//fO0BKVoBn86fH8N7j8/+cXJbUmzki6/5MKNaZzDt2Y9pVyMrqUo3AP6wKBfPul73t82bDtFIWbuoWQZ48vP/36l9d1mqeUXv7rafmBLbOxff+TuUk0TkbkT3AxwbybLlySK++D1aeTFve0g8ob+9nYUdagXRd1h2uvEL8mVYa9LSLDkjBZqeaxlDX+dsBKru4H5q49c4164ew8qhpVbfuGdrl9fTPK6RTyfvEc58YcDj3jeyP6SGweTZ2Y9XMGdzCt4dXeOMRVYvhhT3SJBQA6oJIEX9IjfH3V6o15gQvx5HWIfeKZzLPEgOP6QyvnypMH3YiM1RCYUXCjp6U6h3fFRvHzZzjvXnFG323F+TUp6LegNA8/zWnnHDYZXGJ58yOjyKfKPZEJkBTy/bvZ+vdDy66qOMJ0WejxwkTPX+p6l9e+QUxKr+O+fff7IWXWPM3K89rA9GDgth3HkHI8hROpmyhk9cqJCRKHSAlkGIXAaFZA4I7MhrU0bJgwYbNzBQAkSQYEO4QEtqwJUWNaFZhWVejPI62dg62FzawgpC7Gv5XxzgpBZGEuwmxcmTK2MKXIHrRWdl7hJm04VZi84LUL0Vq96qc53Za2Ayjh/ZTQXs1DrIqzZekibJgg1AxkmtVyho3Nb1kLwzHQ7+ndxm4271KsBv7toOTMhp6oyAVQsKjAoQ8aPmAZ1hvGh0kmIslUh281OXsaEQSKqESnuN4ohTuQpy8zitYJL1U4qgQJ6lklMCwjrKryv2EPEj0zOQCIszwJdW7nOseSJCgvsS3T2+2tjImY85FBWen7u3UWrCbeebWOdDYTR3CYWeMOB3wu8xeKt71bx8etUHP9wVG2djbqUban7hekE7FY1VxpKt5JYOb1lNFrpH5Ho/+FZvJIdfaVyPbfHgBuvpOAHcDC8qcSfYNFRlJan9RGLheRXS/6m6h77/X7/OPmfcu0buf7/qfFlrf3540dOQ8kwpypHiez4fq+f9FLoDLs9PBAB/EHg9/zI2+4P/CAJgmg7HnjpdurHXs/rpb1ub9AbdrtBF5JOL0CBLSlnKRLX9DPDjv/EOWM7WhtuK3zDEY2P6/i4bQcn2JiGos69llPE2IkoVMgjRIWMQcR1jLKHHtMZlc0RsKIgvn/yRmuNbX8dNF7fd9Em6uuyW81qOd93Gb32GDkfi5pQCQQvC1YnsJcVyzgkRAuyvmCSGeoAoWT8m31izrgIT7n2XX4De5gbT8arWhOrW9iPLWKPJYxJ1Jxrc0FFhSzmBNktUQXIowTZjy8LYg82s5Ks0U3nGK4QYkqoNh8E9QEFWxGR2s9mNuUJGnQtuR2kXM1AIpo7HM9TZcdQ6iVRFcQsZaAMjuY9JrwuI7RhwPUJbBzmmKDKDQ4Raax2i9BihmcWsTJN5qKW51ARWykUZqFFFU4Xa5CIwNQyxnIIia11KYEIYeNHAUcUSxyLoi65Tfs8DxvTFtxUkWfnAFkz4ULhbwEKxgpRKiSiLeyUJrAiKfLiYtbaHJAtIjhuAlVT69PAu4RIkczu6KrWdjNXaeX0CEttlcTg1kIUyoY5+x8EEVuWbEA9NGviftjxCFDSGuc2uWQx/4LI0sr5qijIQKzahpdZ+4zUzb6kDK8z5ywwYA/ujCckEXgZxJLkEE8v7mAEuBJgdSGuLWqm2+RmaqaQDLShM0iJmc1yVlycR5sALcQ3k3gj2aCkNs/MHMZ3+V2+e0huHU5QLKeGoXOye7BPzBkH5lKmyC9v3hrvfTBpkTu3d3cmey2yu7e/Z58fHN4moOP2O7bMl2t5l0+EiSJt6gRvT3WRkJ393+58PL7UN3a/XyGOiagAMDHaBDBctQlg7F1hAaspq9CJKVwHKhyfmJ68sNqqDTCswtKWdN2blr5nfGoAtO3dGqUhPKKS2SEjaqvuR6PtICNba0EKGxagLqVuc444C/y592ZxFovzizVOuLf4Lw/1m1w=
eNrVV01v3MYZbnvoobceeh8sCrgplityv/Rh+CBYgutWstRonTqIDWJIDsmJyBl6Zqj12tChbv7A9hcktSAFht0UaZGiaAP02EP/gHrob+kzw119rJXISNpDDRnkzjvvO8/79bzD5ycHTGkuxXdfcWGYorHBD/3b5yeKPa6ZNh8dl8zkMnlxZ3P0olb89Me5MZVeW1qiFe/okpu8U1CRxTnlohPLcomLVB5FMpn8/SRnNIH5j17e10x56xkTZvonu9vpedVkye8EnaC//Nl6HLPKeJsilgkX2fR19pRXbZKwtKCGHTfi6R9oVRU8phbj0odaipe3pRDMYZ6+3Ges8mjBD9iniukKbrDfHGtDTa2fH8Eu++c/TkqmNc3Y73Z+MQf37+/86GTzScWhMv1ylNdt4gfk51SQYHXZJ76/5v7Ine3RZw88i6Pw5ts/8V+8x+n0JVwgmZRZwT5/4G1QQxOZeSPEknl3k+kpiZJuEKU9libdwbAXp92BHyVBN0qX/X7SH0ZfWrNae3DGKOnsS828nzUAp5/89IsHTobweaNJxbydymVpeiKkFjxNfz+XbjGRmXz6Yrjc/WLB6DZ9YjMAme//Zc8oHhuLUSBQynh7LEZyzWR62i6xESG61QsGPez1bxIu4qJO2F4dbcgSadY3SaVYIWly9B5Vk+nxjuIZF6dk4cj1opBj77ZiCbBxWujpkVE1++uV2xob04//dqV02xWhjcWf577OIXu7EjUB5JupoiXzqIC+kUqTG5oV6Q0yL9grivUmkdGHqB9Pq5jcEFKwG8e7imYlnX4qpBfTOGcn6wUOO4inp528d6u11u/3WjdJSW91B6tdxKed97zu6hWCqz05z+oRSoWh5mqGmlsh65UiXb87IEFvrbuyhhfU3KuLeX+z/l/ftgjnJ0w/r+oI8jaZ53Do/3HESzTUQpBdf/4azaEg+9f3Pn7WmtFAa62FlvQ73Va7hUQbG8wQPZjp1tqzVlTIKLSxhe2QCRoVLGmt2Zy2F2U4hsHYXg+GErSEZiZkT2hZFUyHZV0YXlFlFo1cvwMcApYyLKQ8BEGpyeIGqbIwVszFKEy4nglTlB+kFZ2UCOeiUgXvpaBFCG39ppbufZXXmlEV52+s5nIcGlOENZ8vGUsHoeFMhUmtZujoxIW1kCKztAX9PurJqiszWwj6h+3WWKp9XVkDOpYVsyhDLg64YfoM41NtkhAljc7UNpOXMcFIRA2QIt/oDmwUKc/s4bVml6KdVBKT4MyTmBYsrKvwseZPgR/1kzEFWL4DOpcKkyPkiQ4LVBuUg+FcmMixCAUrKzM51+5Das3NdztbZwthNHGOdf3V5WDQ9Q8Pf/DV42jvunGE/1jVS2erHuVL+nFh+wN9pSfasNKrFCJnluyw0eb/anr98Lrh8La0vDjLXsUzq+ZK6vlfsfg7V7P4FUy9OPWOguFw+HVj7xh5BM9NT+oDHksljpKvJeC+JeCr+fO1mwxePOPdbz8rrr0CXDdM/nsj/dJo+P7zZ62mg8Kc6hyM3guCwTAZpKy32h8EKyuMBcvdYBBE/spwOegm3W60Ei/76UoaxP7AH6SD/mB5sNrvd/ss6Q3sYCkp6hLZtfTDQVAftM6aE9KmFTXesGLwuI3HrlscgUdsR7UetVtFDOIAr6I4gQqZAOI6RsVAY39MVTOxZh2D9w/e6qw9RwfbjdY3PbSxep13s13t1jc9xsw11lrvy5pQxQiurY7WQD2aZ4IlxEgyv9iTMWiLULL3yy1iR3KEodx5KO6AcoTV5KKqDXE0iyZvEzdFYZPoiTD2wwCEXkwIOkehEcmzBNyBl0Pi5rA9SdVQMznMFVLuE2rsDwI6w3zRRKbuZ7ObigQCUyvhFlG0Y6aA5r7A+NduDZNJEV2xmKecaYujeY+JqMsIMhicXxiswgQO6tzikJFBtNuEFmOMWOKmCpnIWp1DBbZSanhhZBXuH85BAoGNZYxwSIXWuuRABNj4UbADihDHsqhL4dw+98PZdAG3URTZOUDebLgQ+HsMZDRDlEoFtIXb0hjWJEVdXPTa2HneJlIgCVTvO50G3iVEmmQuo7NYu2TO3MrpAULtiMTiNlIW2pk5+/YDYlclC1B37JnIh1uPwBPjRrlDLknsp58q3YyYBQUViKgtaNmzz4q6yUvKcfs6rwILdvv+3ogkEjdahCRn8f7FDEYMJzFEFyTnUHPTIXdTu4VkzNhyZkrBs3HOi4v7aGOgDXxjsCNbKEljn5m9OzwUD8XGDrm3MwJN79sKnZCN7S1iRzKzd0hNfnL33t7mu6M2ub+7sT7abJONza1N93x3Z5cwE3fecWG+HMuHYiStFeVcJ7js1UVC1rd+tf7+3qW+cfl+o3CsRc0YHKONAVurzgHY3pAOsN7nFZS4xjmswvrI9uSF02ZtALMaoS3pvDdd+Z7VUwOg4z4FQA3hAVXcLVlSm3U/hK6DLG3NCSlsqgC8lHrNHGkd4t+jt7NzeHj+HYANjw7/AxNYKrM=

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -6,5 +6,5 @@
- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
- `BaseLLM` methods `__call__, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
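A minimal migration sketch (the `ChatOpenAI` class here is illustrative; any `BaseChatModel` or `BaseLLM` subclass migrates the same way):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o-mini")

# Before (removed in 0.2.0):
#   text = model.predict("Say hello")
# After:
result = model.invoke("Say hello")  # returns an AIMessage
print(result.content)
```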

View File

@@ -15,7 +15,10 @@
* [Messages](/docs/concepts/messages)
:::
Multimodal support is still relatively new and less common, and model providers have not yet standardized on the "best" way to define the API. As such, LangChain's multimodal abstractions are lightweight and flexible, designed to accommodate different model providers' APIs and interaction patterns, but are **not** standardized across models.
LangChain supports multimodal data as input to chat models:
1. Following provider-specific formats
2. Adhering to a cross-provider standard (see [how-to guides](/docs/how_to/#multimodal) for detail)
### How to use multimodal models
@@ -26,38 +29,85 @@ Multimodal support is still relatively new and less common, model providers have
#### Inputs
Some models can accept multimodal inputs, such as images, audio, video, or files. The types of multimodal inputs supported depend on the model provider. For instance, [Google's Gemini](/docs/integrations/chat/google_generative_ai/) supports documents like PDFs as inputs.
Some models can accept multimodal inputs, such as images, audio, video, or files.
The types of multimodal inputs supported depend on the model provider. For instance,
[OpenAI](/docs/integrations/chat/openai/),
[Anthropic](/docs/integrations/chat/anthropic/), and
[Google Gemini](/docs/integrations/chat/google_generative_ai/)
support documents like PDFs as inputs.
Most chat models that support **multimodal inputs** also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
The gist of passing multimodal inputs to a chat model is to use content blocks that specify a type and corresponding data. For example, to pass an image to a chat model:
The gist of passing multimodal inputs to a chat model is to use content blocks that
specify a type and corresponding data. For example, to pass an image to a chat model
as a URL:
```python
from langchain_core.messages import HumanMessage
message = HumanMessage(
content=[
{"type": "text", "text": "describe the weather in this image"},
{"type": "text", "text": "Describe the weather in this image:"},
{
"type": "image",
"source_type": "url",
"url": "https://...",
},
],
)
response = model.invoke([message])
```
We can also pass the image as in-line data:
```python
from langchain_core.messages import HumanMessage
message = HumanMessage(
content=[
{"type": "text", "text": "Describe the weather in this image:"},
{
"type": "image",
"source_type": "base64",
"data": "<base64 string>",
"mime_type": "image/jpeg",
},
],
)
response = model.invoke([message])
```
To pass a PDF file as in-line data (or as a URL, as supported by providers such as
Anthropic), just change `"type"` to `"file"` and `"mime_type"` to `"application/pdf"`.
See the [how-to guides](/docs/how_to/#multimodal) for more detail.
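A sketch of the in-line PDF case, following the same pattern as the image examples above (the base64 payload is a placeholder, and `model` is assumed to be a chat model that accepts file inputs):

```python
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "Summarize this document:"},
        {
            "type": "file",
            "source_type": "base64",
            "data": "<base64 string>",
            "mime_type": "application/pdf",
        },
    ],
)
response = model.invoke([message])
```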
Most chat models that support multimodal **image** inputs also accept those values in
OpenAI's [Chat Completions format](https://platform.openai.com/docs/guides/images?api-mode=chat):
```python
from langchain_core.messages import HumanMessage
message = HumanMessage(
content=[
{"type": "text", "text": "Describe the weather in this image:"},
{"type": "image_url", "image_url": {"url": image_url}},
],
)
response = model.invoke([message])
```
:::caution
The exact format of the content blocks may vary depending on the model provider. Please refer to the chat model's
integration documentation for the correct format. Find the integration in the [chat model integration table](/docs/integrations/chat/).
:::
Otherwise, chat models will typically accept the native, provider-specific content
block format. See [chat model integrations](/docs/integrations/chat/) for detail
on specific providers.
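As an illustration, a sketch of Anthropic's native image block passed through unchanged (the structure follows Anthropic's Messages API; the payload is a placeholder):

```python
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe the weather in this image:"},
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/jpeg",
                "data": "<base64 string>",
            },
        },
    ],
)
response = model.invoke([message])
```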
#### Outputs
Virtually no popular chat models support multimodal outputs at the time of writing (October 2024).
Some chat models support multimodal outputs, such as images and audio. Multimodal
outputs will appear as part of the [AIMessage](/docs/concepts/messages/#aimessage)
response object. See for example:
The only exception is OpenAI's chat model ([gpt-4o-audio-preview](/docs/integrations/chat/openai/)), which can generate audio outputs.
Multimodal outputs will appear as part of the [AIMessage](/docs/concepts/messages/#aimessage) response object.
Please see the [ChatOpenAI](/docs/integrations/chat/openai/) for more information on how to use multimodal outputs.
- Generating [audio outputs](/docs/integrations/chat/openai/#audio-generation-preview) with OpenAI;
- Generating [image outputs](/docs/integrations/chat/google_generative_ai/#image-generation) with Google Gemini.
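As a rough sketch of the OpenAI audio case (parameter names follow the ChatOpenAI integration docs; treat the exact keys as an assumption and confirm against the integration page):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-4o-audio-preview",
    model_kwargs={
        "modalities": ["text", "audio"],
        "audio": {"voice": "alloy", "format": "wav"},
    },
)
response = model.invoke("Say hello out loud.")
# The audio arrives on the AIMessage; for ChatOpenAI it lands in additional_kwargs.
audio_base64 = response.additional_kwargs["audio"]["data"]  # base64-encoded WAV
```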
#### Tools

View File

@@ -92,7 +92,7 @@ structured_model = model.with_structured_output(Questions)
# Define the system prompt
system = """You are a helpful assistant that generates multiple sub-questions related to an input question. \n
The goal is to break down the input into a set of sub-problems / sub-questions that can be answers in isolation. \n"""
The goal is to break down the input into a set of sub-problems / sub-questions that can be answered independently. \n"""
# Pass the question to the model
question = """What are the main components of an LLM-powered autonomous agent system?"""

View File

@@ -126,7 +126,7 @@ Please see the [Configurable Runnables](#configurable-runnables) section for mor
LangChain will automatically try to infer the input and output types of a Runnable based on available information.
Currently, this inference does not work well for more complex Runnables that are built using [LCEL](/docs/concepts/lcel) composition, and the inferred input and / or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the `with_types` method ([API Reference](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_types
).
)).
## RunnableConfig
@@ -194,7 +194,7 @@ In Python 3.11 and above, this works out of the box, and you do not need to do a
In Python 3.9 and 3.10, if you are using **async code**, you need to manually pass the `RunnableConfig` through to the `Runnable` when invoking it.
This is due to a limitation in [asyncio's tasks](https://docs.python.org/3/library/asyncio-task.html#asyncio.create_task) in Python 3.9 and 3.10 which did
not accept a `context` argument).
not accept a `context` argument.
Propagating the `RunnableConfig` manually is done like so:
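A minimal sketch of the pattern, assuming a chat `model` is already in scope (the function name is illustrative):

```python
from langchain_core.runnables import RunnableConfig, RunnableLambda

async def call_model(user_input: str, config: RunnableConfig):
    # Accept the config explicitly and hand it to the child runnable so that
    # callbacks and tracing propagate on Python 3.9 and 3.10.
    return await model.ainvoke(user_input, config=config)

runnable = RunnableLambda(call_model)
```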

View File

@@ -83,7 +83,6 @@ LinkedIn, where we highlight the best examples.
Here are some heuristics for types of content we are excited to promote:
- **Integration announcement:** Announcements of new integrations with a link to the LangChain documentation page.
- **Educational content:** Blogs, YouTube videos and other media showcasing educational content. Note that we prefer content that is NOT framed as "here's how to use integration XYZ", but rather "here's how to do ABC", as we find that is more educational and helpful for developers.
- **End-to-end applications:** End-to-end applications are great resources for developers looking to build. We prefer to highlight applications that are more complex/agentic in nature, and that use [LangGraph](https://github.com/langchain-ai/langgraph) as the orchestration framework. We get particularly excited about anything involving long-term memory, human-in-the-loop interaction patterns, or multi-agent architectures.
- **Research:** We love highlighting novel research! Whether it is research built on top of LangChain or that integrates with it.

View File

@@ -16,7 +16,7 @@
"\n",
"Tracking [token](/docs/concepts/tokens/) usage to calculate cost is an important part of putting your app in production. This guide goes over how to obtain this information from your LangChain model calls.\n",
"\n",
"This guide requires `langchain-anthropic` and `langchain-openai >= 0.1.9`."
"This guide requires `langchain-anthropic` and `langchain-openai >= 0.3.11`."
]
},
{
@@ -38,19 +38,9 @@
"\n",
"OpenAI's Chat Completions API does not stream token usage statistics by default (see API reference\n",
"[here](https://platform.openai.com/docs/api-reference/completions/create#completions-create-stream_options)).\n",
"To recover token counts when streaming with `ChatOpenAI`, set `stream_usage=True` as\n",
"To recover token counts when streaming with `ChatOpenAI` or `AzureChatOpenAI`, set `stream_usage=True` as\n",
"demonstrated in this guide.\n",
"\n",
"For `AzureChatOpenAI`, set `stream_options={\"include_usage\": True}` when calling\n",
"`.(a)stream`, or initialize with:\n",
"\n",
"```python\n",
"AzureChatOpenAI(\n",
" ...,\n",
" model_kwargs={\"stream_options\": {\"include_usage\": True}},\n",
")\n",
"```\n",
"\n",
":::"
]
},
@@ -67,7 +57,7 @@
"\n",
"A number of model providers return token usage information as part of the chat generation response. When available, this information will be included on the `AIMessage` objects produced by the corresponding model.\n",
"\n",
"LangChain `AIMessage` objects include a [usage_metadata](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.usage_metadata) attribute. When populated, this attribute will be a [UsageMetadata](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html) dictionary with standard keys (e.g., `\"input_tokens\"` and `\"output_tokens\"`).\n",
"LangChain `AIMessage` objects include a [usage_metadata](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.usage_metadata) attribute. When populated, this attribute will be a [UsageMetadata](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.UsageMetadata.html) dictionary with standard keys (e.g., `\"input_tokens\"` and `\"output_tokens\"`). They will also include information on cached token usage and tokens from multi-modal data.\n",
"\n",
"Examples:\n",
"\n",
@@ -92,9 +82,9 @@
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"llm = init_chat_model(model=\"gpt-4o-mini\")\n",
"openai_response = llm.invoke(\"hello\")\n",
"openai_response.usage_metadata"
]
@@ -132,37 +122,6 @@
"anthropic_response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "6d4efc15-ba9f-4b3d-9278-8e01f99f263f",
"metadata": {},
"source": [
"### Using AIMessage.response_metadata\n",
"\n",
"Metadata from the model response is also included in the AIMessage [response_metadata](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.response_metadata) attribute. These data are typically not standardized. Note that different providers adopt different conventions for representing token counts:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f156f9da-21f2-4c81-a714-54cbf9ad393e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OpenAI: {'completion_tokens': 9, 'prompt_tokens': 8, 'total_tokens': 17}\n",
"\n",
"Anthropic: {'input_tokens': 8, 'output_tokens': 12}\n"
]
}
],
"source": [
"print(f'OpenAI: {openai_response.response_metadata[\"token_usage\"]}\\n')\n",
"print(f'Anthropic: {anthropic_response.response_metadata[\"usage\"]}')"
]
},
{
"cell_type": "markdown",
"id": "b4ef2c43-0ff6-49eb-9782-e4070c9da8d7",
@@ -207,7 +166,7 @@
}
],
"source": [
"llm = ChatOpenAI(model=\"gpt-4o-mini\")\n",
"llm = init_chat_model(model=\"gpt-4o-mini\")\n",
"\n",
"aggregate = None\n",
"for chunk in llm.stream(\"hello\", stream_usage=True):\n",
@@ -318,7 +277,7 @@
" punchline: str = Field(description=\"answer to resolve the joke\")\n",
"\n",
"\n",
"llm = ChatOpenAI(\n",
"llm = init_chat_model(\n",
" model=\"gpt-4o-mini\",\n",
" stream_usage=True,\n",
")\n",
@@ -326,10 +285,10 @@
"# chat model and appends a parser.\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"\n",
"async for event in structured_llm.astream_events(\"Tell me a joke\", version=\"v2\"):\n",
"async for event in structured_llm.astream_events(\"Tell me a joke\"):\n",
" if event[\"event\"] == \"on_chat_model_end\":\n",
" print(f'Token usage: {event[\"data\"][\"output\"].usage_metadata}\\n')\n",
" elif event[\"event\"] == \"on_chain_end\":\n",
" elif event[\"event\"] == \"on_chain_end\" and event[\"name\"] == \"RunnableSequence\":\n",
" print(event[\"data\"][\"output\"])\n",
" else:\n",
" pass"
@@ -350,17 +309,18 @@
"source": [
"## Using callbacks\n",
"\n",
"There are also some API-specific callback context managers that allow you to track token usage across multiple calls. They are currently only implemented for the OpenAI API and Bedrock Anthropic API, and are available in `langchain-community`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "64e52d21",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
":::info Requires ``langchain-core>=0.3.49``\n",
"\n",
":::\n",
"\n",
"LangChain implements a callback handler and context manager that will track token usage across calls of any chat model that returns `usage_metadata`.\n",
"\n",
"There are also some API-specific callback context managers that maintain pricing for different models, allowing for cost estimation in real time. They are currently only implemented for the OpenAI API and Bedrock Anthropic API, and are available in `langchain-community`:\n",
"\n",
"- [get_openai_callback](https://python.langchain.com/api_reference/community/callbacks/langchain_community.callbacks.manager.get_openai_callback.html)\n",
"- [get_bedrock_anthropic_callback](https://python.langchain.com/api_reference/community/callbacks/langchain_community.callbacks.manager.get_bedrock_anthropic_callback.html)\n",
"\n",
"Below, we demonstrate the general-purpose usage metadata callback manager. We can track token usage through configuration or as a context manager."
]
},
{
@@ -368,41 +328,84 @@
"id": "6f043cb9",
"metadata": {},
"source": [
"### OpenAI\n",
"### Tracking token usage through configuration\n",
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single Chat model call."
"To track token usage through configuration, instantiate a `UsageMetadataCallbackHandler` and pass it into the config:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 17,
"id": "b04a4486-72fd-48ce-8f9e-5d281b441195",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'gpt-4o-mini-2024-07-18': {'input_tokens': 8,\n",
" 'output_tokens': 10,\n",
" 'total_tokens': 18,\n",
" 'input_token_details': {'audio': 0, 'cache_read': 0},\n",
" 'output_token_details': {'audio': 0, 'reasoning': 0}},\n",
" 'claude-3-5-haiku-20241022': {'input_tokens': 8,\n",
" 'output_tokens': 21,\n",
" 'total_tokens': 29,\n",
" 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}}"
]
},
"execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.chat_models import init_chat_model\n",
"from langchain_core.callbacks import UsageMetadataCallbackHandler\n",
"\n",
"llm_1 = init_chat_model(model=\"openai:gpt-4o-mini\")\n",
"llm_2 = init_chat_model(model=\"anthropic:claude-3-5-haiku-latest\")\n",
"\n",
"callback = UsageMetadataCallbackHandler()\n",
"result_1 = llm_1.invoke(\"Hello\", config={\"callbacks\": [callback]})\n",
"result_2 = llm_2.invoke(\"Hello\", config={\"callbacks\": [callback]})\n",
"callback.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "7a290085-e541-4233-afe4-637ec5032bfd",
"metadata": {},
"source": [
"### Tracking token usage using a context manager\n",
"\n",
"You can also use `get_usage_metadata_callback` to create a context manager and aggregate usage metadata there:"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "4728f55a-24e1-48cd-a195-09d037821b1e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 27\n",
"\tPrompt Tokens: 11\n",
"\tCompletion Tokens: 16\n",
"Successful Requests: 1\n",
"Total Cost (USD): $2.95e-05\n"
"{'gpt-4o-mini-2024-07-18': {'input_tokens': 8, 'output_tokens': 10, 'total_tokens': 18, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}}, 'claude-3-5-haiku-20241022': {'input_tokens': 8, 'output_tokens': 21, 'total_tokens': 29, 'input_token_details': {'cache_read': 0, 'cache_creation': 0}}}\n"
]
}
],
"source": [
"from langchain_community.callbacks.manager import get_openai_callback\n",
"from langchain.chat_models import init_chat_model\n",
"from langchain_core.callbacks import get_usage_metadata_callback\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-4o-mini\",\n",
" temperature=0,\n",
" stream_usage=True,\n",
")\n",
"llm_1 = init_chat_model(model=\"openai:gpt-4o-mini\")\n",
"llm_2 = init_chat_model(model=\"anthropic:claude-3-5-haiku-latest\")\n",
"\n",
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" print(cb)"
"with get_usage_metadata_callback() as cb:\n",
" llm_1.invoke(\"Hello\")\n",
" llm_2.invoke(\"Hello\")\n",
" print(cb.usage_metadata)"
]
},
{
@@ -410,61 +413,7 @@
"id": "c0ab6d27",
"metadata": {},
"source": [
"Anything inside the context manager will get tracked. Here's an example of using it to track multiple calls in sequence."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "05f22a1d-b021-490f-8840-f628a07459f2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"54\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" result2 = llm.invoke(\"Tell me a joke\")\n",
" print(cb.total_tokens)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "c00c9158-7bb4-4279-88e6-ea70f46e6ac2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 27\n",
"\tPrompt Tokens: 11\n",
"\tCompletion Tokens: 16\n",
"Successful Requests: 1\n",
"Total Cost (USD): $2.95e-05\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" for chunk in llm.stream(\"Tell me a joke\"):\n",
" pass\n",
" print(cb)"
]
},
{
"cell_type": "markdown",
"id": "d8186e7b",
"metadata": {},
"source": [
"If a chain or agent with multiple steps in it is used, it will track all those steps."
"Either of these methods will aggregate token usage across multiple calls to each model. For example, you can use it in an [agent](https://python.langchain.com/docs/concepts/agents/) to track token usage across repeated calls to one model:"
]
},
{
@@ -474,138 +423,63 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-aws wikipedia"
"%pip install -qU langgraph"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "5d1125c6",
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_tool_calling_agent, load_tools\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You're a helpful assistant\"),\n",
" (\"human\", \"{input}\"),\n",
" (\"placeholder\", \"{agent_scratchpad}\"),\n",
" ]\n",
")\n",
"tools = load_tools([\"wikipedia\"])\n",
"agent = create_tool_calling_agent(llm, tools, prompt)\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "3950d88b-8bfb-4294-b75b-e6fd421e633c",
"execution_count": 20,
"id": "fe945078-ee2d-43ba-8cdf-afb2f2f4ecef",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"================================\u001b[1m Human Message \u001b[0m=================================\n",
"\n",
"What's the weather in Boston?\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"Tool Calls:\n",
" get_weather (call_izMdhUYpp9Vhx7DTNAiybzGa)\n",
" Call ID: call_izMdhUYpp9Vhx7DTNAiybzGa\n",
" Args:\n",
" location: Boston\n",
"=================================\u001b[1m Tool Message \u001b[0m=================================\n",
"Name: get_weather\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `{'query': 'hummingbird scientific name'}`\n",
"It's sunny.\n",
"==================================\u001b[1m Ai Message \u001b[0m==================================\n",
"\n",
"The weather in Boston is sunny.\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: Hummingbird\n",
"Summary: Hummingbirds are birds native to the Americas and comprise the biological family Trochilidae. With approximately 366 species and 113 genera, they occur from Alaska to Tierra del Fuego, but most species are found in Central and South America. As of 2024, 21 hummingbird species are listed as endangered or critically endangered, with numerous species declining in population.\n",
"Hummingbirds have varied specialized characteristics to enable rapid, maneuverable flight: exceptional metabolic capacity, adaptations to high altitude, sensitive visual and communication abilities, and long-distance migration in some species. Among all birds, male hummingbirds have the widest diversity of plumage color, particularly in blues, greens, and purples. Hummingbirds are the smallest mature birds, measuring 7.513 cm (35 in) in length. The smallest is the 5 cm (2.0 in) bee hummingbird, which weighs less than 2.0 g (0.07 oz), and the largest is the 23 cm (9 in) giant hummingbird, weighing 1824 grams (0.630.85 oz). Noted for long beaks, hummingbirds are specialized for feeding on flower nectar, but all species also consume small insects.\n",
"They are known as hummingbirds because of the humming sound created by their beating wings, which flap at high frequencies audible to other birds and humans. They hover at rapid wing-flapping rates, which vary from around 12 beats per second in the largest species to 80 per second in small hummingbirds.\n",
"Hummingbirds have the highest mass-specific metabolic rate of any homeothermic animal. To conserve energy when food is scarce and at night when not foraging, they can enter torpor, a state similar to hibernation, and slow their metabolic rate to 115 of its normal rate. While most hummingbirds do not migrate, the rufous hummingbird has one of the longest migrations among birds, traveling twice per year between Alaska and Mexico, a distance of about 3,900 miles (6,300 km).\n",
"Hummingbirds split from their sister group, the swifts and treeswifts, around 42 million years ago. The oldest known fossil hummingbird is Eurotrochilus, from the Rupelian Stage of Early Oligocene Europe.\n",
"\n",
"Page: Rufous hummingbird\n",
"Summary: The rufous hummingbird (Selasphorus rufus) is a small hummingbird, about 8 cm (3.1 in) long with a long, straight and slender bill. These birds are known for their extraordinary flight skills, flying 2,000 mi (3,200 km) during their migratory transits. It is one of nine species in the genus Selasphorus.\n",
"\n",
"\n",
"\n",
"Page: Allen's hummingbird\n",
"Summary: Allen's hummingbird (Selasphorus sasin) is a species of hummingbird that breeds in the western United States. It is one of seven species in the genus Selasphorus.\u001b[0m\u001b[32;1m\u001b[1;3m\n",
"Invoking: `wikipedia` with `{'query': 'fastest bird species'}`\n",
"\n",
"\n",
"\u001b[0m\u001b[36;1m\u001b[1;3mPage: List of birds by flight speed\n",
"Summary: This is a list of the fastest flying birds in the world. A bird's velocity is necessarily variable; a hunting bird will reach much greater speeds while diving to catch prey than when flying horizontally. The bird that can achieve the greatest airspeed is the peregrine falcon (Falco peregrinus), able to exceed 320 km/h (200 mph) in its dives. A close relative of the common swift, the white-throated needletail (Hirundapus caudacutus), is commonly reported as the fastest bird in level flight with a reported top speed of 169 km/h (105 mph). This record remains unconfirmed as the measurement methods have never been published or verified. The record for the fastest confirmed level flight by a bird is 111.5 km/h (69.3 mph) held by the common swift.\n",
"\n",
"Page: Fastest animals\n",
"Summary: This is a list of the fastest animals in the world, by types of animal.\n",
"\n",
"Page: Falcon\n",
"Summary: Falcons () are birds of prey in the genus Falco, which includes about 40 species. Falcons are widely distributed on all continents of the world except Antarctica, though closely related raptors did occur there in the Eocene.\n",
"Adult falcons have thin, tapered wings, which enable them to fly at high speed and change direction rapidly. Fledgling falcons, in their first year of flying, have longer flight feathers, which make their configuration more like that of a general-purpose bird such as a broad wing. This makes flying easier while learning the exceptional skills required to be effective hunters as adults.\n",
"The falcons are the largest genus in the Falconinae subfamily of Falconidae, which itself also includes another subfamily comprising caracaras and a few other species. All these birds kill with their beaks, using a tomial \"tooth\" on the side of their beaks—unlike the hawks, eagles, and other birds of prey in the Accipitridae, which use their feet.\n",
"The largest falcon is the gyrfalcon at up to 65 cm in length. The smallest falcon species is the pygmy falcon, which measures just 20 cm. As with hawks and owls, falcons exhibit sexual dimorphism, with the females typically larger than the males, thus allowing a wider range of prey species.\n",
"Some small falcons with long, narrow wings are called \"hobbies\" and some which hover while hunting are called \"kestrels\".\n",
"As is the case with many birds of prey, falcons have exceptional powers of vision; the visual acuity of one species has been measured at 2.6 times that of a normal human. Peregrine falcons have been recorded diving at speeds of 320 km/h (200 mph), making them the fastest-moving creatures on Earth; the fastest recorded dive attained a vertical speed of 390 km/h (240 mph).\u001b[0m\u001b[32;1m\u001b[1;3mThe scientific name for a hummingbird is Trochilidae. The fastest bird species in level flight is the common swift, which holds the record for the fastest confirmed level flight by a bird at 111.5 km/h (69.3 mph). The peregrine falcon is known to exceed speeds of 320 km/h (200 mph) in its dives, making it the fastest bird in terms of diving speed.\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Total Tokens: 1675\n",
"Prompt Tokens: 1538\n",
"Completion Tokens: 137\n",
"Total Cost (USD): $0.0009745000000000001\n"
"Total usage: {'gpt-4o-mini-2024-07-18': {'input_token_details': {'audio': 0, 'cache_read': 0}, 'input_tokens': 125, 'total_tokens': 149, 'output_tokens': 24, 'output_token_details': {'audio': 0, 'reasoning': 0}}}\n"
]
}
],
"source": [
"with get_openai_callback() as cb:\n",
" response = agent_executor.invoke(\n",
" {\n",
" \"input\": \"What's a hummingbird's scientific name and what's the fastest bird species?\"\n",
" }\n",
" )\n",
" print(f\"Total Tokens: {cb.total_tokens}\")\n",
" print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
" print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
" print(f\"Total Cost (USD): ${cb.total_cost}\")"
]
},
{
"cell_type": "markdown",
"id": "ebc9122b-050b-4006-b763-264b0b26d9df",
"metadata": {},
"source": [
"### Bedrock Anthropic\n",
"from langgraph.prebuilt import create_react_agent\n",
"\n",
"The `get_bedrock_anthropic_callback` works very similarly:"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "1837c807-136a-49d8-9c33-060e58dc16d2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tokens Used: 96\n",
"\tPrompt Tokens: 26\n",
"\tCompletion Tokens: 70\n",
"Successful Requests: 2\n",
"Total Cost (USD): $0.001888\n"
]
}
],
"source": [
"from langchain_aws import ChatBedrock\n",
"from langchain_community.callbacks.manager import get_bedrock_anthropic_callback\n",
"\n",
"llm = ChatBedrock(model_id=\"anthropic.claude-v2\")\n",
"# Create a tool\n",
"def get_weather(location: str) -> str:\n",
" \"\"\"Get the weather at a location.\"\"\"\n",
" return \"It's sunny.\"\n",
"\n",
"with get_bedrock_anthropic_callback() as cb:\n",
" result = llm.invoke(\"Tell me a joke\")\n",
" result2 = llm.invoke(\"Tell me a joke\")\n",
" print(cb)"
"\n",
"callback = UsageMetadataCallbackHandler()\n",
"\n",
"tools = [get_weather]\n",
"agent = create_react_agent(\"openai:gpt-4o-mini\", tools)\n",
"for step in agent.stream(\n",
" {\"messages\": [{\"role\": \"user\", \"content\": \"What's the weather in Boston?\"}]},\n",
" stream_mode=\"values\",\n",
" config={\"callbacks\": [callback]},\n",
"):\n",
" step[\"messages\"][-1].pretty_print()\n",
"\n",
"\n",
"print(f\"\\nTotal usage: {callback.usage_metadata}\")"
]
},
{

View File

@@ -40,7 +40,7 @@
"\n",
"To view the list of separators for a given language, pass a value from this enum into\n",
"```python\n",
"RecursiveCharacterTextSplitter.get_separators_for_language`\n",
"RecursiveCharacterTextSplitter.get_separators_for_language\n",
"```\n",
"\n",
"To instantiate a splitter that is tailored for a specific language, pass a value from the enum into\n",

View File

@@ -50,6 +50,7 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
- [How to: force a specific tool call](/docs/how_to/tool_choice)
- [How to: work with local models](/docs/how_to/local_llms)
- [How to: init any model in one line](/docs/how_to/chat_models_universal_init/)
- [How to: pass multimodal data directly to models](/docs/how_to/multimodal_inputs/)
### Messages
@@ -67,6 +68,7 @@ See [supported integrations](/docs/integrations/chat/) for details on getting st
- [How to: use few shot examples in chat models](/docs/how_to/few_shot_examples_chat/)
- [How to: partially format prompt templates](/docs/how_to/prompts_partial)
- [How to: compose prompts together](/docs/how_to/prompts_composition)
- [How to: use multimodal prompts](/docs/how_to/multimodal_prompts/)
### Example selectors
@@ -170,7 +172,7 @@ Indexing is the process of keeping your vectorstore in-sync with the underlying
### Tools
LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-buit tools.
LangChain [Tools](/docs/concepts/tools) contain a description of the tool (to pass to the language model) as well as the implementation of the function to call. Refer [here](/docs/integrations/tools/) for a list of pre-built tools.
- [How to: create tools](/docs/how_to/custom_tools)
- [How to: use built-in tools and toolkits](/docs/how_to/tools_builtin)
@@ -351,7 +353,7 @@ LangSmith allows you to closely trace, monitor and evaluate your LLM application
It seamlessly integrates with LangChain and LangGraph, and you can use it to inspect and debug individual steps of your chains and agents as you build.
LangSmith documentation is hosted on a separate site.
You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/), but we'll highlight a few sections that are particularly
You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/), but we'll highlight a few sections that are particularly
relevant to LangChain below:
### Evaluation

View File

@@ -5,120 +5,165 @@
"id": "4facdf7f-680e-4d28-908b-2b8408e2a741",
"metadata": {},
"source": [
"# How to pass multimodal data directly to models\n",
"# How to pass multimodal data to models\n",
"\n",
"Here we demonstrate how to pass [multimodal](/docs/concepts/multimodality/) input directly to models. \n",
"We currently expect all input to be passed in the same format as [OpenAI expects](https://platform.openai.com/docs/guides/vision).\n",
"For other model providers that support multimodal input, we have added logic inside the class to convert to the expected format.\n",
"Here we demonstrate how to pass [multimodal](/docs/concepts/multimodality/) input directly to models.\n",
"\n",
"In this example we will ask a [model](/docs/concepts/chat_models/#multimodality) to describe an image."
"LangChain supports multimodal data as input to chat models:\n",
"\n",
"1. Following provider-specific formats\n",
"2. Adhering to a cross-provider standard\n",
"\n",
"Below, we demonstrate the cross-provider standard. See [chat model integrations](/docs/integrations/chat/) for detail\n",
"on native formats for specific providers.\n",
"\n",
":::note\n",
"\n",
"Most chat models that support multimodal **image** inputs also accept those values in\n",
"OpenAI's [Chat Completions format](https://platform.openai.com/docs/guides/images?api-mode=chat):\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": image_url},\n",
"}\n",
"```\n",
":::"
]
},
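To illustrate the note above, a minimal sketch of the Chat Completions image format in a full message; `llm` and `image_url` are assumed to be defined as in the examples that follow:

```python
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Describe the weather in this image:"},
        # OpenAI Chat Completions style image block
        {"type": "image_url", "image_url": {"url": image_url}},
    ],
}
response = llm.invoke([message])  # assumes `llm` supports image inputs
```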
{
"cell_type": "markdown",
"id": "e30a4ff0-ab38-41a7-858c-a93f99bb2f1b",
"metadata": {},
"source": [
"## Images\n",
"\n",
"Many providers will accept images passed in-line as base64 data. Some will additionally accept an image from a URL directly.\n",
"\n",
"### Images from base64 data\n",
"\n",
"To pass images in-line, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"image\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"image/jpeg\", # or image/png, etc.\n",
" \"data\": \"<base64 data string>\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"execution_count": 10,
"id": "1fcf7b27-1cc3-420a-b920-0420b5892e20",
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image shows a beautiful clear day with bright blue skies and wispy cirrus clouds stretching across the horizon. The clouds are thin and streaky, creating elegant patterns against the blue backdrop. The lighting suggests it's during the day, possibly late afternoon given the warm, golden quality of the light on the grass. The weather appears calm with no signs of wind (the grass looks relatively still) and no indication of rain. It's the kind of perfect, mild weather that's ideal for walking along the wooden boardwalk through the marsh grass.\n"
]
}
],
"source": [
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\""
"import base64\n",
"\n",
"import httpx\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Fetch image data\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")\n",
"\n",
"\n",
"# Pass to LLM\n",
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the weather in this image:\",\n",
" },\n",
" # highlight-start\n",
" {\n",
" \"type\": \"image\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": image_data,\n",
" \"mime_type\": \"image/jpeg\",\n",
" },\n",
" # highlight-end\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "ee2b678a-01dd-40c1-81ff-ddac22be21b7",
"metadata": {},
"source": [
"See [LangSmith trace](https://smith.langchain.com/public/eab05a31-54e8-4fc9-911f-56805da67bef/r) for more detail.\n",
"\n",
"### Images from a URL\n",
"\n",
"Some providers (including [OpenAI](/docs/integrations/chat/openai/),\n",
"[Anthropic](/docs/integrations/chat/anthropic/), and\n",
"[Google Gemini](/docs/integrations/chat/google_generative_ai/)) will also accept images from URLs directly.\n",
"\n",
"To pass images as URLs, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"image\",\n",
" \"source_type\": \"url\",\n",
" \"url\": \"https://...\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fb896ce9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "markdown",
"id": "4fca4da7",
"metadata": {},
"source": [
"The most commonly supported way to pass in images is to pass it in as a byte string.\n",
"This should work for most model integrations."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9ca1040c",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ec680b6b",
"id": "99d27f8f-ae78-48bc-9bf2-3cef35213ec7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and pleasant. The sky is mostly blue with scattered, light clouds, suggesting a sunny day with minimal cloud cover. There is no indication of rain or strong winds, and the overall scene looks bright and calm. The lush green grass and clear visibility further indicate good weather conditions.\n"
"The weather in this image appears to be pleasant and clear. The sky is mostly blue with a few scattered, light clouds, and there is bright sunlight illuminating the green grass and plants. There are no signs of rain or stormy conditions, suggesting it is a calm, likely warm day—typical of spring or summer.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": f\"data:image/jpeg;base64,{image_data}\"},\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the weather in this image:\",\n",
" },\n",
" {\n",
" \"type\": \"image\",\n",
" # highlight-start\n",
" \"source_type\": \"url\",\n",
" \"url\": image_url,\n",
" # highlight-end\n",
" },\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "8656018e-c56d-47d2-b2be-71e87827f90a",
"metadata": {},
"source": [
"We can feed the image URL directly in a content block of type \"image_url\". Note that only some model providers support this."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a8819cf3-5ddc-44f0-889a-19ca7b7fe77e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The weather in the image appears to be clear and sunny. The sky is mostly blue with a few scattered clouds, suggesting good visibility and a likely pleasant temperature. The bright sunlight is casting distinct shadows on the grass and vegetation, indicating it is likely daytime, possibly late morning or early afternoon. The overall ambiance suggests a warm and inviting day, suitable for outdoor activities.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
@@ -126,12 +171,12 @@
"id": "1c470309",
"metadata": {},
"source": [
"We can also pass in multiple images."
"We can also pass in multiple images:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 4,
"id": "325fb4ca",
"metadata": {},
"outputs": [
@@ -139,20 +184,460 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Yes, the two images are the same. They both depict a wooden boardwalk extending through a grassy field under a blue sky with light clouds. The scenery, lighting, and composition are identical.\n"
"Yes, these two images are the same. They depict a wooden boardwalk going through a grassy field under a blue sky with some clouds. The colors, composition, and elements in both images are identical.\n"
]
}
],
"source": [
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"are these two images the same?\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\"type\": \"text\", \"text\": \"Are these two images the same?\"},\n",
" {\"type\": \"image\", \"source_type\": \"url\", \"url\": image_url},\n",
" {\"type\": \"image\", \"source_type\": \"url\", \"url\": image_url},\n",
" ],\n",
")\n",
"response = model.invoke([message])\n",
"print(response.content)"
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "d72b83e6-8d21-448e-b5df-d5b556c3ccc8",
"metadata": {},
"source": [
"## Documents (PDF)\n",
"\n",
"Some providers (including [OpenAI](/docs/integrations/chat/openai/),\n",
"[Anthropic](/docs/integrations/chat/anthropic/), and\n",
"[Google Gemini](/docs/integrations/chat/google_generative_ai/)) will accept PDF documents.\n",
"\n",
"### Documents from base64 data\n",
"\n",
"To pass documents in-line, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"application/pdf\",\n",
" \"data\": \"<base64 data string>\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6c1455a9-699a-4702-a7e0-7f6eaec76a21",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This document appears to be a sample PDF file that contains Lorem ipsum placeholder text. It begins with a title \"Sample PDF\" followed by the subtitle \"This is a simple PDF file. Fun fun fun.\"\n",
"\n",
"The rest of the document consists of several paragraphs of Lorem ipsum text, which is a commonly used placeholder text in design and publishing. The text is formatted in a clean, readable layout with consistent paragraph spacing. The document appears to be a single page containing four main paragraphs of this placeholder text.\n",
"\n",
"The Lorem ipsum text, while appearing to be Latin, is actually scrambled Latin-like text that is used primarily to demonstrate the visual form of a document or typeface without the distraction of meaningful content. It's commonly used in publishing and graphic design when the actual content is not yet available but the layout needs to be demonstrated.\n",
"\n",
"The document has a professional, simple layout with generous margins and clear paragraph separation, making it an effective example of basic PDF formatting and structure.\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Fetch PDF data\n",
"pdf_url = \"https://pdfobject.com/pdf/sample.pdf\"\n",
"pdf_data = base64.b64encode(httpx.get(pdf_url).content).decode(\"utf-8\")\n",
"\n",
"\n",
"# Pass to LLM\n",
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the document:\",\n",
" },\n",
" # highlight-start\n",
" {\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": pdf_data,\n",
" \"mime_type\": \"application/pdf\",\n",
" },\n",
" # highlight-end\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "efb271da-8fdd-41b5-9f29-be6f8c76f49b",
"metadata": {},
"source": [
"### Documents from a URL\n",
"\n",
"Some providers (specifically [Anthropic](/docs/integrations/chat/anthropic/))\n",
"will also accept documents from URLs directly.\n",
"\n",
"To pass documents as URLs, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"file\",\n",
" \"source_type\": \"url\",\n",
" \"url\": \"https://...\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "55e1d937-3b22-4deb-b9f0-9e688f0609dc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This document appears to be a sample PDF file with both text and an image. It begins with a title \"Sample PDF\" followed by the text \"This is a simple PDF file. Fun fun fun.\" The rest of the document contains Lorem ipsum placeholder text arranged in several paragraphs. The content is shown both as text and as an image of the formatted PDF, with the same content displayed in a clean, formatted layout with consistent spacing and typography. The document consists of a single page containing this sample text.\n"
]
}
],
"source": [
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the document:\",\n",
" },\n",
" {\n",
" \"type\": \"file\",\n",
" # highlight-start\n",
" \"source_type\": \"url\",\n",
" \"url\": pdf_url,\n",
" # highlight-end\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "1e661c26-e537-4721-8268-42c0861cb1e6",
"metadata": {},
"source": [
"## Audio\n",
"\n",
"Some providers (including [OpenAI](/docs/integrations/chat/openai/) and\n",
"[Google Gemini](/docs/integrations/chat/google_generative_ai/)) will accept audio inputs.\n",
"\n",
"### Audio from base64 data\n",
"\n",
"To pass audio in-line, format them as content blocks of the following form:\n",
"\n",
"```python\n",
"{\n",
" \"type\": \"audio\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"audio/wav\", # or appropriate mime-type\n",
" \"data\": \"<base64 data string>\",\n",
"}\n",
"```\n",
"\n",
"Example:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a0b91b29-dbd6-4c94-8f24-05471adc7598",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The audio appears to consist primarily of bird sounds, specifically bird vocalizations like chirping and possibly other bird songs.\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"from langchain.chat_models import init_chat_model\n",
"\n",
"# Fetch audio data\n",
"audio_url = \"https://upload.wikimedia.org/wikipedia/commons/3/3d/Alcal%C3%A1_de_Henares_%28RPS_13-04-2024%29_canto_de_ruise%C3%B1or_%28Luscinia_megarhynchos%29_en_el_Soto_del_Henares.wav\"\n",
"audio_data = base64.b64encode(httpx.get(audio_url).content).decode(\"utf-8\")\n",
"\n",
"\n",
"# Pass to LLM\n",
"llm = init_chat_model(\"google_genai:gemini-2.0-flash-001\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe this audio:\",\n",
" },\n",
" # highlight-start\n",
" {\n",
" \"type\": \"audio\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": audio_data,\n",
" \"mime_type\": \"audio/wav\",\n",
" },\n",
" # highlight-end\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "92f55a6c-2e4a-4175-8444-8b9aacd6a13e",
"metadata": {},
"source": [
"## Provider-specific parameters\n",
"\n",
"Some providers will support or require additional fields on content blocks containing multimodal data.\n",
"For example, Anthropic lets you specify [caching](/docs/integrations/chat/anthropic/#prompt-caching) of\n",
"specific content to reduce token consumption.\n",
"\n",
"To use these fields, you can:\n",
"\n",
"1. Store them on directly on the content block; or\n",
"2. Use the native format supported by each provider (see [chat model integrations](/docs/integrations/chat/) for detail).\n",
"\n",
"We show three examples below.\n",
"\n",
"### Example: Anthropic prompt caching"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "83593b9d-a8d3-4c99-9dac-64e0a9d397cb",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image shows a beautiful, clear day with partly cloudy skies. The sky is a vibrant blue with wispy, white cirrus clouds stretching across it. The lighting suggests it's during daylight hours, possibly late afternoon or early evening given the warm, golden quality of the light on the grass. The weather appears calm with no signs of wind (the grass looks relatively still) and no threatening weather conditions. It's the kind of perfect weather you'd want for a walk along this wooden boardwalk through the marshland or grassland area.\n"
]
},
{
"data": {
"text/plain": [
"{'input_tokens': 1586,\n",
" 'output_tokens': 117,\n",
" 'total_tokens': 1703,\n",
" 'input_token_details': {'cache_read': 0, 'cache_creation': 1582}}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the weather in this image:\",\n",
" },\n",
" {\n",
" \"type\": \"image\",\n",
" \"source_type\": \"url\",\n",
" \"url\": image_url,\n",
" # highlight-next-line\n",
" \"cache_control\": {\"type\": \"ephemeral\"},\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())\n",
"response.usage_metadata"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9bbf578e-794a-4dc0-a469-78c876ccd4a3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Clear blue skies, wispy clouds.\n"
]
},
{
"data": {
"text/plain": [
"{'input_tokens': 1716,\n",
" 'output_tokens': 12,\n",
" 'total_tokens': 1728,\n",
" 'input_token_details': {'cache_read': 1582, 'cache_creation': 0}}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"next_message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Summarize that in 5 words.\",\n",
" }\n",
" ],\n",
"}\n",
"response = llm.invoke([message, response, next_message])\n",
"print(response.text())\n",
"response.usage_metadata"
]
},
{
"cell_type": "markdown",
"id": "915b9443-5964-43b8-bb08-691c1ba59065",
"metadata": {},
"source": [
"### Example: Anthropic citations"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ea7707a1-5660-40a1-a10f-0df48a028689",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'citations': [{'cited_text': 'Sample PDF\\r\\nThis is a simple PDF file. Fun fun fun.\\r\\n',\n",
" 'document_index': 0,\n",
" 'document_title': None,\n",
" 'end_page_number': 2,\n",
" 'start_page_number': 1,\n",
" 'type': 'page_location'}],\n",
" 'text': 'Simple PDF file: fun fun',\n",
" 'type': 'text'}]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Generate a 5 word summary of this document.\",\n",
" },\n",
" {\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": pdf_data,\n",
" \"mime_type\": \"application/pdf\",\n",
" # highlight-next-line\n",
" \"citations\": {\"enabled\": True},\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"response.content"
]
},
{
"cell_type": "markdown",
"id": "e26991eb-e769-41f4-b6e0-63d81f2c7d67",
"metadata": {},
"source": [
"### Example: OpenAI file names\n",
"\n",
"OpenAI requires that PDF documents be associated with file names:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ae076c9b-ff8f-461d-9349-250f396c9a25",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The document is a sample PDF file containing placeholder text. It consists of one page, titled \"Sample PDF\". The content is a mixture of English and the commonly used filler text \"Lorem ipsum dolor sit amet...\" and its extensions, which are often used in publishing and web design as generic text to demonstrate font, layout, and other visual elements.\n",
"\n",
"**Key points about the document:**\n",
"- Length: 1 page\n",
"- Purpose: Demonstrative/sample content\n",
"- Content: No substantive or meaningful information, just demonstration text in paragraph form\n",
"- Language: English (with the Latin-like \"Lorem Ipsum\" text used for layout purposes)\n",
"\n",
"There are no charts, tables, diagrams, or images on the page—only plain text. The document serves as an example of what a PDF file looks like rather than providing actual, useful content.\n"
]
}
],
"source": [
"llm = init_chat_model(\"openai:gpt-4.1\")\n",
"\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"Describe the document:\",\n",
" },\n",
" {\n",
" \"type\": \"file\",\n",
" \"source_type\": \"base64\",\n",
" \"data\": pdf_data,\n",
" \"mime_type\": \"application/pdf\",\n",
" # highlight-next-line\n",
" \"filename\": \"my-file\",\n",
" },\n",
" ],\n",
"}\n",
"response = llm.invoke([message])\n",
"print(response.text())"
]
},
{
@@ -167,16 +652,22 @@
},
{
"cell_type": "code",
"execution_count": 8,
"id": "cd22ea82-2f93-46f9-9f7a-6aaf479fcaa9",
"execution_count": 4,
"id": "0f68cce7-350b-4cde-bc40-d3a169551fc3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[{'name': 'weather_tool', 'args': {'weather': 'sunny'}, 'id': 'call_BSX4oq4SKnLlp2WlzDhToHBr'}]\n"
]
"data": {
"text/plain": [
"[{'name': 'weather_tool',\n",
" 'args': {'weather': 'sunny'},\n",
" 'id': 'toolu_01G6JgdkhwggKcQKfhXZQPjf',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
@@ -191,16 +682,17 @@
" pass\n",
"\n",
"\n",
"model_with_tools = model.bind_tools([weather_tool])\n",
"llm_with_tools = llm.bind_tools([weather_tool])\n",
"\n",
"message = HumanMessage(\n",
" content=[\n",
" {\"type\": \"text\", \"text\": \"describe the weather in this image\"},\n",
" {\"type\": \"image_url\", \"image_url\": {\"url\": image_url}},\n",
"message = {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\"type\": \"text\", \"text\": \"Describe the weather in this image:\"},\n",
" {\"type\": \"image\", \"source_type\": \"url\", \"url\": image_url},\n",
" ],\n",
")\n",
"response = model_with_tools.invoke([message])\n",
"print(response.tool_calls)"
"}\n",
"response = llm_with_tools.invoke([message])\n",
"response.tool_calls"
]
}
],
@@ -220,7 +712,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -9,157 +9,148 @@
"\n",
"Here we demonstrate how to use prompt templates to format [multimodal](/docs/concepts/multimodality/) inputs to models. \n",
"\n",
"In this example we will ask a [model](/docs/concepts/chat_models/#multimodality) to describe an image."
"To use prompt templates in the context of multimodal data, we can templatize elements of the corresponding content block.\n",
"For example, below we define a prompt that takes a URL for an image as a parameter:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "0d9fd81a-b7f0-445a-8e3d-cfc2d31fdd59",
"metadata": {},
"outputs": [],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"image_data = base64.b64encode(httpx.get(image_url).content).decode(\"utf-8\")"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 1,
"id": "2671f995",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model=\"gpt-4o\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4ee35e4f",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
"# Define prompt\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\"system\", \"Describe the image provided\"),\n",
" (\n",
" \"user\",\n",
" [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": \"Describe the image provided.\",\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data}\"},\n",
" }\n",
" \"type\": \"image\",\n",
" \"source_type\": \"url\",\n",
" # highlight-next-line\n",
" \"url\": \"{image_url}\",\n",
" },\n",
" ],\n",
" ),\n",
" },\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "089f75c2",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "02744b06",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The image depicts a sunny day with a beautiful blue sky filled with scattered white clouds. The sky has varying shades of blue, ranging from a deeper hue near the horizon to a lighter, almost pale blue higher up. The white clouds are fluffy and scattered across the expanse of the sky, creating a peaceful and serene atmosphere. The lighting and cloud patterns suggest pleasant weather conditions, likely during the daytime hours on a mild, sunny day in an outdoor natural setting.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data\": image_data})\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "e9b9ebf6",
"id": "f75d2e26-5b9a-4d5f-94a7-7f98f5666f6d",
"metadata": {},
"source": [
"We can also pass in multiple images."
"Let's use this prompt to pass an image to a [chat model](/docs/concepts/chat_models/#multimodality):"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "02190ee3",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"compare the two pictures provided\"),\n",
" (\n",
" \"user\",\n",
" [\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data1}\"},\n",
" },\n",
" {\n",
" \"type\": \"image_url\",\n",
" \"image_url\": {\"url\": \"data:image/jpeg;base64,{image_data2}\"},\n",
" },\n",
" ],\n",
" ),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "42af057b",
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | model"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "513abe00",
"execution_count": 2,
"id": "5df2e558-321d-4cf7-994e-2815ac37e704",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The two images provided are identical. Both images feature a wooden boardwalk path extending through a lush green field under a bright blue sky with some clouds. The perspective, colors, and elements in both images are exactly the same.\n"
"This image shows a beautiful wooden boardwalk cutting through a lush green wetland or marsh area. The boardwalk extends straight ahead toward the horizon, creating a strong leading line through the composition. On either side, tall green grasses sway in what appears to be a summer or late spring setting. The sky is particularly striking, with wispy cirrus clouds streaking across a vibrant blue background. In the distance, you can see a tree line bordering the wetland area. The lighting suggests this may be during \"golden hour\" - either early morning or late afternoon - as there's a warm, gentle quality to the light that's illuminating the scene. The wooden planks of the boardwalk appear well-maintained and provide safe passage through what would otherwise be difficult terrain to traverse. It's the kind of scene you might find in a nature preserve or wildlife refuge designed to give visitors access to observe wetland ecosystems while protecting the natural environment.\n"
]
}
],
"source": [
"response = chain.invoke({\"image_data1\": image_data, \"image_data2\": image_data})\n",
"print(response.content)"
"from langchain.chat_models import init_chat_model\n",
"\n",
"llm = init_chat_model(\"anthropic:claude-3-5-sonnet-latest\")\n",
"\n",
"url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\n",
"\n",
"chain = prompt | llm\n",
"response = chain.invoke({\"image_url\": url})\n",
"print(response.text())"
]
},
{
"cell_type": "markdown",
"id": "f4cfdc50-4a9f-4888-93b4-af697366b0f3",
"metadata": {},
"source": [
"Note that we can templatize arbitrary elements of the content block:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "53c88ebb-dd57-40c8-8542-b2c916706653",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate(\n",
" [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": \"Describe the image provided.\",\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"image\",\n",
" \"source_type\": \"base64\",\n",
" \"mime_type\": \"{image_mime_type}\",\n",
" \"data\": \"{image_data}\",\n",
" \"cache_control\": {\"type\": \"{cache_type}\"},\n",
" },\n",
" ],\n",
" },\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "25e4829e-0073-49a8-9669-9f43e5778383",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This image shows a beautiful wooden boardwalk cutting through a lush green marsh or wetland area. The boardwalk extends straight ahead toward the horizon, creating a strong leading line in the composition. The surrounding vegetation consists of tall grass and reeds in vibrant green hues, with some bushes and trees visible in the background. The sky is particularly striking, featuring a bright blue color with wispy white clouds streaked across it. The lighting suggests this photo was taken during the \"golden hour\" - either early morning or late afternoon - giving the scene a warm, peaceful quality. The raised wooden path provides accessible access through what would otherwise be difficult terrain to traverse, allowing visitors to experience and appreciate this natural environment.\n"
]
}
],
"source": [
"import base64\n",
"\n",
"import httpx\n",
"\n",
"image_data = base64.b64encode(httpx.get(url).content).decode(\"utf-8\")\n",
"\n",
"chain = prompt | llm\n",
"response = chain.invoke(\n",
" {\n",
" \"image_data\": image_data,\n",
" \"image_mime_type\": \"image/jpeg\",\n",
" \"cache_type\": \"ephemeral\",\n",
" }\n",
")\n",
"print(response.text())"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ea8152c3",
"id": "424defe8-d85c-4e45-a88d-bf6f910d5ebb",
"metadata": {},
"outputs": [],
"source": []
@@ -181,7 +172,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -127,7 +127,7 @@
"id": "c89e2045-9244-43e6-bf3f-59af22658529",
"metadata": {},
"source": [
"Now that we've got a [model](/docs/concepts/chat_models/), [retriver](/docs/concepts/retrievers/) and [prompt](/docs/concepts/prompt_templates/), let's chain them all together. Following the how-to guide on [adding citations](/docs/how_to/qa_citations) to a RAG application, we'll make it so our chain returns both the answer and the retrieved Documents. This uses the same [LangGraph](/docs/concepts/architecture/#langgraph) implementation as in the [RAG Tutorial](/docs/tutorials/rag)."
"Now that we've got a [model](/docs/concepts/chat_models/), [retriever](/docs/concepts/retrievers/) and [prompt](/docs/concepts/prompt_templates/), let's chain them all together. Following the how-to guide on [adding citations](/docs/how_to/qa_citations) to a RAG application, we'll make it so our chain returns both the answer and the retrieved Documents. This uses the same [LangGraph](/docs/concepts/architecture/#langgraph) implementation as in the [RAG Tutorial](/docs/tutorials/rag)."
]
},
{

View File

@@ -270,7 +270,7 @@
"source": [
"## Retrieval with query analysis\n",
"\n",
"So how would we include this in a chain? One thing that will make this a lot easier is if we call our retriever asyncronously - this will let us loop over the queries and not get blocked on the response time."
"So how would we include this in a chain? One thing that will make this a lot easier is if we call our retriever asynchronously - this will let us loop over the queries and not get blocked on the response time."
]
},
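A minimal sketch of the asynchronous pattern described above, assuming `retriever` and a list of `queries` are already defined (top-level `await` works in a notebook):

```python
import asyncio

# Fire off all retrievals concurrently instead of awaiting each in turn
docs_per_query = await asyncio.gather(
    *(retriever.ainvoke(query) for query in queries)
)
```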
{

View File

@@ -24,7 +24,7 @@
"\n",
"Note that the map step is typically parallelized over the input documents. This strategy is especially effective when understanding of a sub-document does not rely on preceeding context. For example, when summarizing a corpus of many, shorter documents.\n",
"\n",
"[LangGraph](https://langchain-ai.github.io/langgraph/), built on top of `langchain-core`, suports [map-reduce](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/) workflows and is well-suited to this problem:\n",
"[LangGraph](https://langchain-ai.github.io/langgraph/), built on top of `langchain-core`, supports [map-reduce](https://langchain-ai.github.io/langgraph/how-tos/map-reduce/) workflows and is well-suited to this problem:\n",
"\n",
"- LangGraph allows for individual steps (such as successive summarizations) to be streamed, allowing for greater control of execution;\n",
"- LangGraph's [checkpointing](https://langchain-ai.github.io/langgraph/how-tos/persistence/) supports error recovery, extending with human-in-the-loop workflows, and easier incorporation into conversational applications.\n",

View File

@@ -15,7 +15,7 @@
"\n",
"To build a production application, you will need to do more work to keep track of application state appropriately.\n",
"\n",
"We recommend using `langgraph` for powering such a capability. For more details, please see this [guide](https://langchain-ai.github.io/langgraph/how-tos/human-in-the-loop/).\n",
"We recommend using `langgraph` for powering such a capability. For more details, please see this [guide](https://langchain-ai.github.io/langgraph/concepts/human_in_the_loop/).\n",
":::\n"
]
},
@@ -209,7 +209,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"Do you approve of the following tool invocations\n",
@@ -252,7 +252,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"Do you approve of the following tool invocations\n",

View File

@@ -0,0 +1,118 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# SingleStoreSemanticCache\n",
"\n",
"This example demonstrates how to get started with the SingleStore semantic cache.\n",
"\n",
"### Integration Overview\n",
"\n",
"`SingleStoreSemanticCache` leverages `SingleStoreVectorStore` to cache LLM responses directly in a SingleStore database, enabling efficient semantic retrieval and reuse of results.\n",
"\n",
"### Integration details\n",
"\n",
"\n",
"\n",
"| Class | Package | JS support |\n",
"| :--- | :--- | :---: |\n",
"| SingleStoreSemanticCache | langchain_singlestore | ❌ | "
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"## Installation\n",
"\n",
"This cache lives in the `langchain-singlestore` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-singlestore"
]
},
{
"cell_type": "markdown",
"id": "5c5f2839-4020-424e-9fc9-07777eede442",
"metadata": {},
"source": [
"## Usage"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "51a60dbe-9f2e-4e04-bb62-23968f17164a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.globals import set_llm_cache\n",
"from langchain_singlestore import SingleStoreSemanticCache\n",
"\n",
"set_llm_cache(\n",
" SingleStoreSemanticCache(\n",
" embedding=YourEmbeddings(),\n",
" host=\"root:pass@localhost:3306/db\",\n",
" )\n",
")"
]
},
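The timing cells below assume an `llm` is already defined; a minimal sketch of the missing setup, using OpenAI models purely for illustration (any chat model and embedding implementation will do):

```python
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

llm = ChatOpenAI(model="gpt-4o-mini")
embeddings = OpenAIEmbeddings()  # pass this in place of YourEmbeddings() above
```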
{
"cell_type": "code",
"execution_count": null,
"id": "cddda8ef",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The first time, it is not yet in cache, so it should take longer\n",
"llm.invoke(\"Tell me a joke\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c474168f",
"metadata": {},
"outputs": [],
"source": [
"%%time\n",
"# The second time, while not a direct hit, the question is semantically similar to the original question,\n",
"# so it uses the cached result!\n",
"llm.invoke(\"Tell me one joke\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain-singlestore-BD1RbQ07-py3.11",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.11"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,318 +1,316 @@
{
"cells": [
{
"cell_type": "raw",
"id": "4cebeec0",
"metadata": {},
"source": [
"---\n",
"sidebar_label: AI21 Labs\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatAI21\n",
"\n",
"## Overview\n",
"\n",
"This notebook covers how to get started with AI21 chat models.\n",
"Note that different chat models support different parameters. See the [AI21 documentation](https://docs.ai21.com/reference) to learn more about the parameters in your chosen model.\n",
"[See all AI21's LangChain components.](https://pypi.org/project/langchain-ai21/) \n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/__package_name_short_snake__) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatAI21](https://python.langchain.com/api_reference/ai21/chat_models/langchain_ai21.chat_models.ChatAI21.html#langchain_ai21.chat_models.ChatAI21) | [langchain-ai21](https://python.langchain.com/api_reference/ai21/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ai21?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ai21?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"\n",
"## Setup"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"We'll need to get an [AI21 API key](https://docs.ai21.com/) and set the `AI21_API_KEY` environment variable:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"AI21_API_KEY\" not in os.environ:\n",
" os.environ[\"AI21_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "f6844fff-3702-4489-ab74-732f69f3b9d7",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c2e19d3-7c58-4470-9e1a-718b27a32056",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "98e22f31-8acc-42d6-916d-415d1263c56e",
"metadata": {},
"source": [
"### Installation"
]
},
{
"cell_type": "markdown",
"id": "f9699cd9-58f2-450e-aa64-799e66906c0f",
"metadata": {},
"source": [
"!pip install -qU langchain-ai21"
]
},
{
"cell_type": "markdown",
"id": "4828829d3da430ce",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
"cells": [
{
"cell_type": "raw",
"id": "4cebeec0",
"metadata": {},
"source": [
"---\n",
"sidebar_label: AI21 Labs\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatAI21\n",
"\n",
"## Overview\n",
"\n",
"This notebook covers how to get started with AI21 chat models.\n",
"Note that different chat models support different parameters. See the [AI21 documentation](https://docs.ai21.com/reference) to learn more about the parameters in your chosen model.\n",
"[See all AI21's LangChain components.](https://pypi.org/project/langchain-ai21/)\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/__package_name_short_snake__) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatAI21](https://python.langchain.com/api_reference/ai21/chat_models/langchain_ai21.chat_models.ChatAI21.html#langchain_ai21.chat_models.ChatAI21) | [langchain-ai21](https://python.langchain.com/api_reference/ai21/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-ai21?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-ai21?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"\n",
"## Setup"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"### Credentials\n",
"\n",
"We'll need to get an [AI21 API key](https://docs.ai21.com/) and set the `AI21_API_KEY` environment variable:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"if \"AI21_API_KEY\" not in os.environ:\n",
" os.environ[\"AI21_API_KEY\"] = getpass()"
]
},
{
"cell_type": "markdown",
"id": "f6844fff-3702-4489-ab74-732f69f3b9d7",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": null,
"id": "7c2e19d3-7c58-4470-9e1a-718b27a32056",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "98e22f31-8acc-42d6-916d-415d1263c56e",
"metadata": {},
"source": [
"### Installation"
]
},
{
"cell_type": "markdown",
"id": "f9699cd9-58f2-450e-aa64-799e66906c0f",
"metadata": {},
"source": [
"!pip install -qU langchain-ai21"
]
},
{
"cell_type": "markdown",
"id": "4828829d3da430ce",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c40756fb-cbf8-4d44-a293-3989d707237e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_ai21 import ChatAI21\n",
"\n",
"llm = ChatAI21(model=\"jamba-instruct\", temperature=0)"
]
},
{
"cell_type": "markdown",
"id": "2bdc5d68-2a19-495e-8c04-d11adc86d3ae",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "46b982dc-5d8a-46da-a711-81c03ccd6adc",
"metadata": {},
"outputs": [],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "markdown",
"id": "10a30f84-b531-4fd5-8b5b-91512fbdc75b",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "39353473fce5dd2e",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "39c0ccd229927eab",
"metadata": {},
"source": "# Tool Calls / Function Calling"
},
{
"cell_type": "markdown",
"id": "2bf6b40be07fe2d4",
"metadata": {},
"source": "This example shows how to use tool calling with AI21 models:"
},
{
"cell_type": "code",
"execution_count": null,
"id": "a181a28df77120fb",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"from langchain_ai21.chat_models import ChatAI21\n",
"from langchain_core.messages import HumanMessage, SystemMessage, ToolMessage\n",
"from langchain_core.tools import tool\n",
"from langchain_core.utils.function_calling import convert_to_openai_tool\n",
"\n",
"if \"AI21_API_KEY\" not in os.environ:\n",
" os.environ[\"AI21_API_KEY\"] = getpass()\n",
"\n",
"\n",
"@tool\n",
"def get_weather(location: str, date: str) -> str:\n",
" \"\"\"“Provide the weather for the specified location on the given date.”\"\"\"\n",
" if location == \"New York\" and date == \"2024-12-05\":\n",
" return \"25 celsius\"\n",
" elif location == \"New York\" and date == \"2024-12-06\":\n",
" return \"27 celsius\"\n",
" elif location == \"London\" and date == \"2024-12-05\":\n",
" return \"22 celsius\"\n",
" return \"32 celsius\"\n",
"\n",
"\n",
"llm = ChatAI21(model=\"jamba-1.5-mini\")\n",
"\n",
"llm_with_tools = llm.bind_tools([convert_to_openai_tool(get_weather)])\n",
"\n",
"chat_messages = [\n",
" SystemMessage(\n",
" content=\"You are a helpful assistant. You can use the provided tools \"\n",
" \"to assist with various tasks and provide accurate information\"\n",
" )\n",
"]\n",
"\n",
"human_messages = [\n",
" HumanMessage(\n",
" content=\"What is the forecast for the weather in New York on December 5, 2024?\"\n",
" ),\n",
" HumanMessage(content=\"And what about the 2024-12-06?\"),\n",
" HumanMessage(content=\"OK, thank you.\"),\n",
" HumanMessage(content=\"What is the expected weather in London on December 5, 2024?\"),\n",
"]\n",
"\n",
"\n",
"for human_message in human_messages:\n",
" print(f\"User: {human_message.content}\")\n",
" chat_messages.append(human_message)\n",
" response = llm_with_tools.invoke(chat_messages)\n",
" chat_messages.append(response)\n",
" if response.tool_calls:\n",
" tool_call = response.tool_calls[0]\n",
" if tool_call[\"name\"] == \"get_weather\":\n",
" weather = get_weather.invoke(\n",
" {\n",
" \"location\": tool_call[\"args\"][\"location\"],\n",
" \"date\": tool_call[\"args\"][\"date\"],\n",
" }\n",
" )\n",
" chat_messages.append(\n",
" ToolMessage(content=weather, tool_call_id=tool_call[\"id\"])\n",
" )\n",
" llm_answer = llm_with_tools.invoke(chat_messages)\n",
" print(f\"Assistant: {llm_answer.content}\")\n",
" else:\n",
" print(f\"Assistant: {response.content}\")"
]
},
{
"cell_type": "markdown",
"id": "e79de691-9dd6-4697-b57e-59a4a3cc073a",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatAI21 features and configurations head to the API reference: https://python.langchain.com/api_reference/ai21/chat_models/langchain_ai21.chat_models.ChatAI21.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because it is too large

View File

@@ -1,349 +1,347 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Azure OpenAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# AzureChatOpenAI\n",
"\n",
"This guide will help you get started with AzureOpenAI [chat models](/docs/concepts/chat_models). For detailed documentation of all AzureChatOpenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html).\n",
"\n",
"Azure OpenAI has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Azure docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).\n",
"\n",
":::info Azure OpenAI vs OpenAI\n",
"\n",
"Azure OpenAI refers to OpenAI models hosted on the [Microsoft Azure platform](https://azure.microsoft.com/en-us/products/ai-services/openai-service). OpenAI also provides its own model APIs. To access OpenAI services directly, use the [ChatOpenAI integration](/docs/integrations/chat/openai/).\n",
"\n",
":::\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/azure) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [AzureChatOpenAI](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html) | [langchain-openai](https://python.langchain.com/api_reference/openai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-openai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-openai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"To access AzureOpenAI models you'll need to create an Azure account, create a deployment of an Azure OpenAI model, get the name and endpoint for your deployment, get an Azure OpenAI API key, and install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to the [Azure docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/chatgpt-quickstart?tabs=command-line%2Cpython-new&pivots=programming-language-python) to create your deployment and generate an API key. Once you've done this set the AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT environment variables:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"AZURE_OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"AZURE_OPENAI_API_KEY\"] = getpass.getpass(\n",
" \"Enter your AzureOpenAI API key: \"\n",
" )\n",
"os.environ[\"AZURE_OPENAI_ENDPOINT\"] = \"https://YOUR-ENDPOINT.openai.azure.com/\""
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain AzureOpenAI integration lives in the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions.\n",
"- Replace `azure_deployment` with the name of your deployment,\n",
"- You can find the latest supported `api_version` here: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import AzureChatOpenAI\n",
"\n",
"llm = AzureChatOpenAI(\n",
" azure_deployment=\"gpt-35-turbo\", # or your deployment\n",
" api_version=\"2023-06-01-preview\", # or your api version\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
"cells": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 31, 'total_tokens': 39}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-bea4b46c-e3e1-4495-9d3a-698370ad963d-0', usage_metadata={'input_tokens': 31, 'output_tokens': 8, 'total_tokens': 39})"
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Azure OpenAI\n",
"---"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 26, 'total_tokens': 32}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-cbc44038-09d3-40d4-9da2-c5910ee636ca-0', usage_metadata={'input_tokens': 26, 'output_tokens': 6, 'total_tokens': 32})"
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# AzureChatOpenAI\n",
"\n",
"This guide will help you get started with AzureOpenAI [chat models](/docs/concepts/chat_models). For detailed documentation of all AzureChatOpenAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html).\n",
"\n",
"Azure OpenAI has several chat models. You can find information about their latest models and their costs, context windows, and supported input types in the [Azure docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models).\n",
"\n",
":::info Azure OpenAI vs OpenAI\n",
"\n",
"Azure OpenAI refers to OpenAI models hosted on the [Microsoft Azure platform](https://azure.microsoft.com/en-us/products/ai-services/openai-service). OpenAI also provides its own model APIs. To access OpenAI services directly, use the [ChatOpenAI integration](/docs/integrations/chat/openai/).\n",
"\n",
":::\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/azure) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [AzureChatOpenAI](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html) | [langchain-openai](https://python.langchain.com/api_reference/openai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-openai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-openai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |\n",
"\n",
"## Setup\n",
"\n",
"To access AzureOpenAI models you'll need to create an Azure account, create a deployment of an Azure OpenAI model, get the name and endpoint for your deployment, get an Azure OpenAI API key, and install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to the [Azure docs](https://learn.microsoft.com/en-us/azure/ai-services/openai/chatgpt-quickstart?tabs=command-line%2Cpython-new&pivots=programming-language-python) to create your deployment and generate an API key. Once you've done this set the AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT environment variables:"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Specifying model version\n",
"\n",
"Azure OpenAI responses contain `model_name` response metadata property, which is name of the model used to generate the response. However unlike native OpenAI responses, it does not contain the specific version of the model, which is set on the deployment in Azure. E.g. it does not distinguish between `gpt-35-turbo-0125` and `gpt-35-turbo-0301`. This makes it tricky to know which version of the model was used to generate the response, which as result can lead to e.g. wrong total cost calculation with `OpenAICallbackHandler`.\n",
"\n",
"To solve this problem, you can pass `model_version` parameter to `AzureChatOpenAI` class, which will be added to the model name in the llm output. This way you can easily distinguish between different versions of the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "04b36e75-e8b7-4721-899e-76301ac2ecd9",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2ca02d23-60d0-43eb-8d04-070f61f8fefd",
"metadata": {},
"outputs": [
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Cost (USD): $0.000063\n"
]
}
],
"source": [
"from langchain_community.callbacks import get_openai_callback\n",
"\n",
"with get_openai_callback() as cb:\n",
" llm.invoke(messages)\n",
" print(\n",
" f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\"\n",
" ) # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e1b07ae2-3de7-44bd-bfdc-b76f4ba45a35",
"metadata": {},
"outputs": [
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"AZURE_OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"AZURE_OPENAI_API_KEY\"] = getpass.getpass(\n",
" \"Enter your AzureOpenAI API key: \"\n",
" )\n",
"os.environ[\"AZURE_OPENAI_ENDPOINT\"] = \"https://YOUR-ENDPOINT.openai.azure.com/\""
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Cost (USD): $0.000074\n"
]
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain AzureOpenAI integration lives in the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions.\n",
"- Replace `azure_deployment` with the name of your deployment,\n",
"- You can find the latest supported `api_version` here: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import AzureChatOpenAI\n",
"\n",
"llm = AzureChatOpenAI(\n",
" azure_deployment=\"gpt-35-turbo\", # or your deployment\n",
" api_version=\"2023-06-01-preview\", # or your api version\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 31, 'total_tokens': 39}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-bea4b46c-e3e1-4495-9d3a-698370ad963d-0', usage_metadata={'input_tokens': 31, 'output_tokens': 8, 'total_tokens': 39})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 26, 'total_tokens': 32}, 'model_name': 'gpt-35-turbo', 'system_fingerprint': None, 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'stop', 'logprobs': None, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}, id='run-cbc44038-09d3-40d4-9da2-c5910ee636ca-0', usage_metadata={'input_tokens': 26, 'output_tokens': 6, 'total_tokens': 32})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Specifying model version\n",
"\n",
"Azure OpenAI responses contain `model_name` response metadata property, which is name of the model used to generate the response. However unlike native OpenAI responses, it does not contain the specific version of the model, which is set on the deployment in Azure. E.g. it does not distinguish between `gpt-35-turbo-0125` and `gpt-35-turbo-0301`. This makes it tricky to know which version of the model was used to generate the response, which as result can lead to e.g. wrong total cost calculation with `OpenAICallbackHandler`.\n",
"\n",
"To solve this problem, you can pass `model_version` parameter to `AzureChatOpenAI` class, which will be added to the model name in the llm output. This way you can easily distinguish between different versions of the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "04b36e75-e8b7-4721-899e-76301ac2ecd9",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2ca02d23-60d0-43eb-8d04-070f61f8fefd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Cost (USD): $0.000063\n"
]
}
],
"source": [
"from langchain_community.callbacks import get_openai_callback\n",
"\n",
"with get_openai_callback() as cb:\n",
" llm.invoke(messages)\n",
" print(\n",
" f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\"\n",
" ) # without specifying the model version, flat-rate 0.002 USD per 1k input and output tokens is used"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e1b07ae2-3de7-44bd-bfdc-b76f4ba45a35",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Total Cost (USD): $0.000074\n"
]
}
],
"source": [
"llm_0301 = AzureChatOpenAI(\n",
" azure_deployment=\"gpt-35-turbo\", # or your deployment\n",
" api_version=\"2023-06-01-preview\", # or your api version\n",
" model_version=\"0301\",\n",
")\n",
"with get_openai_callback() as cb:\n",
" llm_0301.invoke(messages)\n",
" print(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all AzureChatOpenAI features and configurations head to the API reference: https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
],
"source": [
"llm_0301 = AzureChatOpenAI(\n",
" azure_deployment=\"gpt-35-turbo\", # or your deployment\n",
" api_version=\"2023-06-01-preview\", # or your api version\n",
" model_version=\"0301\",\n",
")\n",
"with get_openai_callback() as cb:\n",
" llm_0301.invoke(messages)\n",
" print(f\"Total Cost (USD): ${format(cb.total_cost, '.6f')}\")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all AzureChatOpenAI features and configurations head to the API reference: https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.azure.AzureChatOpenAI.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -19,9 +19,15 @@
"\n",
"This doc will help you get started with AWS Bedrock [chat models](/docs/concepts/chat_models). Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon via a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. Using Amazon Bedrock, you can easily experiment with and evaluate top FMs for your use case, privately customize them with your data using techniques such as fine-tuning and Retrieval Augmented Generation (RAG), and build agents that execute tasks using your enterprise systems and data sources. Since Amazon Bedrock is serverless, you don't have to manage any infrastructure, and you can securely integrate and deploy generative AI capabilities into your applications using the AWS services you are already familiar with.\n",
"\n",
"For more information on which models are accessible via Bedrock, head to the [AWS docs](https://docs.aws.amazon.com/bedrock/latest/userguide/models-features.html).\n",
"AWS Bedrock maintains a [Converse API](https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_Converse.html) which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html).\n",
"\n",
"For detailed documentation of all ChatBedrock features and configurations head to the [API reference](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html).\n",
":::info\n",
"\n",
"We recommend the Converse API for users who do not need to use custom models. It can be accessed using [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html).\n",
"\n",
":::\n",
"\n",
"For detailed documentation of all Bedrock features and configurations head to the [API reference](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
@@ -29,11 +35,15 @@
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/bedrock) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatBedrock](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock.ChatBedrock.html) | [langchain-aws](https://python.langchain.com/api_reference/aws/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-aws?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-aws?style=flat-square&label=%20) |\n",
"| [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) | [langchain-aws](https://python.langchain.com/api_reference/aws/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-aws?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-aws?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"\n",
"The below apply to both `ChatBedrock` and `ChatBedrockConverse`.\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | \n",
"| ✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
@@ -49,7 +59,7 @@
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
@@ -100,11 +110,12 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_aws import ChatBedrock\n",
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrock(\n",
" model_id=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n",
" model_kwargs=dict(temperature=0),\n",
"llm = ChatBedrockConverse(\n",
" model_id=\"anthropic.claude-3-5-sonnet-20240620-v1:0\",\n",
" # temperature=...,\n",
" # max_tokens=...,\n",
" # other params...\n",
")"
]
@@ -119,19 +130,17 @@
},
{
"cell_type": "code",
"execution_count": 5,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"execution_count": 2,
"id": "fcd8de52-4a1b-4875-b463-d41b031e06a1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Voici la traduction en français :\\n\\nJ'aime la programmation.\", additional_kwargs={'usage': {'prompt_tokens': 29, 'completion_tokens': 21, 'total_tokens': 50}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, response_metadata={'usage': {'prompt_tokens': 29, 'completion_tokens': 21, 'total_tokens': 50}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, id='run-fdb07dc3-ff72-430d-b22b-e7824b15c766-0', usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50})"
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'ResponseMetadata': {'RequestId': 'b07d1630-06f2-44b1-82bf-e82538dd2215', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Wed, 16 Apr 2025 19:35:34 GMT', 'content-type': 'application/json', 'content-length': '206', 'connection': 'keep-alive', 'x-amzn-requestid': 'b07d1630-06f2-44b1-82bf-e82538dd2215'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': [488]}, 'model_name': 'anthropic.claude-3-5-sonnet-20240620-v1:0'}, id='run-d09ed928-146a-4336-b1fd-b63c9e623494-0', usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})"
]
},
"execution_count": 5,
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
@@ -150,7 +159,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
@@ -158,9 +167,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Voici la traduction en français :\n",
"\n",
"J'aime la programmation.\n"
"J'adore la programmation.\n"
]
}
],
@@ -170,7 +177,146 @@
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"id": "4da16f3e-e80b-48c0-8036-c1cc5f7c8c05",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Note that `ChatBedrockConverse` emits content blocks while streaming:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "605e04fa-1a76-47ac-8c92-fe128659663e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=[] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'type': 'text', 'text': 'J', 'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'type': 'text', 'text': \"'adore la\", 'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'type': 'text', 'text': ' programmation.', 'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[{'index': 0}] additional_kwargs={} response_metadata={} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[] additional_kwargs={} response_metadata={'stopReason': 'end_turn'} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd'\n",
"content=[] additional_kwargs={} response_metadata={'metrics': {'latencyMs': 600}, 'model_name': 'anthropic.claude-3-5-sonnet-20240620-v1:0'} id='run-d0e0836e-7146-4c3d-97c7-ad23dac6febd' usage_metadata={'input_tokens': 29, 'output_tokens': 11, 'total_tokens': 40, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}}\n"
]
}
],
"source": [
"for chunk in llm.stream(messages):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "0ef05abb-9c04-4dc3-995e-f857779644d5",
"metadata": {},
"source": [
"You can filter to text using the [.text()](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.AIMessage.html#langchain_core.messages.ai.AIMessage.text) method on the output:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "2a4e743f-ea7d-4e5a-9b12-f9992362de8b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"|J|'adore la| programmation.||||"
]
}
],
"source": [
"for chunk in llm.stream(messages):\n",
" print(chunk.text(), end=\"|\")"
]
},
{
"cell_type": "markdown",
"id": "a77519e5-897d-41a0-a9bb-55300fa79efc",
"metadata": {},
"source": [
"## Prompt caching\n",
"\n",
"Bedrock supports [caching](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html) of elements of your prompts, including messages and tools. This allows you to re-use large documents, instructions, [few-shot documents](/docs/concepts/few_shot_prompting/), and other data to reduce latency and costs.\n",
"\n",
":::note\n",
"\n",
"Not all models support prompt caching. See supported models [here](https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html#prompt-caching-models).\n",
"\n",
":::\n",
"\n",
"To enable caching on an element of a prompt, mark its associated content block using the `cachePoint` key. See example below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d5f63d01-85e8-4797-a2be-0fea747a6049",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"First invocation:\n",
"{'cache_creation': 1528, 'cache_read': 0}\n",
"\n",
"Second:\n",
"{'cache_creation': 0, 'cache_read': 1528}\n"
]
}
],
"source": [
"import requests\n",
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrockConverse(model=\"us.anthropic.claude-3-7-sonnet-20250219-v1:0\")\n",
"\n",
"# Pull LangChain readme\n",
"get_response = requests.get(\n",
" \"https://raw.githubusercontent.com/langchain-ai/langchain/master/README.md\"\n",
")\n",
"readme = get_response.text\n",
"\n",
"messages = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": [\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": \"What's LangChain, according to its README?\",\n",
" },\n",
" {\n",
" \"type\": \"text\",\n",
" \"text\": f\"{readme}\",\n",
" },\n",
" {\n",
" \"cachePoint\": {\"type\": \"default\"},\n",
" },\n",
" ],\n",
" },\n",
"]\n",
"\n",
"response_1 = llm.invoke(messages)\n",
"response_2 = llm.invoke(messages)\n",
"\n",
"usage_1 = response_1.usage_metadata[\"input_token_details\"]\n",
"usage_2 = response_2.usage_metadata[\"input_token_details\"]\n",
"\n",
"print(f\"First invocation:\\n{usage_1}\")\n",
"print(f\"\\nSecond:\\n{usage_2}\")"
]
},
{
"cell_type": "markdown",
"id": "1b550667-af5b-4557-b84f-c8f865dad6cb",
"metadata": {},
"source": [
"## Chaining\n",
@@ -181,13 +327,13 @@
{
"cell_type": "code",
"execution_count": 7,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"id": "6033f3fa-0e96-46e3-abb3-1530928fea88",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={'usage': {'prompt_tokens': 23, 'completion_tokens': 11, 'total_tokens': 34}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, response_metadata={'usage': {'prompt_tokens': 23, 'completion_tokens': 11, 'total_tokens': 34}, 'stop_reason': 'end_turn', 'model_id': 'anthropic.claude-3-sonnet-20240229-v1:0'}, id='run-5ad005ce-9f31-4670-baa0-9373d418698a-0', usage_metadata={'input_tokens': 23, 'output_tokens': 11, 'total_tokens': 34})"
"AIMessage(content=\"Here's the German translation:\\n\\nIch liebe das Programmieren.\", additional_kwargs={}, response_metadata={'ResponseMetadata': {'RequestId': '1de3d7c0-8062-4f7e-bb8a-8f725b97a8b0', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Wed, 16 Apr 2025 19:32:51 GMT', 'content-type': 'application/json', 'content-length': '243', 'connection': 'keep-alive', 'x-amzn-requestid': '1de3d7c0-8062-4f7e-bb8a-8f725b97a8b0'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': [719]}, 'model_name': 'anthropic.claude-3-5-sonnet-20240620-v1:0'}, id='run-7021fcd7-704e-496b-a92e-210139614402-0', usage_metadata={'input_tokens': 23, 'output_tokens': 19, 'total_tokens': 42, 'input_token_details': {'cache_creation': 0, 'cache_read': 0}})"
]
},
"execution_count": 7,
@@ -218,131 +364,6 @@
")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Bedrock Converse API\n",
"\n",
"AWS has recently released the Bedrock Converse API which provides a unified conversational interface for Bedrock models. This API does not yet support custom models. You can see a list of all [models that are supported here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html). To improve reliability the ChatBedrock integration will switch to using the Bedrock Converse API as soon as it has feature parity with the existing Bedrock API. Until then a separate [ChatBedrockConverse](https://python.langchain.com/api_reference/aws/chat_models/langchain_aws.chat_models.bedrock_converse.ChatBedrockConverse.html) integration has been released.\n",
"\n",
"We recommend using `ChatBedrockConverse` for users who do not need to use custom models.\n",
"\n",
"You can use it like so:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ae728e59-94d4-40cf-9d24-25ad8723fc59",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Voici la traduction en français :\\n\\nJ'aime la programmation.\", response_metadata={'ResponseMetadata': {'RequestId': '4fcbfbe9-f916-4df2-b0bd-ea1147b550aa', 'HTTPStatusCode': 200, 'HTTPHeaders': {'date': 'Wed, 21 Aug 2024 17:23:49 GMT', 'content-type': 'application/json', 'content-length': '243', 'connection': 'keep-alive', 'x-amzn-requestid': '4fcbfbe9-f916-4df2-b0bd-ea1147b550aa'}, 'RetryAttempts': 0}, 'stopReason': 'end_turn', 'metrics': {'latencyMs': 672}}, id='run-77ee9810-e32b-45dc-9ccb-6692253b1f45-0', usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50})"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_aws import ChatBedrockConverse\n",
"\n",
"llm = ChatBedrockConverse(\n",
" model=\"anthropic.claude-3-sonnet-20240229-v1:0\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" # other params...\n",
")\n",
"\n",
"llm.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "4da16f3e-e80b-48c0-8036-c1cc5f7c8c05",
"metadata": {},
"source": [
"### Streaming\n",
"\n",
"Note that `ChatBedrockConverse` emits content blocks while streaming:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "7794b32e-d8de-4973-bf0f-39807dc745f0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=[] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'Vo', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'ici', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' la', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' tra', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'duction', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' en', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' français', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' :', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': '\\n\\nJ', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': \"'\", 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'a', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'ime', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' la', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': ' programm', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': 'ation', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'type': 'text', 'text': '.', 'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[{'index': 0}] id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[] response_metadata={'stopReason': 'end_turn'} id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8'\n",
"content=[] response_metadata={'metrics': {'latencyMs': 713}} id='run-2c92c5af-d771-4cc2-98d9-c11bbd30a1d8' usage_metadata={'input_tokens': 29, 'output_tokens': 21, 'total_tokens': 50}\n"
]
}
],
"source": [
"for chunk in llm.stream(messages):\n",
" print(chunk)"
]
},
{
"cell_type": "markdown",
"id": "0ef05abb-9c04-4dc3-995e-f857779644d5",
"metadata": {},
"source": [
"An output parser can be used to filter to text, if desired:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2a4e743f-ea7d-4e5a-9b12-f9992362de8b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"|Vo|ici| la| tra|duction| en| français| :|\n",
"\n",
"J|'|a|ime| la| programm|ation|.||||"
]
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"chain = llm | StrOutputParser()\n",
"\n",
"for chunk in chain.stream(messages):\n",
" print(chunk, end=\"|\")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",

View File

@@ -1,424 +1,422 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cerebras\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatCerebras\n",
"\n",
"This notebook provides a quick overview for getting started with Cerebras [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatCerebras features and configurations head to the [API reference](https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#).\n",
"\n",
"At Cerebras, we've developed the world's largest and fastest AI processor, the Wafer-Scale Engine-3 (WSE-3). The Cerebras CS-3 system, powered by the WSE-3, represents a new class of AI supercomputer that sets the standard for generative AI training and inference with unparalleled performance and scalability.\n",
"\n",
"With Cerebras as your inference provider, you can:\n",
"- Achieve unprecedented speed for AI inference workloads\n",
"- Build commercially with high throughput\n",
"- Effortlessly scale your AI workloads with our seamless clustering technology\n",
"\n",
"Our CS-3 systems can be quickly and easily clustered to create the largest AI supercomputers in the world, making it simple to place and run the largest models. Leading corporations, research institutions, and governments are already using Cerebras solutions to develop proprietary models and train popular open-source models.\n",
"\n",
"Want to experience the power of Cerebras? Check out our [website](https://cerebras.ai) for more resources and explore options for accessing our technology through the Cerebras Cloud or on-premise deployments!\n",
"\n",
"For more information about Cerebras Cloud, visit [cloud.cerebras.ai](https://cloud.cerebras.ai/). Our API reference is available at [inference-docs.cerebras.ai](https://inference-docs.cerebras.ai/).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cerebras) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatCerebras](https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#) | [langchain-cerebras](https://python.langchain.com/api_reference/cerebras/index.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cerebras?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cerebras?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"```bash\n",
"pip install langchain-cerebras\n",
"```\n",
"\n",
"### Credentials\n",
"\n",
"Get an API Key from [cloud.cerebras.ai](https://cloud.cerebras.ai/) and add it to your environment variables:\n",
"```\n",
"export CEREBRAS_API_KEY=\"your-api-key-here\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "ce19c2d6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter your Cerebras API key: ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"CEREBRAS_API_KEY\" not in os.environ:\n",
"    os.environ[\"CEREBRAS_API_KEY\"] = getpass.getpass(\"Enter your Cerebras API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": 9,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Cerebras integration lives in the `langchain-cerebras` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-cerebras"
]
},
{
"cell_type": "markdown",
"id": "ea69675d",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "21155898",
"metadata": {},
"outputs": [],
"source": [
"from langchain_cerebras import ChatCerebras\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama-3.3-70b\",\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Je adore le programmation.', response_metadata={'token_usage': {'completion_tokens': 7, 'prompt_tokens': 35, 'total_tokens': 42}, 'model_name': 'llama3-8b-8192', 'system_fingerprint': 'fp_be27ec77ff', 'finish_reason': 'stop'}, id='run-e5d66faf-019c-4ac6-9265-71093b13202d-0', usage_metadata={'input_tokens': 35, 'output_tokens': 7, 'total_tokens': 42})"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
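{
"cell_type": "markdown",
"id": "invocation-message-fields-note",
"metadata": {},
"source": [
"The returned `AIMessage` exposes the generated text and token accounting directly; a small sketch using the `ai_msg` from above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "invocation-message-fields-sketch",
"metadata": {},
"outputs": [],
"source": [
"print(ai_msg.content)\n",
"ai_msg.usage_metadata"
]
},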
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren!\\n\\n(Literally: I love programming!)', response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 30, 'total_tokens': 44}, 'model_name': 'llama3-8b-8192', 'system_fingerprint': 'fp_be27ec77ff', 'finish_reason': 'stop'}, id='run-e1d2ebb8-76d1-471b-9368-3b68d431f16a-0', usage_metadata={'input_tokens': 30, 'output_tokens': 14, 'total_tokens': 44})"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_cerebras import ChatCerebras\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama-3.3-70b\",\n",
" # other params...\n",
")\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0ec73a0e",
"metadata": {},
"source": [
"## Streaming"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "46fd21a7",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OH BOY! Let me tell you all about LIONS!\n",
"\n",
"Lions are the kings of the jungle! They're really big and have beautiful, fluffy manes around their necks. The mane is like a big, golden crown!\n",
"\n",
"Lions live in groups called prides. A pride is like a big family, and the lionesses (that's what we call the female lions) take care of the babies. The lionesses are like the mommies, and they teach the babies how to hunt and play.\n",
"\n",
"Lions are very good at hunting. They work together to catch their food, like zebras and antelopes. They're super fast and can run really, really fast!\n",
"\n",
"But lions are also very sleepy. They like to take long naps in the sun, and they can sleep for up to 20 hours a day! Can you imagine sleeping that much?\n",
"\n",
"Lions are also very loud. They roar really loudly to talk to each other. It's like they're saying, \"ROAR! I'm the king of the jungle!\"\n",
"\n",
"And guess what? Lions are very social. They like to play and cuddle with each other. They're like big, furry teddy bears!\n",
"\n",
"So, that's lions! Aren't they just the coolest?"
]
}
],
"source": [
"from langchain_cerebras import ChatCerebras\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama-3.3-70b\",\n",
" # other params...\n",
")\n",
"\n",
"system = \"You are an expert on animals who must answer questions in a manner that a 5 year old can understand.\"\n",
"human = \"I want to learn more about this animal: {animal}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | llm\n",
"\n",
"for chunk in chain.stream({\"animal\": \"Lion\"}):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "f67b6132",
"metadata": {},
"source": [
"## Async"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "a3a45baf",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ice', response_metadata={'token_usage': {'completion_tokens': 2, 'prompt_tokens': 36, 'total_tokens': 38}, 'model_name': 'llama3-8b-8192', 'system_fingerprint': 'fp_be27ec77ff', 'finish_reason': 'stop'}, id='run-7434bdde-1bec-44cf-827b-8d978071dfe8-0', usage_metadata={'input_tokens': 36, 'output_tokens': 2, 'total_tokens': 38})"
]
},
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_cerebras import ChatCerebras\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama-3.3-70b\",\n",
" # other params...\n",
")\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"human\",\n",
" \"Let's play a game of opposites. What's the opposite of {topic}? Just give me the answer with no extra input.\",\n",
" )\n",
" ]\n",
")\n",
"chain = prompt | llm\n",
"await chain.ainvoke({\"topic\": \"fire\"})"
]
},
{
"cell_type": "markdown",
"id": "4f9d9945",
"metadata": {},
"source": [
"## Async Streaming"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "c7448e0f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In the distant reaches of the cosmos, there existed a peculiar phenomenon known as the \"Eclipse of Eternity,\" a swirling vortex of darkness that had been shrouded in mystery for eons. It was said that this blackhole, born from the cataclysmic collision of two ancient stars, had been slowly devouring the fabric of space-time itself, warping the very essence of reality. As the celestial bodies of the galaxy danced around it, they began to notice a strange, almost imperceptible distortion in the fabric of space, as if the blackhole's gravitational pull was exerting an influence on the very course of events itself.\n",
"\n",
"As the centuries passed, astronomers from across the galaxy became increasingly fascinated by the Eclipse of Eternity, pouring over ancient texts and scouring the cosmos for any hint of its secrets. One such scholar, a brilliant and reclusive astrophysicist named Dr. Elara Vex, became obsessed with unraveling the mysteries of the blackhole. She spent years pouring over ancient texts, deciphering cryptic messages and hidden codes that hinted at the existence of a long-lost civilization that had once thrived in the heart of the blackhole itself. According to legend, this ancient civilization had possessed knowledge of the cosmos that was beyond human comprehension, and had used their mastery of the universe to create the Eclipse of Eternity as a gateway to other dimensions.\n",
"\n",
"As Dr. Vex delved deeper into her research, she began to experience strange and vivid dreams, visions that seemed to transport her to the very heart of the blackhole itself. In these dreams, she saw ancient beings, their faces twisted in agony as they were consumed by the void. She saw stars and galaxies, their light warped and distorted by the blackhole's gravitational pull. And she saw the Eclipse of Eternity itself, its swirling vortex of darkness pulsing with an otherworldly energy that seemed to be calling to her. As the dreams grew more vivid and more frequent, Dr. Vex became convinced that she was being drawn into the heart of the blackhole, and that the secrets of the universe lay waiting for her on the other side."
]
}
],
"source": [
"from langchain_cerebras import ChatCerebras\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"llm = ChatCerebras(\n",
" model=\"llama-3.3-70b\",\n",
" # other params...\n",
")\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"human\",\n",
" \"Write a long convoluted story about {subject}. I want {num_paragraphs} paragraphs.\",\n",
" )\n",
" ]\n",
")\n",
"chain = prompt | llm\n",
"\n",
"async for chunk in chain.astream({\"num_paragraphs\": 3, \"subject\": \"blackholes\"}):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatCerebras features and configurations head to the API reference: https://python.langchain.com/api_reference/cerebras/chat_models/langchain_cerebras.chat_models.ChatCerebras.html#"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -2,50 +2,74 @@
"cells": [
{
"cell_type": "raw",
"id": "30373ae2-f326-4e96-a1f7-062f57396886",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cloudflare Workers AI\n",
"sidebar_label: CloudflareWorkersAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f679592d",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatCloudflareWorkersAI\n",
"\n",
"This will help you getting started with CloudflareWorkersAI [chat models](/docs/concepts/chat_models). For detailed documentation of all available Cloudflare WorkersAI models head to the [API reference](https://developers.cloudflare.com/workers-ai/).\n",
"\n",
"This will help you getting started with CloudflareWorkersAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatCloudflareWorkersAI features and configurations head to the [API reference](https://python.langchain.com/docs/integrations/chat/cloudflare_workersai/).\n",
"\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cloudflare_workersai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| ChatCloudflareWorkersAI | langchain-community| ❌ | ❌ | ✅ | ❌ | ❌ |\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/cloudflare) | Package downloads | Package latest |\n",
"| :--- | :--- |:-----:|:------------:|:------------------------------------------------------------------------:| :---: | :---: |\n",
"| [ChatCloudflareWorkersAI](https://python.langchain.com/docs/integrations/chat/cloudflare_workersai/) | [langchain-cloudflare](https://pypi.org/project/langchain-cloudflare/) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-cloudflare?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-cloudflare?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | \n",
"|:-----------------------------------------:|:----------------------------------------------------:|:---------:|:----------------------------------------------:|:-----------:|:-----------:|:-----------------------------------------------------:|:------------:|:------------------------------------------------------:|:----------------------------------:|\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"- To access Cloudflare Workers AI models you'll need to create a Cloudflare account, get an account number and API key, and install the `langchain-community` package.\n",
"\n",
"To access CloudflareWorkersAI models you'll need to create a/an CloudflareWorkersAI account, get an API key, and install the `langchain-cloudflare` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"Head to [this document](https://developers.cloudflare.com/workers-ai/get-started/rest-api/) to sign up to Cloudflare Workers AI and generate an API key."
"Head to https://www.cloudflare.com/developer-platform/products/workers-ai/ to sign up to CloudflareWorkersAI and generate an API key. Once you've done this set the CF_API_KEY environment variable and the CF_ACCOUNT_ID environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {
"is_executing": true
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"CF_API_KEY\"):\n",
" os.environ[\"CF_API_KEY\"] = getpass.getpass(\n",
" \"Enter your CloudflareWorkersAI API key: \"\n",
" )\n",
"\n",
"if not os.getenv(\"CF_ACCOUNT_ID\"):\n",
" os.environ[\"CF_ACCOUNT_ID\"] = getpass.getpass(\n",
" \"Enter your CloudflareWorkersAI account ID: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "4a524cff",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
@@ -53,8 +77,8 @@
},
{
"cell_type": "code",
"execution_count": 3,
"id": "71b53c25",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
@@ -64,80 +88,81 @@
},
{
"cell_type": "markdown",
"id": "777a8526",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain ChatCloudflareWorkersAI integration lives in the `langchain-community` package:"
"The LangChain CloudflareWorkersAI integration lives in the `langchain-cloudflare` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "54990998",
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-community"
"%pip install -qU langchain-cloudflare"
]
},
{
"cell_type": "markdown",
"id": "629ba46f",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
"Now we can instantiate our model object and generate chat completions:\n",
"\n",
"- Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ec13c2d9",
"metadata": {},
"execution_count": 35,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-07T17:48:31.193773Z",
"start_time": "2025-04-07T17:48:31.179196Z"
}
},
"outputs": [],
"source": [
"from langchain_community.chat_models.cloudflare_workersai import ChatCloudflareWorkersAI\n",
"from langchain_cloudflare.chat_models import ChatCloudflareWorkersAI\n",
"\n",
"llm = ChatCloudflareWorkersAI(\n",
" account_id=\"my_account_id\",\n",
" api_token=\"my_api_token\",\n",
" model=\"@hf/nousresearch/hermes-2-pro-mistral-7b\",\n",
" model=\"@cf/meta/llama-3.3-70b-instruct-fp8-fast\",\n",
" temperature=0,\n",
" max_tokens=1024,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "119b6732",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
"## Invocation\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2438a906",
"execution_count": 19,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-07 15:55:14 - INFO - Sending prompt to Cloudflare Workers AI: {'prompt': 'role: system, content: You are a helpful assistant that translates English to French. Translate the user sentence.\\nrole: user, content: I love programming.', 'tools': None}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content='{\\'result\\': {\\'response\\': \\'Je suis un assistant virtuel qui peut traduire l\\\\\\'anglais vers le français. La phrase que vous avez dite est : \"J\\\\\\'aime programmer.\" En français, cela se traduit par : \"J\\\\\\'adore programmer.\"\\'}, \\'success\\': True, \\'errors\\': [], \\'messages\\': []}', additional_kwargs={}, response_metadata={}, id='run-838fd398-8594-4ca5-9055-03c72993caf6-0')"
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={'token_usage': {'prompt_tokens': 37, 'completion_tokens': 9, 'total_tokens': 46}, 'model_name': '@cf/meta/llama-3.3-70b-instruct-fp8-fast'}, id='run-995d1970-b6be-49f3-99ae-af4cdba02304-0', usage_metadata={'input_tokens': 37, 'output_tokens': 9, 'total_tokens': 46})"
]
},
"execution_count": 8,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@@ -156,15 +181,15 @@
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1b4911bd",
"execution_count": 20,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'result': {'response': 'Je suis un assistant virtuel qui peut traduire l\\'anglais vers le français. La phrase que vous avez dite est : \"J\\'aime programmer.\" En français, cela se traduit par : \"J\\'adore programmer.\"'}, 'success': True, 'errors': [], 'messages': []}\n"
"J'adore la programmation.\n"
]
}
],
@@ -174,34 +199,27 @@
},
{
"cell_type": "markdown",
"id": "111aa5d4",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b2a14282",
"execution_count": 21,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-11-07 15:55:24 - INFO - Sending prompt to Cloudflare Workers AI: {'prompt': 'role: system, content: You are a helpful assistant that translates English to German.\\nrole: user, content: I love programming.', 'tools': None}\n"
]
},
{
"data": {
"text/plain": [
"AIMessage(content=\"{'result': {'response': 'role: system, content: Das ist sehr nett zu hören! Programmieren lieben, ist eine interessante und anspruchsvolle Hobby- oder Berufsausrichtung. Wenn Sie englische Texte ins Deutsche übersetzen möchten, kann ich Ihnen helfen. Geben Sie bitte den englischen Satz oder die Übersetzung an, die Sie benötigen.'}, 'success': True, 'errors': [], 'messages': []}\", additional_kwargs={}, response_metadata={}, id='run-0d3be9a6-3d74-4dde-b49a-4479d6af00ef-0')"
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={}, response_metadata={'token_usage': {'prompt_tokens': 32, 'completion_tokens': 7, 'total_tokens': 39}, 'model_name': '@cf/meta/llama-3.3-70b-instruct-fp8-fast'}, id='run-d1b677bc-194e-4473-90f1-aa65e8e46d50-0', usage_metadata={'input_tokens': 32, 'output_tokens': 7, 'total_tokens': 39})"
]
},
"execution_count": 10,
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
@@ -209,7 +227,7 @@
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
@@ -231,12 +249,123 @@
},
{
"cell_type": "markdown",
"id": "e1f311bd",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Structured Outputs"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "91cae406-14d7-46c9-b942-2d1476588423",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'setup': 'Why did the cat join a band?',\n",
" 'punchline': 'Because it wanted to be the purr-cussionist',\n",
" 'rating': '8'}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"json_schema = {\n",
" \"title\": \"joke\",\n",
" \"description\": \"Joke to tell user.\",\n",
" \"type\": \"object\",\n",
" \"properties\": {\n",
" \"setup\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The setup of the joke\",\n",
" },\n",
" \"punchline\": {\n",
" \"type\": \"string\",\n",
" \"description\": \"The punchline to the joke\",\n",
" },\n",
" \"rating\": {\n",
" \"type\": \"integer\",\n",
" \"description\": \"How funny the joke is, from 1 to 10\",\n",
" \"default\": None,\n",
" },\n",
" },\n",
" \"required\": [\"setup\", \"punchline\"],\n",
"}\n",
"structured_llm = llm.with_structured_output(json_schema)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
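{
"cell_type": "markdown",
"id": "structured-output-pydantic-note",
"metadata": {},
"source": [
"`with_structured_output` can also take a Pydantic model in place of a raw JSON schema. A sketch reusing the `llm` above, assuming this integration follows the standard LangChain structured-output behavior:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "structured-output-pydantic-sketch",
"metadata": {},
"outputs": [],
"source": [
"from typing import Optional\n",
"\n",
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class Joke(BaseModel):\n",
"    \"\"\"Joke to tell user.\"\"\"\n",
"\n",
"    setup: str = Field(description=\"The setup of the joke\")\n",
"    punchline: str = Field(description=\"The punchline to the joke\")\n",
"    rating: Optional[int] = Field(\n",
"        default=None, description=\"How funny the joke is, from 1 to 10\"\n",
"    )\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Joke)\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},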
{
"cell_type": "markdown",
"id": "dbfc0c43-e76b-446e-bbb1-d351640bb7be",
"metadata": {},
"source": [
"## Bind tools"
]
},
{
"cell_type": "code",
"execution_count": 36,
"id": "0765265e-4d00-4030-bf48-7e8d8c9af2ec",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'validate_user',\n",
" 'args': {'user_id': '123',\n",
" 'addresses': '[\"123 Fake St in Boston MA\", \"234 Pretend Boulevard in Houston TX\"]'},\n",
" 'id': '31ec7d6a-9ce5-471b-be64-8ea0492d1387',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 36,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import List\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def validate_user(user_id: int, addresses: List[str]) -> bool:\n",
" \"\"\"Validate user using historical addresses.\n",
"\n",
" Args:\n",
" user_id (int): the user ID.\n",
" addresses (List[str]): Previous addresses as a list of strings.\n",
" \"\"\"\n",
" return True\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([validate_user])\n",
"\n",
"result = llm_with_tools.invoke(\n",
" \"Could you validate user 123? They previously lived at \"\n",
" \"123 Fake St in Boston MA and 234 Pretend Boulevard in \"\n",
" \"Houston TX.\"\n",
")\n",
"result.tool_calls"
]
},
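{
"cell_type": "markdown",
"id": "tool-results-roundtrip-note",
"metadata": {},
"source": [
"To get a final answer, tool results are passed back to the model. This is a sketch of the generic LangChain tool-calling loop, assuming the `llm_with_tools`, `validate_user`, and `result` from above and that this integration accepts `ToolMessage` inputs like other chat models:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "tool-results-roundtrip-sketch",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, ToolMessage\n",
"\n",
"# Rebuild the conversation: the original question, the model's\n",
"# tool-call message (`result`), then one ToolMessage per tool call.\n",
"messages = [\n",
"    HumanMessage(\n",
"        \"Could you validate user 123? They previously lived at \"\n",
"        \"123 Fake St in Boston MA and 234 Pretend Boulevard in Houston TX.\"\n",
"    ),\n",
"    result,\n",
"]\n",
"for tool_call in result.tool_calls:\n",
"    tool_output = validate_user.invoke(tool_call[\"args\"])  # runs the @tool\n",
"    messages.append(ToolMessage(str(tool_output), tool_call_id=tool_call[\"id\"]))\n",
"\n",
"llm_with_tools.invoke(messages)"
]
},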
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation on `ChatCloudflareWorkersAI` features and configuration options, please refer to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.cloudflare_workersai.html)."
"https://developers.cloudflare.com/workers-ai/\n",
"https://developers.cloudflare.com/agents/"
]
}
],
@@ -256,7 +385,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.7"
}
},
"nbformat": 4,

View File

@@ -1,352 +1,350 @@
{
"cells": [
{
"cell_type": "raw",
"id": "53fbf15f",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cohere\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# Cohere\n",
"\n",
"This notebook covers how to get started with [Cohere chat models](https://cohere.com/chat).\n",
"\n",
"Head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.cohere.ChatCohere.html) for detailed documentation of all attributes and methods."
]
},
{
"cell_type": "markdown",
"id": "3607d67e-e56c-4102-bbba-df2edc0e109e",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"The integration lives in the `langchain-cohere` package. We can install these with:\n",
"\n",
"```bash\n",
"pip install -U langchain-cohere\n",
"```\n",
"\n",
"We'll also need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "2108b517-1e8d-473d-92fa-4f930e8072a7",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"COHERE_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "cf690fbb",
"metadata": {},
"source": [
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7f11de02",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "4c26754b-b3c9-4d93-8f36-43049bd943bf",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"ChatCohere supports all [ChatModel](/docs/how_to#chat-models) functionality:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_cohere import ChatCohere\n",
"from langchain_core.messages import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat = ChatCohere()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"outputs": [
"cells": [
{
"data": {
"text/plain": [
"AIMessage(content='4 && 5 \\n6 || 7 \\n\\nWould you like to play a game of odds and evens?', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, id='run-3475e0c8-c89b-4937-9300-e07d652455e1-0')"
"cell_type": "raw",
"id": "53fbf15f",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Cohere\n",
"---"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [HumanMessage(content=\"1\"), HumanMessage(content=\"2 3\")]\n",
"chat.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
},
{
"data": {
"text/plain": [
"AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-1635e63e-2994-4e7f-986e-152ddfc95777-0')"
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"source": [
"# Cohere\n",
"\n",
"This notebook covers how to get started with [Cohere chat models](https://cohere.com/chat).\n",
"\n",
"Head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.cohere.ChatCohere.html) for detailed documentation of all attributes and methods."
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.ainvoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
},
"outputs": [
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"4 && 5"
]
}
],
"source": [
"for chunk in chat.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "064288e4-f184-4496-9427-bcf148fa055e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-8d6fade2-1b39-4e31-ab23-4be622dd0027-0')]"
"cell_type": "markdown",
"id": "3607d67e-e56c-4102-bbba-df2edc0e109e",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"The integration lives in the `langchain-cohere` package. We can install these with:\n",
"\n",
"```bash\n",
"pip install -U langchain-cohere\n",
"```\n",
"\n",
"We'll also need to get a [Cohere API key](https://cohere.com/) and set the `COHERE_API_KEY` environment variable:"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat.batch([messages])"
]
},
{
"cell_type": "markdown",
"id": "f1c56460",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts/lcel)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "0851b103",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | chat"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "ae950c0f-1691-47f1-b609-273033cae707",
"metadata": {},
"outputs": [
},
{
"data": {
"text/plain": [
"AIMessage(content='What color socks do bears wear?\\n\\nThey dont wear socks, they have bear feet. \\n\\nHope you laughed! If not, maybe this will help: laughter is the best medicine, and a good sense of humor is infectious!', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, id='run-ef7f9789-0d4d-43bf-a4f7-f2a0e27a5320-0')"
"cell_type": "code",
"execution_count": 11,
"id": "2108b517-1e8d-473d-92fa-4f930e8072a7",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"COHERE_API_KEY\"] = getpass.getpass()"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "12db8d69",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"Cohere supports tool calling functionalities!"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "337e24af",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import (\n",
" HumanMessage,\n",
" ToolMessage,\n",
")\n",
"from langchain_core.tools import tool"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "74d292e7",
"metadata": {},
"outputs": [],
"source": [
"@tool\n",
"def magic_function(number: int) -> int:\n",
" \"\"\"Applies a magic operation to an integer\n",
" Args:\n",
" number: Number to have magic operation performed on\n",
" \"\"\"\n",
" return number + 10\n",
"\n",
"\n",
"def invoke_tools(tool_calls, messages):\n",
" for tool_call in tool_calls:\n",
" selected_tool = {\"magic_function\": magic_function}[tool_call[\"name\"].lower()]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
" return messages\n",
"\n",
"\n",
"tools = [magic_function]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ecafcbc6",
"metadata": {},
"outputs": [],
"source": [
"llm_with_tools = chat.bind_tools(tools=tools)\n",
"messages = [HumanMessage(content=\"What is the value of magic_function(2)?\")]"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "aa34fc39",
"metadata": {},
"outputs": [
},
{
"data": {
"text/plain": [
"AIMessage(content='The value of magic_function(2) is 12.', additional_kwargs={'documents': [{'id': 'magic_function:0:2:0', 'output': '12', 'tool_name': 'magic_function'}], 'citations': [ChatCitation(start=34, end=36, text='12', document_ids=['magic_function:0:2:0'])], 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '96a55791-0c58-4e2e-bc2a-8550e137c46d', 'token_count': {'input_tokens': 998, 'output_tokens': 59}}, response_metadata={'documents': [{'id': 'magic_function:0:2:0', 'output': '12', 'tool_name': 'magic_function'}], 'citations': [ChatCitation(start=34, end=36, text='12', document_ids=['magic_function:0:2:0'])], 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '96a55791-0c58-4e2e-bc2a-8550e137c46d', 'token_count': {'input_tokens': 998, 'output_tokens': 59}}, id='run-f318a9cf-55c8-44f4-91d1-27cf46c6a465-0')"
"cell_type": "markdown",
"id": "cf690fbb",
"metadata": {},
"source": [
"It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3c2fc2201dc80557",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "31f2af10e04dec59",
"metadata": {},
"source": [
"## Usage\n",
"\n",
"ChatCohere supports all [ChatModel](/docs/how_to#chat-models) functionality:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fa83b00a929614ad",
"metadata": {},
"outputs": [],
"source": [
"from langchain_cohere import ChatCohere\n",
"from langchain_core.messages import HumanMessage"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"chat = ChatCohere()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='4 && 5 \\n6 || 7 \\n\\nWould you like to play a game of odds and evens?', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '2076b614-52b3-4082-a259-cc92cd3d9fea', 'token_count': {'prompt_tokens': 68, 'response_tokens': 23, 'total_tokens': 91, 'billed_tokens': 77}}, id='run-3475e0c8-c89b-4937-9300-e07d652455e1-0')"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [HumanMessage(content=\"1\"), HumanMessage(content=\"2 3\")]\n",
"chat.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': 'f0708a92-f874-46ee-9b93-334d616ad92e', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-1635e63e-2994-4e7f-986e-152ddfc95777-0')"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"await chat.ainvoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"4 && 5"
]
}
],
"source": [
"for chunk in chat.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "064288e4-f184-4496-9427-bcf148fa055e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[AIMessage(content='4 && 5', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6770ca86-f6c3-4ba3-a285-c4772160612f', 'token_count': {'prompt_tokens': 68, 'response_tokens': 3, 'total_tokens': 71, 'billed_tokens': 57}}, id='run-8d6fade2-1b39-4e31-ab23-4be622dd0027-0')]"
]
},
"execution_count": 18,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat.batch([messages])"
]
},
{
"cell_type": "markdown",
"id": "f1c56460",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts/lcel)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "0851b103",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a joke about {topic}\")\n",
"chain = prompt | chat"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "ae950c0f-1691-47f1-b609-273033cae707",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='What color socks do bears wear?\\n\\nThey dont wear socks, they have bear feet. \\n\\nHope you laughed! If not, maybe this will help: laughter is the best medicine, and a good sense of humor is infectious!', additional_kwargs={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, response_metadata={'documents': None, 'citations': None, 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '6edccf44-9bc8-4139-b30e-13b368f3563c', 'token_count': {'prompt_tokens': 68, 'response_tokens': 51, 'total_tokens': 119, 'billed_tokens': 108}}, id='run-ef7f9789-0d4d-43bf-a4f7-f2a0e27a5320-0')"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"topic\": \"bears\"})"
]
},
{
"cell_type": "markdown",
"id": "12db8d69",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"Cohere supports tool calling functionalities!"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "337e24af",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.messages import (\n",
" HumanMessage,\n",
" ToolMessage,\n",
")\n",
"from langchain_core.tools import tool"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "74d292e7",
"metadata": {},
"outputs": [],
"source": [
"@tool\n",
"def magic_function(number: int) -> int:\n",
" \"\"\"Applies a magic operation to an integer\n",
" Args:\n",
" number: Number to have magic operation performed on\n",
" \"\"\"\n",
" return number + 10\n",
"\n",
"\n",
"def invoke_tools(tool_calls, messages):\n",
" for tool_call in tool_calls:\n",
" selected_tool = {\"magic_function\": magic_function}[tool_call[\"name\"].lower()]\n",
" tool_output = selected_tool.invoke(tool_call[\"args\"])\n",
" messages.append(ToolMessage(tool_output, tool_call_id=tool_call[\"id\"]))\n",
" return messages\n",
"\n",
"\n",
"tools = [magic_function]"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ecafcbc6",
"metadata": {},
"outputs": [],
"source": [
"llm_with_tools = chat.bind_tools(tools=tools)\n",
"messages = [HumanMessage(content=\"What is the value of magic_function(2)?\")]"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "aa34fc39",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The value of magic_function(2) is 12.', additional_kwargs={'documents': [{'id': 'magic_function:0:2:0', 'output': '12', 'tool_name': 'magic_function'}], 'citations': [ChatCitation(start=34, end=36, text='12', document_ids=['magic_function:0:2:0'])], 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '96a55791-0c58-4e2e-bc2a-8550e137c46d', 'token_count': {'input_tokens': 998, 'output_tokens': 59}}, response_metadata={'documents': [{'id': 'magic_function:0:2:0', 'output': '12', 'tool_name': 'magic_function'}], 'citations': [ChatCitation(start=34, end=36, text='12', document_ids=['magic_function:0:2:0'])], 'search_results': None, 'search_queries': None, 'is_search_required': None, 'generation_id': '96a55791-0c58-4e2e-bc2a-8550e137c46d', 'token_count': {'input_tokens': 998, 'output_tokens': 59}}, id='run-f318a9cf-55c8-44f4-91d1-27cf46c6a465-0')"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"res = llm_with_tools.invoke(messages)\n",
"while res.tool_calls:\n",
" messages.append(res)\n",
" messages = invoke_tools(res.tool_calls, messages)\n",
" res = llm_with_tools.invoke(messages)\n",
"\n",
"res"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
}
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.6"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}
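For reference, the tool-calling cells in the Cohere notebook above reduce to a short agent loop: bind the tools, invoke, execute any requested tool calls, and re-invoke until the model stops asking for tools. A minimal script-form sketch of that same loop (assuming `COHERE_API_KEY` is set in the environment):

```python
# Condensed sketch of the tool-calling loop from the notebook above.
# Assumes COHERE_API_KEY is set in the environment.
from langchain_cohere import ChatCohere
from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool


@tool
def magic_function(number: int) -> int:
    """Applies a magic operation to an integer."""
    return number + 10


chat = ChatCohere()
llm_with_tools = chat.bind_tools(tools=[magic_function])
messages = [HumanMessage(content="What is the value of magic_function(2)?")]

res = llm_with_tools.invoke(messages)
while res.tool_calls:  # loop until the model stops requesting tool calls
    messages.append(res)
    for tool_call in res.tool_calls:
        tool_output = magic_function.invoke(tool_call["args"])
        messages.append(ToolMessage(tool_output, tool_call_id=tool_call["id"]))
    res = llm_with_tools.invoke(messages)

print(res.content)
```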

View File

@@ -1,237 +1,235 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: DeepSeek\n",
"---"
]
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: DeepSeek\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatDeepSeek\n",
"\n",
"\n",
"This will help you getting started with DeepSeek's hosted [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatDeepSeek features and configurations head to the [API reference](https://python.langchain.com/api_reference/deepseek/chat_models/langchain_deepseek.chat_models.ChatDeepSeek.html).\n",
"\n",
":::tip\n",
"\n",
"DeepSeek's models are open source and can be run locally (e.g. in [Ollama](./ollama.ipynb)) or on other inference providers (e.g. [Fireworks](./fireworks.ipynb), [Together](./together.ipynb)) as well.\n",
"\n",
":::\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/deepseek) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatDeepSeek](https://python.langchain.com/api_reference/deepseek/chat_models/langchain_deepseek.chat_models.ChatDeepSeek.html) | [langchain-deepseek](https://python.langchain.com/api_reference/deepseek/) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-deepseek?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-deepseek?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
":::note\n",
"\n",
"DeepSeek-R1, specified via `model=\"deepseek-reasoner\"`, does not support tool calling or structured output. Those features [are supported](https://api-docs.deepseek.com/guides/function_calling) by DeepSeek-V3 (specified via `model=\"deepseek-chat\"`).\n",
"\n",
":::\n",
"\n",
"## Setup\n",
"\n",
"To access DeepSeek models you'll need to create a/an DeepSeek account, get an API key, and install the `langchain-deepseek` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [DeepSeek's API Key page](https://platform.deepseek.com/api_keys) to sign up to DeepSeek and generate an API key. Once you've done this set the `DEEPSEEK_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"DEEPSEEK_API_KEY\"):\n",
" os.environ[\"DEEPSEEK_API_KEY\"] = getpass.getpass(\"Enter your DeepSeek API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain DeepSeek integration lives in the `langchain-deepseek` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-deepseek"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_deepseek import ChatDeepSeek\n",
"\n",
"llm = ChatDeepSeek(\n",
" model=\"deepseek-chat\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg.content"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatDeepSeek features and configurations head to the [API Reference](https://python.langchain.com/api_reference/deepseek/chat_models/langchain_deepseek.chat_models.ChatDeepSeek.html)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}
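Since the note in the DeepSeek notebook above says `deepseek-reasoner` supports neither tool calling nor structured output, the model has to be chosen to match the feature you need. A minimal sketch, assuming a hypothetical `Joke` schema that is not from the notebook:

```python
# Minimal sketch: pick the DeepSeek model to match the feature you need.
# The Joke schema is a hypothetical example, not from the notebook above.
from langchain_deepseek import ChatDeepSeek
from pydantic import BaseModel


class Joke(BaseModel):
    setup: str
    punchline: str


# DeepSeek-V3 ("deepseek-chat") supports tool calling and structured output.
llm = ChatDeepSeek(model="deepseek-chat", temperature=0)
structured_llm = llm.with_structured_output(Joke)
joke = structured_llm.invoke("Tell me a joke about programming")

# DeepSeek-R1 ("deepseek-reasoner") does not; use it for plain chat calls only.
reasoner = ChatDeepSeek(model="deepseek-reasoner")
answer = reasoner.invoke("Why is the sky blue?")
```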

View File

@@ -1,268 +1,266 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Fireworks\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatFireworks\n",
"\n",
"This doc help you get started with Fireworks AI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatFireworks features and configurations head to the [API reference](https://python.langchain.com/api_reference/fireworks/chat_models/langchain_fireworks.chat_models.ChatFireworks.html).\n",
"\n",
"Fireworks AI is an AI inference platform to run and customize models. For a list of all models served by Fireworks see the [Fireworks docs](https://fireworks.ai/models).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/fireworks) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatFireworks](https://python.langchain.com/api_reference/fireworks/chat_models/langchain_fireworks.chat_models.ChatFireworks.html) | [langchain-fireworks](https://python.langchain.com/api_reference/fireworks/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-fireworks?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-fireworks?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"To access Fireworks models you'll need to create a Fireworks account, get an API key, and install the `langchain-fireworks` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to (ttps://fireworks.ai/login to sign up to Fireworks and generate an API key. Once you've done this set the FIREWORKS_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"FIREWORKS_API_KEY\" not in os.environ:\n",
" os.environ[\"FIREWORKS_API_KEY\"] = getpass.getpass(\"Enter your Fireworks API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Fireworks integration lives in the `langchain-fireworks` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-fireworks"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n",
"\n",
"- TODO: Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_fireworks import ChatFireworks\n",
"\n",
"llm = ChatFireworks(\n",
" model=\"accounts/fireworks/models/llama-v3-70b-instruct\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
"cells": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'prompt_tokens': 35, 'total_tokens': 44, 'completion_tokens': 9}, 'model_name': 'accounts/fireworks/models/llama-v3-70b-instruct', 'system_fingerprint': '', 'finish_reason': 'stop', 'logprobs': None}, id='run-df28e69a-ff30-457e-a743-06eb14d01cb0-0', usage_metadata={'input_tokens': 35, 'output_tokens': 9, 'total_tokens': 44})"
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Fireworks\n",
"---"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'prompt_tokens': 30, 'total_tokens': 37, 'completion_tokens': 7}, 'model_name': 'accounts/fireworks/models/llama-v3-70b-instruct', 'system_fingerprint': '', 'finish_reason': 'stop', 'logprobs': None}, id='run-ff3f91ad-ed81-4acf-9f59-7490dc8d8f48-0', usage_metadata={'input_tokens': 30, 'output_tokens': 7, 'total_tokens': 37})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatFireworks features and configurations head to the API reference: https://python.langchain.com/api_reference/fireworks/chat_models/langchain_fireworks.chat_models.ChatFireworks.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}
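The feature table in the Fireworks notebook above lists token-level streaming as supported; a quick sketch, assuming `llm` and `messages` are the objects defined in the cells above (the pattern mirrors the streaming example in the Cohere notebook earlier):

```python
# Stream tokens as they arrive instead of waiting for the full completion.
# Assumes llm and messages come from the Fireworks cells above.
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)
```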

View File

@@ -1,354 +1,352 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Goodfire\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatGoodfire\n",
"\n",
"This will help you getting started with Goodfire [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatGoodfire features and configurations head to the [PyPI project page](https://pypi.org/project/langchain-goodfire/), or go directly to the [Goodfire SDK docs](https://docs.goodfire.ai/sdk-reference/example). All of the Goodfire-specific functionality (e.g. SAE features, variants, etc.) is available via the main `goodfire` package. This integration is a wrapper around the Goodfire SDK.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatGoodfire](https://python.langchain.com/api_reference/goodfire/chat_models/langchain_goodfire.chat_models.ChatGoodfire.html) | [langchain-goodfire](https://python.langchain.com/api_reference/goodfire/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-goodfire?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-goodfire?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access Goodfire models you'll need to create a/an Goodfire account, get an API key, and install the `langchain-goodfire` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [Goodfire Settings](https://platform.goodfire.ai/organization/settings/api-keys) to sign up to Goodfire and generate an API key. Once you've done this set the GOODFIRE_API_KEY environment variable."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"GOODFIRE_API_KEY\"):\n",
" os.environ[\"GOODFIRE_API_KEY\"] = getpass.getpass(\"Enter your Goodfire API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Goodfire integration lives in the `langchain-goodfire` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
"cells": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-goodfire"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.\n"
]
}
],
"source": [
"import goodfire\n",
"from langchain_goodfire import ChatGoodfire\n",
"\n",
"base_variant = goodfire.Variant(\"meta-llama/Llama-3.3-70B-Instruct\")\n",
"\n",
"llm = ChatGoodfire(\n",
" model=base_variant,\n",
" temperature=0,\n",
" max_completion_tokens=1000,\n",
" seed=42,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", additional_kwargs={}, response_metadata={}, id='run-8d43cf35-bce8-4827-8935-c64f8fb78cd0-0', usage_metadata={'input_tokens': 51, 'output_tokens': 39, 'total_tokens': 90})"
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Goodfire\n",
"---"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = await llm.ainvoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren. How can I help you with programming today?', additional_kwargs={}, response_metadata={}, id='run-03d1a585-8234-46f1-a8df-bf9143fe3309-0', usage_metadata={'input_tokens': 46, 'output_tokens': 46, 'total_tokens': 92})"
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatGoodfire\n",
"\n",
"This will help you getting started with Goodfire [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatGoodfire features and configurations head to the [PyPI project page](https://pypi.org/project/langchain-goodfire/), or go directly to the [Goodfire SDK docs](https://docs.goodfire.ai/sdk-reference/example). All of the Goodfire-specific functionality (e.g. SAE features, variants, etc.) is available via the main `goodfire` package. This integration is a wrapper around the Goodfire SDK.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatGoodfire](https://python.langchain.com/api_reference/goodfire/chat_models/langchain_goodfire.chat_models.ChatGoodfire.html) | [langchain-goodfire](https://python.langchain.com/api_reference/goodfire/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-goodfire?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-goodfire?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access Goodfire models you'll need to create a/an Goodfire account, get an API key, and install the `langchain-goodfire` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [Goodfire Settings](https://platform.goodfire.ai/organization/settings/api-keys) to sign up to Goodfire and generate an API key. Once you've done this set the GOODFIRE_API_KEY environment variable."
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"await chain.ainvoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Goodfire-specific functionality\n",
"\n",
"To use Goodfire-specific functionality such as SAE features and variants, you can use the `goodfire` package directly."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3aef9e0a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"FeatureGroup([\n",
" 0: \"The assistant should adopt the persona of a pirate\",\n",
" 1: \"The assistant should roleplay as a pirate\",\n",
" 2: \"The assistant should engage with pirate-themed content or roleplay as a pirate\",\n",
" 3: \"The assistant should roleplay as a character\",\n",
" 4: \"The assistant should roleplay as a specific character\",\n",
" 5: \"The assistant should roleplay as a game character or NPC\",\n",
" 6: \"The assistant should roleplay as a human character\",\n",
" 7: \"Requests for the assistant to roleplay or pretend to be something else\",\n",
" 8: \"Requests for the assistant to roleplay or pretend to be something\",\n",
" 9: \"The assistant is being assigned a role or persona to roleplay\"\n",
"])"
"cell_type": "code",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"GOODFIRE_API_KEY\"):\n",
" os.environ[\"GOODFIRE_API_KEY\"] = getpass.getpass(\"Enter your Goodfire API key: \")"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"client = goodfire.Client(api_key=os.environ[\"GOODFIRE_API_KEY\"])\n",
"\n",
"pirate_features = client.features.search(\n",
" \"assistant should roleplay as a pirate\", base_variant\n",
")\n",
"pirate_features"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "52f03a00",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field! Arrr! Hope that made ye laugh, matey!', additional_kwargs={}, response_metadata={}, id='run-7d8bd30f-7f80-41cb-bdb6-25c29c22a7ce-0', usage_metadata={'input_tokens': 35, 'output_tokens': 60, 'total_tokens': 95})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pirate_variant = goodfire.Variant(\"meta-llama/Llama-3.3-70B-Instruct\")\n",
"\n",
"pirate_variant.set(pirate_features[0], 0.4)\n",
"pirate_variant.set(pirate_features[1], 0.3)\n",
"\n",
"await llm.ainvoke(\"Tell me a joke\", model=pirate_variant)"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatGoodfire features and configurations head to the [API reference](https://python.langchain.com/api_reference/goodfire/chat_models/langchain_goodfire.chat_models.ChatGoodfire.html)"
]
}
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pirate_variant = goodfire.Variant(\"meta-llama/Llama-3.3-70B-Instruct\")\n",
"\n",
"pirate_variant.set(pirate_features[0], 0.4)\n",
"pirate_variant.set(pirate_features[1], 0.3)\n",
"\n",
"await llm.ainvoke(\"Tell me a joke\", model=pirate_variant)"
]
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.8"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}
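The Goodfire cells above rely on notebook-style top-level `await`; outside a notebook the same feature-steering flow needs an event loop. A minimal script-form sketch, assuming `GOODFIRE_API_KEY` is set in the environment:

```python
# Script-form sketch of the feature-steering flow from the notebook above.
# Assumes GOODFIRE_API_KEY is set in the environment.
import asyncio
import os

import goodfire
from langchain_goodfire import ChatGoodfire


async def main() -> None:
    base_variant = goodfire.Variant("meta-llama/Llama-3.3-70B-Instruct")
    llm = ChatGoodfire(
        model=base_variant, temperature=0, max_completion_tokens=1000, seed=42
    )

    client = goodfire.Client(api_key=os.environ["GOODFIRE_API_KEY"])
    pirate_features = client.features.search(
        "assistant should roleplay as a pirate", base_variant
    )

    # Steer a fresh variant toward the two strongest pirate features.
    pirate_variant = goodfire.Variant("meta-llama/Llama-3.3-70B-Instruct")
    pirate_variant.set(pirate_features[0], 0.4)
    pirate_variant.set(pirate_features[1], 0.3)

    response = await llm.ainvoke("Tell me a joke", model=pirate_variant)
    print(response.content)


asyncio.run(main())
```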

File diff suppressed because one or more lines are too long

View File

@@ -1,269 +1,269 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google Cloud Vertex AI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatVertexAI\n",
"\n",
"This page provides a quick overview for getting started with VertexAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatVertexAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_vertexai/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html).\n",
"\n",
"ChatVertexAI exposes all foundational models available in Google Cloud, like `gemini-1.5-pro`, `gemini-1.5-flash`, etc. For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview).\n",
"\n",
":::info Google Cloud VertexAI vs Google PaLM\n",
"\n",
"The Google Cloud VertexAI integration is separate from the [Google PaLM integration](/docs/integrations/chat/google_generative_ai/). Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"\n",
":::\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/google_vertex_ai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatVertexAI](https://python.langchain.com/api_reference/google_vertexai/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html) | [langchain-google-vertexai](https://python.langchain.com/api_reference/google_vertexai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-google-vertexai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-vertexai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access VertexAI models you'll need to create a Google Cloud Platform account, set up credentials, and install the `langchain-google-vertexai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"To use the integration you must:\n",
"- Have credentials configured for your environment (gcloud, workload identity, etc...)\n",
"- Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable\n",
"\n",
"This codebase uses the `google.auth` library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.\n",
"\n",
"For more information, see: \n",
"- https://cloud.google.com/docs/authentication/application-default-credentials#GAC\n",
"- https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth\n",
"\n",
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain VertexAI integration lives in the `langchain-google-vertexai` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
"cells": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"llm = ChatVertexAI(\n",
" model=\"gemini-1.5-flash-001\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" max_retries=6,\n",
" stop=None,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer. \\n\", response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 20, 'candidates_token_count': 7, 'total_token_count': 27}}, id='run-7032733c-d05c-4f0c-a17a-6c575fdd1ae0-0', usage_metadata={'input_tokens': 20, 'output_tokens': 7, 'total_tokens': 27})"
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google Cloud Vertex AI\n",
"---"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer. \n",
"\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren. \\n', response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 15, 'candidates_token_count': 8, 'total_token_count': 23}}, id='run-c71955fd-8dc1-422b-88a7-853accf4811b-0', usage_metadata={'input_tokens': 15, 'output_tokens': 8, 'total_tokens': 23})"
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatVertexAI\n",
"\n",
"This page provides a quick overview for getting started with VertexAI [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatVertexAI features and configurations head to the [API reference](https://python.langchain.com/api_reference/google_vertexai/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html).\n",
"\n",
"ChatVertexAI exposes all foundational models available in Google Cloud, like `gemini-1.5-pro`, `gemini-1.5-flash`, etc. For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview).\n",
"\n",
":::info Google Cloud VertexAI vs Google PaLM\n",
"\n",
"The Google Cloud VertexAI integration is separate from the [Google PaLM integration](/docs/integrations/chat/google_generative_ai/). Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there.\n",
"\n",
":::\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/google_vertex_ai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatVertexAI](https://python.langchain.com/api_reference/google_vertexai/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html) | [langchain-google-vertexai](https://python.langchain.com/api_reference/google_vertexai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-google-vertexai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-vertexai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access VertexAI models you'll need to create a Google Cloud Platform account, set up credentials, and install the `langchain-google-vertexai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"To use the integration you must:\n",
"- Have credentials configured for your environment (gcloud, workload identity, etc...)\n",
"- Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable\n",
"\n",
"This codebase uses the `google.auth` library which first looks for the application credentials variable mentioned above, and then looks for system-level auth.\n",
"\n",
"For more information, see:\n",
"- https://cloud.google.com/docs/authentication/application-default-credentials#GAC\n",
"- https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth\n",
"\n",
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain VertexAI integration lives in the `langchain-google-vertexai` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"llm = ChatVertexAI(\n",
" model=\"gemini-1.5-flash-001\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" max_retries=6,\n",
" stop=None,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer. \\n\", response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 20, 'candidates_token_count': 7, 'total_token_count': 27}}, id='run-7032733c-d05c-4f0c-a17a-6c575fdd1ae0-0', usage_metadata={'input_tokens': 20, 'output_tokens': 7, 'total_tokens': 27})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer. \n",
"\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren. \\n', response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 15, 'candidates_token_count': 8, 'total_token_count': 23}}, id='run-c71955fd-8dc1-422b-88a7-853accf4811b-0', usage_metadata={'input_tokens': 15, 'output_tokens': 8, 'total_tokens': 23})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatVertexAI features and configurations, like how to send multimodal inputs and configure safety settings, head to the API reference: https://python.langchain.com/api_reference/google_vertexai/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatVertexAI features and configurations, like how to send multimodal inputs and configure safety settings, head to the API reference: https://python.langchain.com/api_reference/google_vertexai/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv-2",
"language": "python",
"name": "poetry-venv-2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
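
Condensed from the notebook above, the end-to-end usage is only a few lines. A minimal sketch, assuming `langchain-google-vertexai` is installed and Google Cloud credentials are already configured (via `GOOGLE_APPLICATION_CREDENTIALS` or system-level auth, as the Setup section describes):

```python
from langchain_google_vertexai import ChatVertexAI

# google.auth resolves credentials: the GOOGLE_APPLICATION_CREDENTIALS
# variable first, then system-level auth (gcloud, workload identity, ...).
llm = ChatVertexAI(
    model="gemini-1.5-flash-001",
    temperature=0,
)

ai_msg = llm.invoke(
    [
        ("system", "You are a helpful assistant that translates English to French."),
        ("human", "I love programming."),
    ]
)
print(ai_msg.content)  # J'adore programmer.
```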

View File

@@ -1,266 +1,264 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Groq\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatGroq\n",
"\n",
"This will help you getting started with Groq [chat models](../../concepts/chat_models.mdx). For detailed documentation of all ChatGroq features and configurations head to the [API reference](https://python.langchain.com/api_reference/groq/chat_models/langchain_groq.chat_models.ChatGroq.html). For a list of all Groq models, visit this [link](https://console.groq.com/docs/models?utm_source=langchain).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/groq) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatGroq](https://python.langchain.com/api_reference/groq/chat_models/langchain_groq.chat_models.ChatGroq.html) | [langchain-groq](https://python.langchain.com/api_reference/groq/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-groq?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-groq?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"To access Groq models you'll need to create a Groq account, get an API key, and install the `langchain-groq` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to the [Groq console](https://console.groq.com/login?utm_source=langchain&utm_content=chat_page) to sign up to Groq and generate an API key. Once you've done this set the GROQ_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"GROQ_API_KEY\" not in os.environ:\n",
" os.environ[\"GROQ_API_KEY\"] = getpass.getpass(\"Enter your Groq API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Groq integration lives in the `langchain-groq` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f3f510e-2afe-4e76-be41-c5a9665aea63",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-groq"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_groq import ChatGroq\n",
"\n",
"llm = ChatGroq(\n",
" model=\"llama-3.1-8b-instant\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
"cells": [
{
"data": {
"text/plain": [
"AIMessage(content='The translation of \"I love programming\" to French is:\\n\\n\"J\\'adore le programmation.\"', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 55, 'total_tokens': 77, 'completion_time': 0.029333333, 'prompt_time': 0.003502892, 'queue_time': 0.553054073, 'total_time': 0.032836225}, 'model_name': 'llama-3.1-8b-instant', 'system_fingerprint': 'fp_a491995411', 'finish_reason': 'stop', 'logprobs': None}, id='run-2b2da04a-993c-40ab-becc-201eab8b1a1b-0', usage_metadata={'input_tokens': 55, 'output_tokens': 22, 'total_tokens': 77})"
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Groq\n",
"---"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The translation of \"I love programming\" to French is:\n",
"\n",
"\"J'adore le programmation.\"\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 50, 'total_tokens': 56, 'completion_time': 0.008, 'prompt_time': 0.003337935, 'queue_time': 0.20949214500000002, 'total_time': 0.011337935}, 'model_name': 'llama-3.1-8b-instant', 'system_fingerprint': 'fp_a491995411', 'finish_reason': 'stop', 'logprobs': None}, id='run-e33b48dc-5e55-466e-9ebd-7b48c81c3cbd-0', usage_metadata={'input_tokens': 50, 'output_tokens': 6, 'total_tokens': 56})"
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatGroq\n",
"\n",
"This will help you getting started with Groq [chat models](../../concepts/chat_models.mdx). For detailed documentation of all ChatGroq features and configurations head to the [API reference](https://python.langchain.com/api_reference/groq/chat_models/langchain_groq.chat_models.ChatGroq.html). For a list of all Groq models, visit this [link](https://console.groq.com/docs/models?utm_source=langchain).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/groq) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatGroq](https://python.langchain.com/api_reference/groq/chat_models/langchain_groq.chat_models.ChatGroq.html) | [langchain-groq](https://python.langchain.com/api_reference/groq/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-groq?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-groq?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |\n",
"\n",
"## Setup\n",
"\n",
"To access Groq models you'll need to create a Groq account, get an API key, and install the `langchain-groq` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to the [Groq console](https://console.groq.com/login?utm_source=langchain&utm_content=chat_page) to sign up to Groq and generate an API key. Once you've done this set the GROQ_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"GROQ_API_KEY\" not in os.environ:\n",
" os.environ[\"GROQ_API_KEY\"] = getpass.getpass(\"Enter your Groq API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Groq integration lives in the `langchain-groq` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3f3f510e-2afe-4e76-be41-c5a9665aea63",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-groq"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_groq import ChatGroq\n",
"\n",
"llm = ChatGroq(\n",
" model=\"llama-3.1-8b-instant\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='The translation of \"I love programming\" to French is:\\n\\n\"J\\'adore le programmation.\"', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 22, 'prompt_tokens': 55, 'total_tokens': 77, 'completion_time': 0.029333333, 'prompt_time': 0.003502892, 'queue_time': 0.553054073, 'total_time': 0.032836225}, 'model_name': 'llama-3.1-8b-instant', 'system_fingerprint': 'fp_a491995411', 'finish_reason': 'stop', 'logprobs': None}, id='run-2b2da04a-993c-40ab-becc-201eab8b1a1b-0', usage_metadata={'input_tokens': 55, 'output_tokens': 22, 'total_tokens': 77})"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The translation of \"I love programming\" to French is:\n",
"\n",
"\"J'adore le programmation.\"\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmieren.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 50, 'total_tokens': 56, 'completion_time': 0.008, 'prompt_time': 0.003337935, 'queue_time': 0.20949214500000002, 'total_time': 0.011337935}, 'model_name': 'llama-3.1-8b-instant', 'system_fingerprint': 'fp_a491995411', 'finish_reason': 'stop', 'logprobs': None}, id='run-e33b48dc-5e55-466e-9ebd-7b48c81c3cbd-0', usage_metadata={'input_tokens': 50, 'output_tokens': 6, 'total_tokens': 56})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatGroq features and configurations head to the API reference: https://python.langchain.com/api_reference/groq/chat_models/langchain_groq.chat_models.ChatGroq.html"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatGroq features and configurations head to the API reference: https://python.langchain.com/api_reference/groq/chat_models/langchain_groq.chat_models.ChatGroq.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
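
The chaining cell is the pattern worth remembering here: a `ChatPromptTemplate` piped into the model with `|` yields a runnable whose `invoke` fills the template variables and calls Groq in one step. A minimal sketch, assuming `langchain-groq` is installed and `GROQ_API_KEY` is set:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama-3.1-8b-instant", temperature=0)

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

# prompt | llm composes a runnable: invoke() formats the messages, then calls the model.
chain = prompt | llm
result = chain.invoke(
    {"input_language": "English", "output_language": "German", "input": "I love programming."}
)
print(result.content)  # Ich liebe Programmieren.
```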

View File

@@ -3,7 +3,9 @@
{
"cell_type": "raw",
"id": "59148044",
"metadata": {},
"metadata": {
"id": "59148044"
},
"source": [
"---\n",
"sidebar_label: LiteLLM\n",
@@ -11,120 +13,139 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bf733a38-db84-4363-89e2-de6735c37230",
"metadata": {},
"id": "5bcea387",
"metadata": {
"id": "5bcea387"
},
"source": [
"# ChatLiteLLM\n",
"\n",
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc. \n",
"[LiteLLM](https://github.com/BerriAI/litellm) is a library that simplifies calling Anthropic, Azure, Huggingface, Replicate, etc.\n",
"\n",
"This notebook covers how to get started with using Langchain + the LiteLLM I/O library. "
"This notebook covers how to get started with using Langchain + the LiteLLM I/O library.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support| Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatLiteLLM](https://python.langchain.com/docs/integrations/chat/litellm/) | [langchain-litellm](https://pypi.org/project/langchain-litellm/)| ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-litellm?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-litellm?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](https://python.langchain.com/docs/how_to/tool_calling/) | [Structured output](https://python.langchain.com/docs/how_to/structured_output/) | JSON mode | Image input | Audio input | Video input | [Token-level streaming](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm-also-supports-async-and-streaming-functionality) | [Native async](https://python.langchain.com/docs/integrations/chat/litellm/#chatlitellm-also-supports-async-and-streaming-functionality) | [Token usage](https://python.langchain.com/docs/how_to/chat_token_usage_tracking/) | [Logprobs](https://python.langchain.com/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"### Setup\n",
"To access ChatLiteLLM models you'll need to install the `langchain-litellm` package and create an OpenAI, Anthropic, Azure, Replicate, OpenRouter, Hugging Face, Together AI or Cohere account. Then you have to get an API key, and export it as an environment variable."
]
},
{
"cell_type": "markdown",
"id": "0a2f8164",
"metadata": {
"id": "0a2f8164"
},
"source": [
"## Credentials\n",
"\n",
"You have to choose the LLM provider you want and sign up with them to get their API key.\n",
"\n",
"### Example - Anthropic\n",
"Head to https://console.anthropic.com/ to sign up for Anthropic and generate an API key. Once you've done this set the ANTHROPIC_API_KEY environment variable.\n",
"\n",
"\n",
"### Example - OpenAI\n",
"Head to https://platform.openai.com/api-keys to sign up for OpenAI and generate an API key. Once you've done this set the OPENAI_API_KEY environment variable."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"id": "7595eddf",
"metadata": {
"tags": []
"id": "7595eddf"
},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatLiteLLM\n",
"from langchain_core.messages import HumanMessage"
"## set ENV variables\n",
"import os\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"your-openai-key\"\n",
"os.environ[\"ANTHROPIC_API_KEY\"] = \"your-anthropic-key\""
]
},
{
"cell_type": "markdown",
"id": "74c3ad30",
"metadata": {
"id": "74c3ad30"
},
"source": [
"### Installation\n",
"\n",
"The LangChain LiteLLM integration lives in the `langchain-litellm` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "70cf04e8-423a-4ff6-8b09-f11fb711c817",
"id": "ca3f8a25",
"metadata": {
"tags": []
"id": "ca3f8a25"
},
"outputs": [],
"source": [
"chat = ChatLiteLLM(model=\"gpt-3.5-turbo\")"
"%pip install -qU langchain-litellm"
]
},
{
"cell_type": "markdown",
"id": "bc1182b4",
"metadata": {
"id": "bc1182b4"
},
"source": [
"## Instantiation\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\", additional_kwargs={}, example=False)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"outputs": [],
"source": [
"messages = [\n",
" HumanMessage(\n",
" content=\"Translate this sentence from English to French. I love programming.\"\n",
" )\n",
"]\n",
"chat(messages)"
"from langchain_litellm import ChatLiteLLM\n",
"\n",
"llm = ChatLiteLLM(model=\"gpt-3.5-turbo\")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
"metadata": {},
"id": "63d98454",
"metadata": {
"id": "63d98454"
},
"source": [
"## `ChatLiteLLM` also supports async and streaming functionality:"
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "025be980-e50d-4a68-93dc-c9c7b500ce34",
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "8199ef8f-eb8b-4253-9ea0-6c24a013ca4c",
"outputId": "a4c0e5f5-a859-43fa-dd78-74fc0922ecb2",
"tags": []
},
"outputs": [
@@ -132,41 +153,75 @@
"name": "stdout",
"output_type": "stream",
"text": [
" J'aime la programmation."
"content='Neutral' additional_kwargs={} response_metadata={'token_usage': Usage(completion_tokens=2, prompt_tokens=30, total_tokens=32, completion_tokens_details=CompletionTokensDetailsWrapper(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0, text_tokens=None), prompt_tokens_details=PromptTokensDetailsWrapper(audio_tokens=0, cached_tokens=0, text_tokens=None, image_tokens=None)), 'model': 'gpt-3.5-turbo', 'finish_reason': 'stop', 'model_name': 'gpt-3.5-turbo'} id='run-ab6a3b21-eae8-4c27-acb2-add65a38221a-0' usage_metadata={'input_tokens': 30, 'output_tokens': 2, 'total_tokens': 32}\n"
]
}
],
"source": [
"chat = ChatLiteLLM(\n",
" streaming=True,\n",
" verbose=True,\n",
" callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),\n",
"response = await llm.ainvoke(\n",
" \"Classify the text into neutral, negative or positive. Text: I think the food was okay. Sentiment:\"\n",
")\n",
"chat(messages)"
"print(response)"
]
},
{
"cell_type": "markdown",
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c",
"metadata": {
"id": "c361ab1e-8c0c-4206-9e3c-9d1424a12b9c"
},
"source": [
"## `ChatLiteLLM` also supports async and streaming functionality:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c253883f",
"metadata": {},
"outputs": [],
"source": []
"execution_count": 5,
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "c5fac0e9-05a4-4fc1-a3b3-e5bbb24b971b",
"outputId": "ee8cdda1-d992-4696-9ad0-aa146360a3ee",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Antibiotics are medications that fight bacterial infections in the body. They work by targeting specific bacteria and either killing them or preventing their growth and reproduction.\n",
"\n",
"There are several different mechanisms by which antibiotics work. Some antibiotics work by disrupting the cell walls of bacteria, causing them to burst and die. Others interfere with the protein synthesis of bacteria, preventing them from growing and reproducing. Some antibiotics target the DNA or RNA of bacteria, disrupting their ability to replicate.\n",
"\n",
"It is important to note that antibiotics only work against bacterial infections and not viral infections. It is also crucial to take antibiotics as prescribed by a healthcare professional and to complete the full course of treatment, even if symptoms improve before the medication is finished. This helps to prevent antibiotic resistance, where bacteria become resistant to the effects of antibiotics."
]
}
],
"source": [
"async for token in llm.astream(\"Hello, please explain how antibiotics work\"):\n",
" print(token.text(), end=\"\")"
]
},
{
"cell_type": "markdown",
"id": "88af2a9b",
"metadata": {
"id": "88af2a9b"
},
"source": [
"## API reference\n",
"For detailed documentation of all `ChatLiteLLM` features and configurations head to the API reference: https://github.com/Akshay-Dongare/langchain-litellm"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "g6_alda",
"language": "python",
"name": "python3"
},
@@ -180,7 +235,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.12.4"
}
},
"nbformat": 4,

View File

@@ -1,262 +1,260 @@
{
"cells": [
{
"cell_type": "raw",
"id": "53fbf15f",
"metadata": {},
"source": [
"---\n",
"sidebar_label: MistralAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "d295c2a2",
"metadata": {},
"source": [
"# ChatMistralAI\n",
"\n",
"This will help you getting started with Mistral [chat models](/docs/concepts/chat_models). For detailed documentation of all `ChatMistralAI` features and configurations head to the [API reference](https://python.langchain.com/api_reference/mistralai/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html). The `ChatMistralAI` class is built on top of the [Mistral API](https://docs.mistral.ai/api/). For a list of all the models supported by Mistral, check out [this page](https://docs.mistral.ai/getting-started/models/).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/mistral) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatMistralAI](https://python.langchain.com/api_reference/mistralai/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) | [langchain_mistralai](https://python.langchain.com/api_reference/mistralai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_mistralai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_mistralai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"\n",
"To access `ChatMistralAI` models you'll need to create a Mistral account, get an API key, and install the `langchain_mistralai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"A valid [API key](https://console.mistral.ai/api-keys/) is needed to communicate with the API. Once you've done this set the MISTRAL_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2461605e",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"MISTRAL_API_KEY\" not in os.environ:\n",
" os.environ[\"MISTRAL_API_KEY\"] = getpass.getpass(\"Enter your Mistral API key: \")"
]
},
{
"cell_type": "markdown",
"id": "788f37ac",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "007209d5",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0f5c74f9",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Mistral integration lives in the `langchain_mistralai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ab11a65",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_mistralai"
]
},
{
"cell_type": "markdown",
"id": "fb1a335e",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e6c38580",
"metadata": {},
"outputs": [],
"source": [
"from langchain_mistralai import ChatMistralAI\n",
"\n",
"llm = ChatMistralAI(\n",
" model=\"mistral-large-latest\",\n",
" temperature=0,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "aec79099",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8838c3cc",
"metadata": {},
"outputs": [
"cells": [
{
"data": {
"text/plain": [
"AIMessage(content='Sure, I\\'d be happy to help you translate that sentence into French! The English sentence \"I love programming\" translates to \"J\\'aime programmer\" in French. Let me know if you have any other questions or need further assistance!', response_metadata={'token_usage': {'prompt_tokens': 32, 'total_tokens': 84, 'completion_tokens': 52}, 'model': 'mistral-small', 'finish_reason': 'stop'}, id='run-64bac156-7160-4b68-b67e-4161f63e021f-0', usage_metadata={'input_tokens': 32, 'output_tokens': 52, 'total_tokens': 84})"
"cell_type": "raw",
"id": "53fbf15f",
"metadata": {},
"source": [
"---\n",
"sidebar_label: MistralAI\n",
"---"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bbf6a048",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, I'd be happy to help you translate that sentence into French! The English sentence \"I love programming\" translates to \"J'aime programmer\" in French. Let me know if you have any other questions or need further assistance!\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "32b87f87",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "24e2c51c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmierung. (German translation)', response_metadata={'token_usage': {'prompt_tokens': 26, 'total_tokens': 38, 'completion_tokens': 12}, 'model': 'mistral-small', 'finish_reason': 'stop'}, id='run-dfd4094f-e347-47b0-9056-8ebd7ea35fe7-0', usage_metadata={'input_tokens': 26, 'output_tokens': 12, 'total_tokens': 38})"
"cell_type": "markdown",
"id": "d295c2a2",
"metadata": {},
"source": [
"# ChatMistralAI\n",
"\n",
"This will help you getting started with Mistral [chat models](/docs/concepts/chat_models). For detailed documentation of all `ChatMistralAI` features and configurations head to the [API reference](https://python.langchain.com/api_reference/mistralai/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html). The `ChatMistralAI` class is built on top of the [Mistral API](https://docs.mistral.ai/api/). For a list of all the models supported by Mistral, check out [this page](https://docs.mistral.ai/getting-started/models/).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/mistral) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatMistralAI](https://python.langchain.com/api_reference/mistralai/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) | [langchain_mistralai](https://python.langchain.com/api_reference/mistralai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_mistralai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_mistralai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"\n",
"To access `ChatMistralAI` models you'll need to create a Mistral account, get an API key, and install the `langchain_mistralai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"\n",
"A valid [API key](https://console.mistral.ai/api-keys/) is needed to communicate with the API. Once you've done this set the MISTRAL_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2461605e",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"MISTRAL_API_KEY\" not in os.environ:\n",
" os.environ[\"MISTRAL_API_KEY\"] = getpass.getpass(\"Enter your Mistral API key: \")"
]
},
{
"cell_type": "markdown",
"id": "788f37ac",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": null,
"id": "007209d5",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0f5c74f9",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Mistral integration lives in the `langchain_mistralai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1ab11a65",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_mistralai"
]
},
{
"cell_type": "markdown",
"id": "fb1a335e",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e6c38580",
"metadata": {},
"outputs": [],
"source": [
"from langchain_mistralai import ChatMistralAI\n",
"\n",
"llm = ChatMistralAI(\n",
" model=\"mistral-large-latest\",\n",
" temperature=0,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "aec79099",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8838c3cc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Sure, I\\'d be happy to help you translate that sentence into French! The English sentence \"I love programming\" translates to \"J\\'aime programmer\" in French. Let me know if you have any other questions or need further assistance!', response_metadata={'token_usage': {'prompt_tokens': 32, 'total_tokens': 84, 'completion_tokens': 52}, 'model': 'mistral-small', 'finish_reason': 'stop'}, id='run-64bac156-7160-4b68-b67e-4161f63e021f-0', usage_metadata={'input_tokens': 32, 'output_tokens': 52, 'total_tokens': 84})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "bbf6a048",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sure, I'd be happy to help you translate that sentence into French! The English sentence \"I love programming\" translates to \"J'aime programmer\" in French. Let me know if you have any other questions or need further assistance!\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "32b87f87",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "24e2c51c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe Programmierung. (German translation)', response_metadata={'token_usage': {'prompt_tokens': 26, 'total_tokens': 38, 'completion_tokens': 12}, 'model': 'mistral-small', 'finish_reason': 'stop'}, id='run-dfd4094f-e347-47b0-9056-8ebd7ea35fe7-0', usage_metadata={'input_tokens': 26, 'output_tokens': 12, 'total_tokens': 38})"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "cb9b5834",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"Head to the [API reference](https://python.langchain.com/api_reference/mistralai/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) for detailed documentation of all attributes and methods."
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "cb9b5834",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"Head to the [API reference](https://python.langchain.com/api_reference/mistralai/chat_models/langchain_mistralai.chat_models.ChatMistralAI.html) for detailed documentation of all attributes and methods."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
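
As with the other providers in this changeset, the Mistral flow is instantiate, then invoke. A minimal sketch, assuming `langchain_mistralai` is installed and `MISTRAL_API_KEY` is set (the notebook prompts for it with `getpass` otherwise):

```python
import getpass
import os

from langchain_mistralai import ChatMistralAI

# Prompt for the key only if it is not already in the environment.
if "MISTRAL_API_KEY" not in os.environ:
    os.environ["MISTRAL_API_KEY"] = getpass.getpass("Enter your Mistral API key: ")

llm = ChatMistralAI(
    model="mistral-large-latest",
    temperature=0,
    max_retries=2,
)

ai_msg = llm.invoke(
    [
        ("system", "You are a helpful assistant that translates English to French. Translate the user sentence."),
        ("human", "I love programming."),
    ]
)
print(ai_msg.content)
```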

View File

@@ -1,444 +1,356 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Naver\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "c8444f1a-e907-4f07-b8b6-68fbedfb868e",
"metadata": {},
"source": [
"# ChatClovaX\n",
"\n",
"This notebook provides a quick overview for getting started with Navers HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html).\n",
"\n",
"[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio API Guide [documentation](https://api.ncloud-docs.com/docs/clovastudio-chatcompletions).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- |:-----:| :---: |:------------------------------------------------------------------------:| :---: | :---: |\n",
"| [ChatClovaX](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html) | [langchain-community](https://python.langchain.com/api_reference/community/index.html) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"|:------------------------------------------:| :---: | :---: | :---: | :---: | :---: |:-----------------------------------------------------:| :---: |:------------------------------------------------------:|:----------------------------------:|\n",
"|❌| ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"Before using the chat model, you must go through the four steps below.\n",
"\n",
"1. Creating [NAVER Cloud Platform](https://www.ncloud.com/) account \n",
"2. Apply to use [CLOVA Studio](https://www.ncloud.com/product/aiService/clovaStudio)\n",
"3. Create a CLOVA Studio Test App or Service App of a model to use (See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#테스트앱생성).)\n",
"4. Issue a Test or Service API key (See [here](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary#API%ED%82%A4).)\n",
"\n",
"### Credentials\n",
"\n",
"Set the `NCP_CLOVASTUDIO_API_KEY` environment variable with your API key.\n",
" - Note that if you are using a legacy API Key (that doesn't start with `nv-*` prefix), you might need to get an additional API Key by clicking `App Request Status` > `Service App, Test App List` > `Details button for each app` in [CLOVA Studio](https://clovastudio.ncloud.com/studio-application/service-app) and set it as `NCP_APIGW_API_KEY`.\n",
"\n",
"You can add them to your environment variables as below:\n",
"\n",
"``` bash\n",
"export NCP_CLOVASTUDIO_API_KEY=\"your-api-key-here\"\n",
"# Uncomment below to use a legacy API key\n",
"# export NCP_APIGW_API_KEY=\"your-api-key-here\"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2def81b5-b023-4f40-a97b-b2c5ca59d6a9",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"NCP_CLOVASTUDIO_API_KEY\"):\n",
" os.environ[\"NCP_CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
" \"Enter your NCP CLOVA Studio API Key: \"\n",
" )\n",
"# Uncomment below to use a legacy API key\n",
"# if not os.getenv(\"NCP_APIGW_API_KEY\"):\n",
"# os.environ[\"NCP_APIGW_API_KEY\"] = getpass.getpass(\n",
"# \"Enter your NCP API Gateway API key: \"\n",
"# )"
]
},
{
"cell_type": "markdown",
"id": "7c695442",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6151aeb6",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "17bf9053-90c5-4955-b239-55a35cb07566",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Naver integration lives in the `langchain-community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# install package\n",
"!pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatClovaX\n",
"\n",
"chat = ChatClovaX(\n",
" model=\"HCX-003\",\n",
" max_tokens=100,\n",
" temperature=0.5,\n",
" # clovastudio_api_key=\"...\" # set if you prefer to pass api key directly instead of using environment variables\n",
" # task_id=\"...\" # set if you want to use fine-tuned model\n",
" # service_app=False # set True if using Service App. Default value is False (means using Test App)\n",
" # include_ai_filters=False # set True if you want to detect inappropriate content. Default value is False\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "47752b59",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"In addition to invoke, we also support batch and stream functionalities."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
"cells": [
{
"data": {
"text/plain": [
"AIMessage(content='저는 네이버 AI를 사용하는 것이 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 14, 'seed': 1112164354, 'ai_filter': None}, id='run-b57bc356-1148-4007-837d-cc409dbd57cc-0', usage_metadata={'input_tokens': 25, 'output_tokens': 14, 'total_tokens': 39})"
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Naver\n",
"---"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to Korean. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love using NAVER AI.\"),\n",
"]\n",
"\n",
"ai_msg = chat.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "24e7377f",
"metadata": {},
"outputs": [
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"저는 네이버 AI를 사용하는 것이 좋아요.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='저는 네이버 AI를 사용하는 것이 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 14, 'seed': 2575184681, 'ai_filter': None}, id='run-7014b330-eba3-4701-bb62-df73ce39b854-0', usage_metadata={'input_tokens': 25, 'output_tokens': 14, 'total_tokens': 39})"
"cell_type": "markdown",
"id": "c8444f1a-e907-4f07-b8b6-68fbedfb868e",
"metadata": {},
"source": [
"# ChatClovaX\n",
"\n",
"This notebook provides a quick overview for getting started with Navers HyperCLOVA X [chat models](https://python.langchain.com/docs/concepts/chat_models) via CLOVA Studio. For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain).\n",
"\n",
"[CLOVA Studio](http://clovastudio.ncloud.com/) has several chat models. You can find information about latest models and their costs, context windows, and supported input types in the CLOVA Studio Guide [documentation](https://guide.ncloud-docs.com/docs/clovastudio-model).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- |:-----:| :---: |:------------------------------------------------------------------------:| :---: | :---: |\n",
"| [ChatClovaX](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain#HyperCLOVAX%EB%AA%A8%EB%8D%B8%EC%9D%B4%EC%9A%A9) | [langchain-naver](https://pypi.org/project/langchain-naver/) | ❌ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_naver?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_naver?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"|:------------------------------------------:| :---: | :---: | :---: | :---: | :---: |:-----------------------------------------------------:| :---: |:------------------------------------------------------:|:----------------------------------:|\n",
"|✅| ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"Before using the chat model, you must go through the four steps below.\n",
"\n",
"1. Creating [NAVER Cloud Platform](https://www.ncloud.com/) account\n",
"2. Apply to use [CLOVA Studio](https://www.ncloud.com/product/aiService/clovaStudio)\n",
"3. Create a CLOVA Studio Test App or Service App of a model to use (See [here](https://guide.ncloud-docs.com/docs/clovastudio-playground-testapp).)\n",
"4. Issue a Test or Service API key (See [here](https://api.ncloud-docs.com/docs/ai-naver-clovastudio-summary#API%ED%82%A4).)\n",
"\n",
"### Credentials\n",
"\n",
"Set the `CLOVASTUDIO_API_KEY` environment variable with your API key.\n",
"\n",
"You can add them to your environment variables as below:\n",
"\n",
"``` bash\n",
"export CLOVASTUDIO_API_KEY=\"your-api-key-here\"\n",
"```"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"Korean\",\n",
" \"input\": \"I love using NAVER AI.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "66e69286",
"metadata": {},
"source": [
"## Streaming"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2c07af21-dda5-4514-b4de-1f214c2cebcd",
"metadata": {},
"outputs": [
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Certainly! In Korean, \"Hi\" is pronounced as \"안녕\" (annyeong). The first syllable, \"안,\" sounds like the \"ahh\" sound in \"apple,\" while the second syllable, \"녕,\" sounds like the \"yuh\" sound in \"you.\" So when you put them together, it's like saying \"ahhyuh-nyuhng.\" Remember to pronounce each syllable clearly and separately for accurate pronunciation."
]
}
],
"source": [
"system = \"You are a helpful assistant that can teach Korean pronunciation.\"\n",
"human = \"Could you let me know how to say '{phrase}' in Korean?\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"\n",
"for chunk in chain.stream({\"phrase\": \"Hi\"}):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Additional functionalities\n",
"\n",
"### Using fine-tuned models\n",
"\n",
"You can call fine-tuned models by passing in your corresponding `task_id` parameter. (You dont need to specify the `model_name` parameter when calling fine-tuned model.)\n",
"\n",
"You can check `task_id` from corresponding Test App or Service App details."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "cb436788",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='저는 네이버 AI를 사용하는 것이 너무 좋아요.', additional_kwargs={}, response_metadata={'stop_reason': 'stop_before', 'input_length': 25, 'output_length': 15, 'seed': 52559061, 'ai_filter': None}, id='run-5bea8d4a-48f3-4c34-ae70-66e60dca5344-0', usage_metadata={'input_tokens': 25, 'output_tokens': 15, 'total_tokens': 40})"
"cell_type": "code",
"execution_count": 2,
"id": "2def81b5-b023-4f40-a97b-b2c5ca59d6a9",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"CLOVASTUDIO_API_KEY\"):\n",
" os.environ[\"CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
" \"Enter your CLOVA Studio API Key: \"\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "7c695442",
"metadata": {},
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6151aeb6",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "17bf9053-90c5-4955-b239-55a35cb07566",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Naver integration lives in the `langchain-naver` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# install package\n",
"%pip install -qU langchain-naver"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_naver import ChatClovaX\n",
"\n",
"chat = ChatClovaX(\n",
" model=\"HCX-005\",\n",
" temperature=0.5,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "47752b59",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"In addition to invoke, `ChatClovaX` also support batch and stream functionalities."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='네이버 인공지능을 사용하는 것을 정말 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 28, 'total_tokens': 39, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': 'b70c26671cd247a0864115bacfb5fc12', 'finish_reason': 'stop', 'logprobs': None}, id='run-3faf6a8d-d5da-49ad-9fbb-7b56ed23b484-0', usage_metadata={'input_tokens': 28, 'output_tokens': 11, 'total_tokens': 39, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to Korean. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love using NAVER AI.\"),\n",
"]\n",
"\n",
"ai_msg = chat.invoke(messages)\n",
"ai_msg"
]
},
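{
"cell_type": "markdown",
"id": "batch-usage-sketch",
"metadata": {},
"source": [
"As a minimal sketch of `batch` (reusing the `messages` defined above), each input in the list is processed as an independent prompt:\n",
"\n",
"```python\n",
"# Sketch: run two prompts in a single batch call\n",
"results = chat.batch([messages, messages])\n",
"print(results[0].content)\n",
"```"
]
},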
{
"cell_type": "code",
"execution_count": 5,
"id": "24e7377f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"네이버 인공지능을 사용하는 것을 정말 좋아합니다.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='저는 네이버 인공지능을 사용하는 것을 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 28, 'total_tokens': 38, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': 'b7a826d17fcf4fee8386fca2ebc63284', 'finish_reason': 'stop', 'logprobs': None}, id='run-35957816-3325-4d9c-9441-e40704912be6-0', usage_metadata={'input_tokens': 28, 'output_tokens': 10, 'total_tokens': 38, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"Korean\",\n",
" \"input\": \"I love using NAVER AI.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "66e69286",
"metadata": {},
"source": [
"## Streaming"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "2c07af21-dda5-4514-b4de-1f214c2cebcd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"In Korean, the informal way of saying 'hi' is \"안녕\" (annyeong). If you're addressing someone older or showing more respect, you would use \"안녕하세요\" (annjeonghaseyo). Both phrases are used as greetings similar to 'hello'. Remember, pronunciation is key so make sure to pronounce each syllable clearly: 안-녀-엉 (an-nyeo-eong) and 안-녕-하-세-요 (an-nyeong-ha-se-yo)."
]
}
],
"source": [
"system = \"You are a helpful assistant that can teach Korean pronunciation.\"\n",
"human = \"Could you let me know how to say '{phrase}' in Korean?\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"\n",
"for chunk in chain.stream({\"phrase\": \"Hi\"}):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
"metadata": {},
"source": [
"## Additional functionalities\n",
"\n",
"### Using fine-tuned models\n",
"\n",
"You can call fine-tuned models by passing the `task_id` to the `model` parameter as: `ft:{task_id}`.\n",
"\n",
"You can check `task_id` from corresponding Test App or Service App details."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cb436788",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='네이버 인공지능을 사용하는 것을 정말 좋아합니다.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 28, 'total_tokens': 39, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'HCX-005', 'system_fingerprint': None, 'id': '2222d6d411a948c883aac1e03ca6cebe', 'finish_reason': 'stop', 'logprobs': None}, id='run-9696d7e2-7afa-4bb4-9c03-b95fcf678ab8-0', usage_metadata={'input_tokens': 28, 'output_tokens': 11, 'total_tokens': 39, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"fine_tuned_model = ChatClovaX(\n",
" model=\"ft:a1b2c3d4\", # set as `ft:{task_id}` with your fine-tuned model's task id\n",
" # other params...\n",
")\n",
"\n",
"fine_tuned_model.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatClovaX features and configurations head to the [API reference](https://guide.ncloud-docs.com/docs/clovastudio-dev-langchain)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"fine_tuned_model = ChatClovaX(\n",
" task_id=\"5s8egt3a\", # set if you want to use fine-tuned model\n",
" # other params...\n",
")\n",
"\n",
"fine_tuned_model.invoke(messages)"
]
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.8"
}
},
{
"cell_type": "markdown",
"id": "f428deaf",
"metadata": {},
"source": [
"### Service App\n",
"\n",
"When going live with production-level application using CLOVA Studio, you should apply for and use Service App. (See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#서비스앱신청).)\n",
"\n",
"For a Service App, you should use a corresponding Service API key and can only be called with it."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dcf566df",
"metadata": {},
"outputs": [],
"source": [
"# Update environment variables\n",
"\n",
"os.environ[\"NCP_CLOVASTUDIO_API_KEY\"] = getpass.getpass(\n",
" \"Enter NCP CLOVA Studio Service API Key: \"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "cebe27ae",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatClovaX(\n",
" service_app=True, # True if you want to use your service app, default value is False.\n",
" # clovastudio_api_key=\"...\" # if you prefer to pass api key in directly instead of using env vars\n",
" model=\"HCX-003\",\n",
" # other params...\n",
")\n",
"ai_msg = chat.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "d73e7140",
"metadata": {},
"source": [
"### AI Filter\n",
"\n",
"AI Filter detects inappropriate output such as profanity from the test app (or service app included) created in Playground and informs the user. See [here](https://guide.ncloud-docs.com/docs/en/clovastudio-playground01#AIFilter) for details. "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "32bfbc93",
"metadata": {},
"outputs": [],
"source": [
"chat = ChatClovaX(\n",
" model=\"HCX-003\",\n",
" include_ai_filters=True, # True if you want to enable ai filter\n",
" # other params...\n",
")\n",
"\n",
"ai_msg = chat.invoke(messages)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7bd9e179",
"metadata": {},
"outputs": [],
"source": [
"print(ai_msg.response_metadata[\"ai_filter\"])"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatNaver features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.naver.ChatClovaX.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.3"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,326 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Netmind\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatNetmind\n",
"\n",
"This will help you getting started with Netmind [chat models](https://www.netmind.ai/). For detailed documentation of all ChatNetmind features and configurations head to the [API reference](https://github.com/protagolabs/langchain-netmind).\n",
"\n",
"- See https://www.netmind.ai/ for an example.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/) | Package downloads | Package latest |\n",
"|:---------------------------------------------------------------------------------------------| :--- |:-----:|:------------:|:--------------------------------------------------------------:| :---: | :---: |\n",
"| [ChatNetmind](https://python.langchain.com/api_reference/) | [langchain-netmind](https://python.langchain.com/api_reference/) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-netmind?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-netmind?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"|:-----------------------------------------------:|:---------------------------------------------------------:|:---------:|:---------------------------------------------------:|:-----------:|:-----------:|:----------------------------------------------------------:|:------------:|:-----------------------------------------------------------:|:---------------------------------------:|\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"To access Netmind models you'll need to create a/an Netmind account, get an API key, and install the `langchain-netmind` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to https://www.netmind.ai/ to sign up to Netmind and generate an API key. Once you've done this set the NETMIND_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-20T02:00:30.732333Z",
"start_time": "2025-03-20T02:00:28.384208Z"
}
},
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"NETMIND_API_KEY\"):\n",
" os.environ[\"NETMIND_API_KEY\"] = getpass.getpass(\"Enter your Netmind API key: \")"
],
"outputs": [],
"execution_count": 1
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-20T02:00:33.421446Z",
"start_time": "2025-03-20T02:00:33.419081Z"
}
},
"source": [
"# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
],
"outputs": [],
"execution_count": 2
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Netmind integration lives in the `langchain-netmind` package:"
]
},
{
"cell_type": "code",
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-20T02:00:35.923300Z",
"start_time": "2025-03-20T02:00:34.505928Z"
}
},
"source": [
"%pip install -qU langchain-netmind"
],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\r\n",
"\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m A new release of pip is available: \u001B[0m\u001B[31;49m24.0\u001B[0m\u001B[39;49m -> \u001B[0m\u001B[32;49m25.0.1\u001B[0m\r\n",
"\u001B[1m[\u001B[0m\u001B[34;49mnotice\u001B[0m\u001B[1;39;49m]\u001B[0m\u001B[39;49m To update, run: \u001B[0m\u001B[32;49mpip install --upgrade pip\u001B[0m\r\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"execution_count": 3
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n"
]
},
{
"cell_type": "code",
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-20T02:01:08.007764Z",
"start_time": "2025-03-20T02:01:07.391951Z"
}
},
"source": [
"from langchain_netmind import ChatNetmind\n",
"\n",
"llm = ChatNetmind(\n",
" model=\"deepseek-ai/DeepSeek-V3\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
],
"outputs": [],
"execution_count": 4
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": "## Invocation\n"
},
{
"cell_type": "code",
"id": "62e0dbc3",
"metadata": {
"tags": [],
"ExecuteTime": {
"end_time": "2025-03-20T02:01:19.011273Z",
"start_time": "2025-03-20T02:01:10.295510Z"
}
},
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
],
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 31, 'total_tokens': 44, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'deepseek-ai/DeepSeek-V3', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-ca6c2010-844d-4bf6-baac-6e248491b000-0', usage_metadata={'input_tokens': 31, 'output_tokens': 13, 'total_tokens': 44, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 5
},
{
"cell_type": "code",
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-20T02:01:20.240190Z",
"start_time": "2025-03-20T02:01:20.238242Z"
}
},
"source": [
"print(ai_msg.content)"
],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer.\n"
]
}
],
"execution_count": 6
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:\n"
]
},
{
"cell_type": "code",
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {
"ExecuteTime": {
"end_time": "2025-03-20T02:01:27.456393Z",
"start_time": "2025-03-20T02:01:23.993410Z"
}
},
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
],
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe es zu programmieren.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 14, 'prompt_tokens': 26, 'total_tokens': 40, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'deepseek-ai/DeepSeek-V3', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-d63adcc6-53ba-4caa-9a79-78d640b39274-0', usage_metadata={'input_tokens': 26, 'output_tokens': 14, 'total_tokens': 40, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 7
},
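{
"cell_type": "markdown",
"id": "structured-output-sketch",
"metadata": {},
"source": [
"Since the features table above lists structured output support, you can also bind a schema with `with_structured_output`. A minimal sketch, assuming a simple Pydantic model:\n",
"\n",
"```python\n",
"from pydantic import BaseModel\n",
"\n",
"\n",
"class Translation(BaseModel):\n",
"    \"\"\"A translated sentence.\"\"\"\n",
"\n",
"    text: str\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Translation)\n",
"structured_llm.invoke(\"Translate 'I love programming.' to German.\")\n",
"```"
]
},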
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatNetmind features and configurations head to the API reference: \n",
"* [API reference](https://python.langchain.com/api_reference/) \n",
"* [langchain-netmind](https://github.com/protagolabs/langchain-netmind) \n",
"* [pypi](https://pypi.org/project/langchain-netmind/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

File diff suppressed because it is too large

View File

@@ -19,7 +19,7 @@
"\n",
"[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.\n",
"\n",
"Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. \n",
"Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.\n",
"\n",
"It optimizes setup and configuration details, including GPU usage.\n",
"\n",
@@ -48,7 +48,7 @@
"* This will download the default tagged version of the model. Typically, the default points to the latest, smallest sized-parameter model.\n",
"\n",
"> On Mac, the models will be download to `~/.ollama/models`\n",
"> \n",
">\n",
"> On Linux (or WSL), the models will be stored at `/usr/share/ollama/.ollama/models`\n",
"\n",
"* Specify the exact version of the model of interest as such `ollama pull vicuna:13b-v1.5-16k-q4_0` (View the [various tags for the `Vicuna`](https://ollama.ai/library/vicuna/tags) model in this instance)\n",
@@ -62,7 +62,7 @@
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
@@ -97,18 +97,22 @@
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": "Make sure you're using the latest Ollama version for structured outputs. Update by running:",
"id": "b18bd692076f7cf7"
"id": "b18bd692076f7cf7",
"metadata": {},
"source": [
"Make sure you're using the latest Ollama version for structured outputs. Update by running:"
]
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": "%pip install -U ollama",
"id": "b7a05cba95644c2e"
"id": "b7a05cba95644c2e",
"metadata": {},
"outputs": [],
"source": [
"%pip install -U ollama"
]
},
{
"cell_type": "markdown",
@@ -117,9 +121,7 @@
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n",
"\n",
"- TODO: Update model instantiation with relevant params."
"Now we can instantiate our model object and generate chat completions:\n"
]
},
{
@@ -256,7 +258,7 @@
"source": [
"## Tool calling\n",
"\n",
"We can use [tool calling](https://blog.langchain.dev/improving-core-tool-interfaces-and-docs-in-langchain/) with an LLM [that has been fine-tuned for tool use](https://ollama.com/library/llama3.1): \n",
"We can use [tool calling](https://blog.langchain.dev/improving-core-tool-interfaces-and-docs-in-langchain/) with an LLM [that has been fine-tuned for tool use](https://ollama.com/library/llama3.1):\n",
"\n",
"```\n",
"ollama pull llama3.1\n",
@@ -442,6 +444,63 @@
"print(query_chain)"
]
},
{
"cell_type": "markdown",
"id": "fb6a331f-1507-411f-89e5-c4d598154f3c",
"metadata": {},
"source": [
"## Reasoning models and custom message roles\n",
"\n",
"Some models, such as IBM's [Granite 3.2](https://ollama.com/library/granite3.2), support custom message roles to enable thinking processes.\n",
"\n",
"To access Granite 3.2's thinking features, pass a message with a `\"control\"` role with content set to `\"thinking\"`. Because `\"control\"` is a non-standard message role, we can use a [ChatMessage](https://python.langchain.com/api_reference/core/messages/langchain_core.messages.chat.ChatMessage.html) object to implement it:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d7309fa7-990e-4c20-b1f0-b155624ecf37",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Here is my thought process:\n",
"This question is asking for the result of 3 raised to the power of 3, which is a basic mathematical operation. \n",
"\n",
"Here is my response:\n",
"The expression 3^3 means 3 raised to the power of 3. To calculate this, you multiply the base number (3) by itself as many times as its exponent (3):\n",
"\n",
"3 * 3 * 3 = 27\n",
"\n",
"So, 3^3 equals 27.\n"
]
}
],
"source": [
"from langchain_core.messages import ChatMessage, HumanMessage\n",
"from langchain_ollama import ChatOllama\n",
"\n",
"llm = ChatOllama(model=\"granite3.2:8b\")\n",
"\n",
"messages = [\n",
" ChatMessage(role=\"control\", content=\"thinking\"),\n",
" HumanMessage(\"What is 3^3?\"),\n",
"]\n",
"\n",
"response = llm.invoke(messages)\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "6271d032-da40-44d4-9b52-58370e164be3",
"metadata": {},
"source": [
"Note that the model exposes its thought process in addition to its final response."
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
@@ -469,7 +528,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
"version": "3.10.4"
}
},
"nbformat": 4,

View File

@@ -408,7 +408,7 @@
"\n",
":::\n",
"\n",
"OpenAI supports a [Responses](https://platform.openai.com/docs/guides/responses-vs-chat-completions) API that is oriented toward building [agentic](/docs/concepts/agents/) applications. It includes a suite of [built-in tools](https://platform.openai.com/docs/guides/tools?api-mode=responses), including web and file search. It also supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses), allowing you to continue a conversational thread without explicitly passing in previous messages.\n",
"OpenAI supports a [Responses](https://platform.openai.com/docs/guides/responses-vs-chat-completions) API that is oriented toward building [agentic](/docs/concepts/agents/) applications. It includes a suite of [built-in tools](https://platform.openai.com/docs/guides/tools?api-mode=responses), including web and file search. It also supports management of [conversation state](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses), allowing you to continue a conversational thread without explicitly passing in previous messages, as well as the output from [reasoning processes](https://platform.openai.com/docs/guides/reasoning?api-mode=responses).\n",
"\n",
"`ChatOpenAI` will route to the Responses API if one of these features is used. You can also specify `use_responses_api=True` when instantiating `ChatOpenAI`.\n",
"\n",
@@ -1056,6 +1056,77 @@
"print(second_response.text())"
]
},
{
"cell_type": "markdown",
"id": "67bf5bd2-0935-40a0-b1cd-c6662b681d4b",
"metadata": {},
"source": [
"### Reasoning output\n",
"\n",
"Some OpenAI models will generate separate text content illustrating their reasoning process. See OpenAI's [reasoning documentation](https://platform.openai.com/docs/guides/reasoning?api-mode=responses) for details.\n",
"\n",
"OpenAI can return a summary of the model's reasoning (although it doesn't expose the raw reasoning tokens). To configure `ChatOpenAI` to return this summary, specify the `reasoning` parameter:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8d322f3a-0732-45ab-ac95-dfd4596e0d85",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'3^3 = 3 × 3 × 3 = 27.'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"reasoning = {\n",
" \"effort\": \"medium\", # 'low', 'medium', or 'high'\n",
" \"summary\": \"auto\", # 'detailed', 'auto', or None\n",
"}\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"o4-mini\",\n",
" use_responses_api=True,\n",
" model_kwargs={\"reasoning\": reasoning},\n",
")\n",
"response = llm.invoke(\"What is 3^3?\")\n",
"\n",
"# Output\n",
"response.text()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d7dcc082-b7c8-41b7-a5e2-441b9679e41b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"**Calculating power of three**\n",
"\n",
"The user is asking for the result of 3 to the power of 3, which I know is 27. It's a straightforward question, so Ill keep my answer concise: 27. I could explain that this is the same as multiplying 3 by itself twice: 3 × 3 × 3 equals 27. However, since the user likely just needs the answer, Ill simply respond with 27.\n"
]
}
],
"source": [
"# Reasoning\n",
"reasoning = response.additional_kwargs[\"reasoning\"]\n",
"for block in reasoning[\"summary\"]:\n",
" print(block[\"text\"])"
]
},
{
"cell_type": "markdown",
"id": "57e27714",

View File

@@ -17,12 +17,66 @@
"source": [
"# ChatPerplexity\n",
"\n",
"This notebook covers how to get started with `Perplexity` chat models."
"\n",
"This page will help you get started with Perplexity [chat models](../../concepts/chat_models.mdx). For detailed documentation of all `ChatPerplexity` features and configurations head to the [API reference](https://python.langchain.com/api_reference/perplexity/chat_models/langchain_perplexity.chat_models.ChatPerplexity.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/xai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatPerplexity](https://python.langchain.com/api_reference/perplexity/chat_models/langchain_perplexity.chat_models.ChatPerplexity.html) | [langchain-perplexity](https://python.langchain.com/api_reference/perplexity/perplexity.html) | ❌ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-perplexity?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-perplexity?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access Perplexity models you'll need to create a Perplexity account, get an API key, and install the `langchain-perplexity` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [this page](https://www.perplexity.ai/) to sign up for Perplexity and generate an API key. Once you've done this set the `PPLX_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"id": "2243f329",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"PPLX_API_KEY\" not in os.environ:\n",
" os.environ[\"PPLX_API_KEY\"] = getpass.getpass(\"Enter your Perplexity API key: \")"
]
},
{
"cell_type": "markdown",
"id": "7dfe47c4",
"metadata": {},
"source": [
"To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "10a791fa",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4a7c55d-b235-4ca4-a579-c90cc9570da9",
"metadata": {
"ExecuteTime": {
@@ -33,8 +87,8 @@
},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatPerplexity\n",
"from langchain_core.prompts import ChatPromptTemplate"
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_perplexity import ChatPerplexity"
]
},
{
@@ -62,29 +116,9 @@
"id": "97a8ce3a",
"metadata": {},
"source": [
"The code provided assumes that your PPLX_API_KEY is set in your environment variables. If you would like to manually specify your API key and also choose a different model, you can use the following code:\n",
"\n",
"```python\n",
"chat = ChatPerplexity(temperature=0, pplx_api_key=\"YOUR_API_KEY\", model=\"llama-3.1-sonar-small-128k-online\")\n",
"```\n",
"\n",
"You can check a list of available models [here](https://docs.perplexity.ai/docs/model-cards). For reproducibility, we can set the API key dynamically by taking it as an input in this notebook."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d3e49d78",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"PPLX_API_KEY = getpass()\n",
"os.environ[\"PPLX_API_KEY\"] = PPLX_API_KEY"
]
},
{
"cell_type": "code",
"execution_count": 3,
@@ -305,7 +339,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": ".venv",
"language": "python",
"name": "python3"
},
@@ -319,7 +353,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.11"
}
},
"nbformat": 4,

View File

@@ -57,8 +57,8 @@
{
"metadata": {
"ExecuteTime": {
"end_time": "2024-11-08T19:44:51.390231Z",
"start_time": "2024-11-08T19:44:51.387945Z"
"end_time": "2025-04-21T18:23:30.746350Z",
"start_time": "2025-04-21T18:23:30.744744Z"
}
},
"cell_type": "code",
@@ -70,7 +70,7 @@
],
"id": "fa57fba89276da13",
"outputs": [],
"execution_count": 1
"execution_count": 2
},
{
"metadata": {},
@@ -82,12 +82,25 @@
"id": "87dc1742af7b053"
},
{
"metadata": {},
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T18:23:33.359278Z",
"start_time": "2025-04-21T18:23:32.853207Z"
}
},
"cell_type": "code",
"source": "%pip install -qU langchain-predictionguard",
"id": "b816ae8553cba021",
"outputs": [],
"execution_count": null
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"execution_count": 3
},
{
"cell_type": "markdown",
@@ -103,13 +116,13 @@
"metadata": {
"id": "2xe8JEUwA7_y",
"ExecuteTime": {
"end_time": "2024-11-08T19:44:53.950653Z",
"start_time": "2024-11-08T19:44:53.488694Z"
"end_time": "2025-04-21T18:23:39.812675Z",
"start_time": "2025-04-21T18:23:39.666881Z"
}
},
"source": "from langchain_predictionguard import ChatPredictionGuard",
"outputs": [],
"execution_count": 2
"execution_count": 4
},
{
"cell_type": "code",
@@ -117,8 +130,8 @@
"metadata": {
"id": "Ua7Mw1N4HcER",
"ExecuteTime": {
"end_time": "2024-11-08T19:44:54.890695Z",
"start_time": "2024-11-08T19:44:54.502846Z"
"end_time": "2025-04-21T18:23:41.590296Z",
"start_time": "2025-04-21T18:23:41.253237Z"
}
},
"source": [
@@ -126,7 +139,7 @@
"chat = ChatPredictionGuard(model=\"Hermes-3-Llama-3.1-8B\")"
],
"outputs": [],
"execution_count": 3
"execution_count": 5
},
{
"metadata": {},
@@ -221,6 +234,132 @@
],
"execution_count": 6
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"## Tool Calling\n",
"\n",
"Prediction Guard has a tool calling API that lets you describe tools and their arguments, which enables the model return a JSON object with a tool to call and the inputs to that tool. Tool-calling is very useful for building tool-using chains and agents, and for getting structured outputs from models more generally.\n"
],
"id": "1227780d6e6728ba"
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### ChatPredictionGuard.bind_tools()\n",
"\n",
"Using `ChatPredictionGuard.bind_tools()`, you can pass in Pydantic classes, dict schemas, and Langchain tools as tools to the model, which are then reformatted to allow for use by the model."
],
"id": "23446aa52e01d1ba"
},
{
"metadata": {},
"cell_type": "code",
"outputs": [],
"execution_count": null,
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"class GetPopulation(BaseModel):\n",
" \"\"\"Get the current population in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm_with_tools = chat.bind_tools(\n",
" [GetWeather, GetPopulation]\n",
" # strict = True # enforce tool args schema is respected\n",
")"
],
"id": "135efb0bfc5916c1"
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T18:42:41.834079Z",
"start_time": "2025-04-21T18:42:40.289095Z"
}
},
"cell_type": "code",
"source": [
"ai_msg = llm_with_tools.invoke(\n",
" \"Which city is hotter today and which is bigger: LA or NY?\"\n",
")\n",
"ai_msg"
],
"id": "8136f19a8836cd58",
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'chatcmpl-tool-b1204a3c70b44cd8802579df48df0c8c', 'type': 'function', 'index': 0, 'function': {'name': 'GetWeather', 'arguments': '{\"location\": \"Los Angeles, CA\"}'}}, {'id': 'chatcmpl-tool-e299116c05bf4ce498cd6042928ae080', 'type': 'function', 'index': 0, 'function': {'name': 'GetWeather', 'arguments': '{\"location\": \"New York, NY\"}'}}, {'id': 'chatcmpl-tool-19502a60f30348669ffbac00ff503388', 'type': 'function', 'index': 0, 'function': {'name': 'GetPopulation', 'arguments': '{\"location\": \"Los Angeles, CA\"}'}}, {'id': 'chatcmpl-tool-4b8d56ef067f447795d9146a56e43510', 'type': 'function', 'index': 0, 'function': {'name': 'GetPopulation', 'arguments': '{\"location\": \"New York, NY\"}'}}]}, response_metadata={}, id='run-4630cfa9-4e95-42dd-8e4a-45db78180a10-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'Los Angeles, CA'}, 'id': 'chatcmpl-tool-b1204a3c70b44cd8802579df48df0c8c', 'type': 'tool_call'}, {'name': 'GetWeather', 'args': {'location': 'New York, NY'}, 'id': 'chatcmpl-tool-e299116c05bf4ce498cd6042928ae080', 'type': 'tool_call'}, {'name': 'GetPopulation', 'args': {'location': 'Los Angeles, CA'}, 'id': 'chatcmpl-tool-19502a60f30348669ffbac00ff503388', 'type': 'tool_call'}, {'name': 'GetPopulation', 'args': {'location': 'New York, NY'}, 'id': 'chatcmpl-tool-4b8d56ef067f447795d9146a56e43510', 'type': 'tool_call'}])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 7
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### AIMessage.tool_calls\n",
"\n",
"Notice that the AIMessage has a tool_calls attribute. This contains in a standardized ToolCall format that is model-provider agnostic."
],
"id": "84f405c45a35abe5"
},
{
"metadata": {
"ExecuteTime": {
"end_time": "2025-04-21T18:43:00.429453Z",
"start_time": "2025-04-21T18:43:00.426399Z"
}
},
"cell_type": "code",
"source": "ai_msg.tool_calls",
"id": "bdcee85475019719",
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'chatcmpl-tool-b1204a3c70b44cd8802579df48df0c8c',\n",
" 'type': 'tool_call'},\n",
" {'name': 'GetWeather',\n",
" 'args': {'location': 'New York, NY'},\n",
" 'id': 'chatcmpl-tool-e299116c05bf4ce498cd6042928ae080',\n",
" 'type': 'tool_call'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'Los Angeles, CA'},\n",
" 'id': 'chatcmpl-tool-19502a60f30348669ffbac00ff503388',\n",
" 'type': 'tool_call'},\n",
" {'name': 'GetPopulation',\n",
" 'args': {'location': 'New York, NY'},\n",
" 'id': 'chatcmpl-tool-4b8d56ef067f447795d9146a56e43510',\n",
" 'type': 'tool_call'}]"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"execution_count": 8
},
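{
"metadata": {},
"cell_type": "markdown",
"source": [
"To complete a tool-calling loop, execute the selected tool yourself and pass the result back as a `ToolMessage`. A minimal sketch with a hard-coded value standing in for a real weather lookup (each tool call would need its own `ToolMessage`):\n",
"\n",
"```python\n",
"from langchain_core.messages import HumanMessage, ToolMessage\n",
"\n",
"tool_call = ai_msg.tool_calls[0]\n",
"tool_result = \"72F and sunny\"  # hypothetical result; call a real weather service here\n",
"\n",
"history = [\n",
"    HumanMessage(\"Which city is hotter today and which is bigger: LA or NY?\"),\n",
"    ai_msg,\n",
"    ToolMessage(content=tool_result, tool_call_id=tool_call[\"id\"]),\n",
"]\n",
"# history can now be passed back to llm_with_tools.invoke(...) for a final answer\n",
"```"
],
"id": "tool-result-loop-sketch"
},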
{
"cell_type": "markdown",
"id": "ff1b51a8",

View File

@@ -0,0 +1,284 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Qwen QwQ\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatQwQ\n",
"\n",
"This will help you getting started with QwQ [chat models](../../concepts/chat_models.mdx). For detailed documentation of all ChatQwQ features and configurations head to the [API reference](https://pypi.org/project/langchain-qwq/).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [ChatQwQ](https://pypi.org/project/langchain-qwq/) | [langchain-qwq](https://pypi.org/project/langchain-qwq/) | ❌ | beta | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-qwq?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-qwq?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ |❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access QwQ models you'll need to create an Alibaba Cloud account, get an API key, and install the `langchain-qwq` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [Alibaba's API Key page](https://account.alibabacloud.com/login/login.htm?oauth_callback=https%3A%2F%2Fbailian.console.alibabacloud.com%2F%3FapiKey%3D1&lang=en#/api-key) to sign up to Alibaba Cloud and generate an API key. Once you've done this set the `DASHSCOPE_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if not os.getenv(\"DASHSCOPE_API_KEY\"):\n",
" os.environ[\"DASHSCOPE_API_KEY\"] = getpass.getpass(\"Enter your Dashscope API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain QwQ integration lives in the `langchain-qwq` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-qwq"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_qwq import ChatQwQ\n",
"\n",
"llm = ChatQwQ(\n",
" model=\"qwq-plus\",\n",
" max_tokens=3_000,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime la programmation.\", additional_kwargs={'reasoning_content': 'Okay, the user wants me to translate \"I love programming.\" into French. Let\\'s start by breaking down the sentence. The subject is \"I\", which in French is \"Je\". The verb is \"love\", which in this context is present tense, so \"aime\". The object is \"programming\". Now, \"programming\" in French can be \"la programmation\". \\n\\nWait, should it be \"programmation\" or \"programmation\"? Let me confirm the spelling. Yes, \"programmation\" is correct. Now, putting it all together: \"Je aime la programmation.\" Hmm, but in French, there\\'s a tendency to contract \"je\" and \"aime\". Wait, actually, \"je\" followed by a vowel sound usually takes \"j\\'\". So it should be \"J\\'aime la programmation.\" \\n\\nLet me double-check. \"J\\'aime\" is the correct contraction for \"I love\". The definite article \"la\" is needed because \"programmation\" is a feminine noun. Yes, \"programmation\" is a feminine noun, so \"la\" is correct. \\n\\nIs there any other way to say it? Maybe \"J\\'adore la programmation\" for \"I love\" in a stronger sense, but the user didn\\'t specify the intensity. Since the original is straightforward, \"J\\'aime la programmation.\" is the direct translation. \\n\\nI think that\\'s it. No mistakes there. So the final translation should be \"J\\'aime la programmation.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run-5045cd6a-edbd-4b2f-bf24-b7bdf3777fb9-0', usage_metadata={'input_tokens': 32, 'output_tokens': 326, 'total_tokens': 358, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French.\"\n",
" \"Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
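{
"cell_type": "markdown",
"id": "5f1d8c3b",
"metadata": {},
"source": [
"As the output above shows, the model's reasoning trace is returned alongside the final answer in `additional_kwargs`. A small sketch of pulling both out of the message:\n",
"\n",
"```python\n",
"# The final answer and the reasoning trace ride along on the same message.\n",
"print(ai_msg.content)  # \"J'aime la programmation.\"\n",
"print(ai_msg.additional_kwargs[\"reasoning_content\"][:100])  # first 100 characters\n",
"```"
]
},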
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={'reasoning_content': 'Okay, the user wants me to translate \"I love programming.\" into German. Let me think. The verb \"love\" is \"lieben\" or \"mögen\" in German, but \"lieben\" is more like love, while \"mögen\" is prefer. Since it\\'s about programming, which is a strong affection, \"lieben\" is better. The subject is \"I\", which is \"ich\". Then \"programming\" is \"Programmierung\" or \"Coding\". But \"Programmierung\" is more formal. Alternatively, sometimes people say \"ich liebe es zu programmieren\" which is \"I love to program\". Hmm, maybe the direct translation would be \"Ich liebe die Programmierung.\" But maybe the more natural way is \"Ich liebe es zu programmieren.\" Let me check. Both are correct, but the second one might sound more natural in everyday speech. The user might prefer the concise version. Alternatively, maybe \"Ich liebe die Programmierung.\" is better. Wait, the original is \"programming\" as a noun. So using the noun form would be appropriate. So \"Ich liebe die Programmierung.\" But sometimes people also use \"Coding\" in German, like \"Ich liebe das Coding.\" But that\\'s more anglicism. Probably better to stick with \"Programmierung\". Alternatively, \"Programmieren\" as a noun. Oh right! \"Programmieren\" can be a noun when used in the accusative case. So \"Ich liebe das Programmieren.\" That\\'s correct and natural. Yes, that\\'s the best translation. So the answer is \"Ich liebe das Programmieren.\"'}, response_metadata={'model_name': 'qwq-plus'}, id='run-2c418451-51d8-4319-8269-2ce129363a1a-0', usage_metadata={'input_tokens': 28, 'output_tokens': 341, 'total_tokens': 369, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates\"\n",
" \"{input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "8d1b3ef3",
"metadata": {},
"source": [
"## Tool Calling\n",
"ChatQwQ supports tool calling API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool."
]
},
{
"cell_type": "markdown",
"id": "6db1a355",
"metadata": {},
"source": [
"### Use with `bind_tools`"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "15fb6a6d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='' additional_kwargs={'reasoning_content': 'Okay, the user is asking \"What\\'s 5 times forty two\". Let me break this down. First, I need to identify the numbers involved. The first number is 5, which is straightforward. The second number is forty two, which is 42 in digits. The operation they want is multiplication.\\n\\nLooking at the tools provided, there\\'s a function called multiply that takes two integers. So I should use that. The parameters are first_int and second_int. \\n\\nI need to convert \"forty two\" to 42. Since the function requires integers, both numbers should be in integer form. So 5 and 42. \\n\\nNow, I\\'ll structure the tool call. The function name is multiply, and the arguments should be first_int: 5 and second_int: 42. I\\'ll make sure the JSON is correctly formatted without any syntax errors. Let me double-check the parameters to ensure they\\'re required and of the right type. Yep, both are required and integers. \\n\\nNo examples were provided, but the function\\'s purpose is clear. So the correct tool call should be to multiply those two numbers. I think that\\'s all. No other functions are needed here.'} response_metadata={'model_name': 'qwq-plus'} id='run-638895aa-fdde-4567-bcfa-7d8e5d4f24af-0' tool_calls=[{'name': 'multiply', 'args': {'first_int': 5, 'second_int': 42}, 'id': 'call_d088275851c140529ed2ad', 'type': 'tool_call'}] usage_metadata={'input_tokens': 176, 'output_tokens': 277, 'total_tokens': 453, 'input_token_details': {}, 'output_token_details': {}}\n"
]
}
],
"source": [
"from langchain_core.tools import tool\n",
"from langchain_qwq import ChatQwQ\n",
"\n",
"\n",
"@tool\n",
"def multiply(first_int: int, second_int: int) -> int:\n",
" \"\"\"Multiply two integers together.\"\"\"\n",
" return first_int * second_int\n",
"\n",
"\n",
"llm = ChatQwQ()\n",
"\n",
"llm_with_tools = llm.bind_tools([multiply])\n",
"\n",
"msg = llm_with_tools.invoke(\"What's 5 times forty two\")\n",
"\n",
"print(msg)"
]
},
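{
"cell_type": "markdown",
"id": "8e3b1c2a",
"metadata": {},
"source": [
"From here you can execute the tool with the arguments the model chose and pass the result back to the model if needed. A minimal sketch reusing the `msg` and `multiply` objects above (this round-trip step is an illustration, not part of the original example):\n",
"\n",
"```python\n",
"# Run the first tool call the model returned with the arguments it produced.\n",
"tool_call = msg.tool_calls[0]\n",
"result = multiply.invoke(tool_call[\"args\"])  # 5 * 42 -> 210\n",
"print(result)\n",
"```"
]
},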
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatQwQ features and configurations head to the [API reference](https://pypi.org/project/langchain-qwq/)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.13.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -0,0 +1,276 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RunPod Chat Model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Get started with RunPod chat models.\n",
"\n",
"## Overview\n",
"\n",
"This guide covers how to use the LangChain `ChatRunPod` class to interact with chat models hosted on [RunPod Serverless](https://www.runpod.io/serverless-gpu)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"1. **Install the package:**\n",
" ```bash\n",
" pip install -qU langchain-runpod\n",
" ```\n",
"2. **Deploy a Chat Model Endpoint:** Follow the setup steps in the [RunPod Provider Guide](/docs/integrations/providers/runpod#setup) to deploy a compatible chat model endpoint on RunPod Serverless and get its Endpoint ID.\n",
"3. **Set Environment Variables:** Make sure `RUNPOD_API_KEY` and `RUNPOD_ENDPOINT_ID` (or a specific `RUNPOD_CHAT_ENDPOINT_ID`) are set."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"# Make sure environment variables are set (or pass them directly to ChatRunPod)\n",
"if \"RUNPOD_API_KEY\" not in os.environ:\n",
" os.environ[\"RUNPOD_API_KEY\"] = getpass.getpass(\"Enter your RunPod API Key: \")\n",
"\n",
"if \"RUNPOD_ENDPOINT_ID\" not in os.environ:\n",
" os.environ[\"RUNPOD_ENDPOINT_ID\"] = input(\n",
" \"Enter your RunPod Endpoint ID (used if RUNPOD_CHAT_ENDPOINT_ID is not set): \"\n",
" )\n",
"\n",
"# Optionally use a different endpoint ID specifically for chat models\n",
"# if \"RUNPOD_CHAT_ENDPOINT_ID\" not in os.environ:\n",
"# os.environ[\"RUNPOD_CHAT_ENDPOINT_ID\"] = input(\"Enter your RunPod Chat Endpoint ID (Optional): \")\n",
"\n",
"chat_endpoint_id = os.environ.get(\n",
" \"RUNPOD_CHAT_ENDPOINT_ID\", os.environ.get(\"RUNPOD_ENDPOINT_ID\")\n",
")\n",
"if not chat_endpoint_id:\n",
" raise ValueError(\n",
" \"No RunPod Endpoint ID found. Please set RUNPOD_ENDPOINT_ID or RUNPOD_CHAT_ENDPOINT_ID.\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Initialize the `ChatRunPod` class. You can pass model-specific parameters via `model_kwargs` and configure polling behavior."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_runpod import ChatRunPod\n",
"\n",
"chat = ChatRunPod(\n",
" runpod_endpoint_id=chat_endpoint_id, # Specify the correct endpoint ID\n",
" model_kwargs={\n",
" \"max_new_tokens\": 512,\n",
" \"temperature\": 0.7,\n",
" \"top_p\": 0.9,\n",
" # Add other parameters supported by your endpoint handler\n",
" },\n",
" # Optional: Adjust polling\n",
" # poll_interval=0.2,\n",
" # max_polling_attempts=150\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation\n",
"\n",
"Use the standard LangChain `.invoke()` and `.ainvoke()` methods to call the model. Streaming is also supported via `.stream()` and `.astream()` (simulated by polling the RunPod `/stream` endpoint)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"messages = [\n",
" SystemMessage(content=\"You are a helpful AI assistant.\"),\n",
" HumanMessage(content=\"What is the RunPod Serverless API flow?\"),\n",
"]\n",
"\n",
"# Invoke (Sync)\n",
"try:\n",
" response = chat.invoke(messages)\n",
" print(\"--- Sync Invoke Response ---\")\n",
" print(response.content)\n",
"except Exception as e:\n",
" print(\n",
" f\"Error invoking Chat Model: {e}. Ensure endpoint ID/API key are correct and endpoint is active/compatible.\"\n",
" )\n",
"\n",
"# Stream (Sync, simulated via polling /stream)\n",
"print(\"\\n--- Sync Stream Response ---\")\n",
"try:\n",
" for chunk in chat.stream(messages):\n",
" print(chunk.content, end=\"\", flush=True)\n",
" print() # Newline\n",
"except Exception as e:\n",
" print(\n",
" f\"\\nError streaming Chat Model: {e}. Ensure endpoint handler supports streaming output format.\"\n",
" )\n",
"\n",
"### Async Usage\n",
"\n",
"# AInvoke (Async)\n",
"try:\n",
" async_response = await chat.ainvoke(messages)\n",
" print(\"--- Async Invoke Response ---\")\n",
" print(async_response.content)\n",
"except Exception as e:\n",
" print(f\"Error invoking Chat Model asynchronously: {e}.\")\n",
"\n",
"# AStream (Async)\n",
"print(\"\\n--- Async Stream Response ---\")\n",
"try:\n",
" async for chunk in chat.astream(messages):\n",
" print(chunk.content, end=\"\", flush=True)\n",
" print() # Newline\n",
"except Exception as e:\n",
" print(\n",
" f\"\\nError streaming Chat Model asynchronously: {e}. Ensure endpoint handler supports streaming output format.\\n\"\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"The chat model integrates seamlessly with LangChain Expression Language (LCEL) chains."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "plaintext"
}
},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", \"You are a helpful assistant.\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"parser = StrOutputParser()\n",
"\n",
"chain = prompt | chat | parser\n",
"\n",
"try:\n",
" chain_response = chain.invoke(\n",
" {\"input\": \"Explain the concept of serverless computing in simple terms.\"}\n",
" )\n",
" print(\"--- Chain Response ---\")\n",
" print(chain_response)\n",
"except Exception as e:\n",
" print(f\"Error running chain: {e}\")\n",
"\n",
"\n",
"# Async chain\n",
"try:\n",
" async_chain_response = await chain.ainvoke(\n",
" {\"input\": \"What are the benefits of using RunPod for AI/ML workloads?\"}\n",
" )\n",
" print(\"--- Async Chain Response ---\")\n",
" print(async_chain_response)\n",
"except Exception as e:\n",
" print(f\"Error running async chain: {e}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Model Features (Endpoint Dependent)\n",
"\n",
"The availability of advanced features depends **heavily** on the specific implementation of your RunPod endpoint handler. The `ChatRunPod` integration provides the basic framework, but the handler must support the underlying functionality.\n",
"\n",
"| Feature | Integration Support | Endpoint Dependent? | Notes |\n",
"| :--------------------------------------------------------- | :-----------------: | :-----------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n",
"| [Tool calling](/docs/how_to/tool_calling) | ❌ | ✅ | Requires handler to process tool definitions and return tool calls (e.g., OpenAI format). Integration needs parsing logic. |\n",
"| [Structured output](/docs/how_to/structured_output) | ❌ | ✅ | Requires handler support for forcing structured output (JSON mode, function calling). Integration needs parsing logic. |\n",
"| JSON mode | ❌ | ✅ | Requires handler to accept a `json_mode` parameter (or similar) and guarantee JSON output. |\n",
"| [Image input](/docs/how_to/multimodal_inputs) | ❌ | ✅ | Requires multimodal handler accepting image data (e.g., base64). Integration does not support multimodal messages. |\n",
"| Audio input | ❌ | ✅ | Requires handler accepting audio data. Integration does not support audio messages. |\n",
"| Video input | ❌ | ✅ | Requires handler accepting video data. Integration does not support video messages. |\n",
"| [Token-level streaming](/docs/how_to/chat_streaming) | ✅ (Simulated) | ✅ | Polls `/stream`. Requires handler to populate `stream` list in status response with token chunks (e.g., `[{\"output\": \"token\"}]`). True low-latency streaming not built-in. |\n",
"| Native async | ✅ | ✅ | Core `ainvoke`/`astream` implemented. Relies on endpoint handler performance. |\n",
"| [Token usage](/docs/how_to/chat_token_usage_tracking) | ❌ | ✅ | Requires handler to return `prompt_tokens`, `completion_tokens` in the final response. Integration currently does not parse this. |\n",
"| [Logprobs](/docs/how_to/logprobs) | ❌ | ✅ | Requires handler to return log probabilities. Integration currently does not parse this. |\n"
]
},
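{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a concrete reference for the streaming row above, here is a sketch of the payload shapes the integration polls for. These are illustrative examples built from the chunk format noted in the table (`[{\"output\": \"token\"}]`); the exact fields depend on your endpoint handler:\n",
"\n",
"```python\n",
"# Hypothetical final /status response once a job completes (sketch):\n",
"final_status = {\"status\": \"COMPLETED\", \"output\": \"...generated text...\"}\n",
"\n",
"# Hypothetical /stream status response while a job is running (sketch):\n",
"stream_status = {\n",
"    \"status\": \"IN_PROGRESS\",\n",
"    \"stream\": [{\"output\": \"Hel\"}, {\"output\": \"lo\"}],  # token chunks\n",
"}\n",
"```"
]
},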
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Key Takeaway:** Standard chat invocation and simulated streaming work if the endpoint follows basic RunPod API conventions. Advanced features require specific handler implementations and potentially extending or customizing this integration package."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of the `ChatRunPod` class, parameters, and methods, refer to the source code or the generated API reference (if available).\n",
"\n",
"Link to source code: [https://github.com/runpod/langchain-runpod/blob/main/langchain_runpod/chat_models.py](https://github.com/runpod/langchain-runpod/blob/main/langchain_runpod/chat_models.py)"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,362 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "62d5a1ea",
"metadata": {},
"source": [
"# ChatSeekrFlow\n",
"\n",
"> [Seekr](https://www.seekr.com/) provides AI-powered solutions for structured, explainable, and transparent AI interactions.\n",
"\n",
"This notebook provides a quick overview for getting started with Seekr [chat models](/docs/concepts/chat_models). For detailed documentation of all `ChatSeekrFlow` features and configurations, head to the [API reference](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.seekrflow.ChatSeekrFlow.html).\n",
"\n",
"## Overview\n",
"\n",
"`ChatSeekrFlow` class wraps a chat model endpoint hosted on SeekrFlow, enabling seamless integration with LangChain applications.\n",
"\n",
"### Integration Details\n",
"\n",
"| Class | Package | Local | Serializable | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: |\n",
"| [ChatSeekrFlow](https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.seekrflow.ChatSeekrFlow.html) | [seekrai](https://python.langchain.com/docs/integrations/providers/seekr/) | ❌ | beta | ![PyPI - Downloads](https://img.shields.io/pypi/dm/seekrai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/seekrai?style=flat-square&label=%20) |\n",
"\n",
"### Model Features\n",
"\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ |\n",
"\n",
"### Supported Methods\n",
"`ChatSeekrFlow` supports all methods of `ChatModel`, **except async APIs**.\n",
"\n",
"### Endpoint Requirements\n",
"\n",
"The serving endpoint `ChatSeekrFlow` wraps **must** have OpenAI-compatible chat input/output format. It can be used for:\n",
"1. **Fine-tuned Seekr models**\n",
"2. **Custom SeekrFlow models**\n",
"3. **RAG-enabled models using Seekr's retrieval system**\n",
"\n",
"For async usage, please refer to `AsyncChatSeekrFlow` (coming soon).\n"
]
},
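{
"cell_type": "markdown",
"id": "7c2d9e10",
"metadata": {},
"source": [
"For reference, \"OpenAI-compatible\" means the endpoint exchanges messages in the familiar role/content shape. A minimal sketch of the expected request and response bodies (illustrative only; the model name is a placeholder and extra fields depend on your endpoint):\n",
"\n",
"```python\n",
"# Request body the wrapped endpoint should accept (sketch):\n",
"request = {\n",
"    \"model\": \"my-seekrflow-model\",  # hypothetical model name\n",
"    \"messages\": [{\"role\": \"user\", \"content\": \"Hello!\"}],\n",
"}\n",
"\n",
"# Response body it should return (sketch):\n",
"response = {\"choices\": [{\"message\": {\"role\": \"assistant\", \"content\": \"Hi!\"}}]}\n",
"```"
]
},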
{
"cell_type": "markdown",
"id": "93fea471",
"metadata": {},
"source": [
"# Getting Started with ChatSeekrFlow in LangChain\n",
"\n",
"This notebook covers how to use SeekrFlow as a chat model in LangChain."
]
},
{
"cell_type": "markdown",
"id": "2f320c17",
"metadata": {},
"source": [
"## Setup\n",
"\n",
"Ensure you have the necessary dependencies installed:\n",
"\n",
"```bash\n",
"pip install seekrai langchain langchain-community\n",
"```\n",
"\n",
"You must also have an API key from Seekr to authenticate requests.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "911ca53c",
"metadata": {},
"outputs": [],
"source": [
"# Standard library\n",
"import getpass\n",
"import os\n",
"\n",
"# Third-party\n",
"from langchain.prompts import ChatPromptTemplate\n",
"from langchain.schema import HumanMessage\n",
"from langchain_core.runnables import RunnableSequence\n",
"\n",
"# OSS SeekrFlow integration\n",
"from langchain_seekrflow import ChatSeekrFlow\n",
"from seekrai import SeekrFlow"
]
},
{
"cell_type": "markdown",
"id": "150461cb",
"metadata": {},
"source": [
"## API Key Setup\n",
"\n",
"You'll need to set your API key as an environment variable to authenticate requests.\n",
"\n",
"Run the below cell.\n",
"\n",
"Or manually assign it before running queries:\n",
"\n",
"```python\n",
"SEEKR_API_KEY = \"your-api-key-here\"\n",
"```\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "38afcd6e",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"SEEKR_API_KEY\"] = getpass.getpass(\"Enter your Seekr API key:\")"
]
},
{
"cell_type": "markdown",
"id": "82d83c0e",
"metadata": {},
"source": [
"## Instantiation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "71b14751",
"metadata": {},
"outputs": [],
"source": [
"os.environ[\"SEEKR_API_KEY\"]\n",
"seekr_client = SeekrFlow(api_key=SEEKR_API_KEY)\n",
"\n",
"llm = ChatSeekrFlow(\n",
" client=seekr_client, model_name=\"meta-llama/Meta-Llama-3-8B-Instruct\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "1046e86c",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f61a60f6",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Hello there! I'm Seekr, nice to meet you! What brings you here today? Do you have a question, or are you looking for some help with something? I'm all ears (or rather, all text)!\n"
]
}
],
"source": [
"response = llm.invoke([HumanMessage(content=\"Hello, Seekr!\")])\n",
"print(response.content)"
]
},
{
"cell_type": "markdown",
"id": "853b0349",
"metadata": {},
"source": [
"## Chaining"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "35fca3ec",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='The translation of \"Good morning\" in French is:\\n\\n\"Bonne journée\"' additional_kwargs={} response_metadata={}\n"
]
}
],
"source": [
"prompt = ChatPromptTemplate.from_template(\"Translate to French: {text}\")\n",
"\n",
"chain: RunnableSequence = prompt | llm\n",
"result = chain.invoke({\"text\": \"Good morning\"})\n",
"print(result)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a7b28b8d",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"🔹 Testing Sync `stream()` (Streaming)...\n",
"Here is a haiku:\n",
"\n",
"Golden sunset fades\n",
"Ripples on the quiet lake\n",
"Peaceful evening sky"
]
}
],
"source": [
"def test_stream():\n",
" \"\"\"Test synchronous invocation in streaming mode.\"\"\"\n",
" print(\"\\n🔹 Testing Sync `stream()` (Streaming)...\")\n",
"\n",
" for chunk in llm.stream([HumanMessage(content=\"Write me a haiku.\")]):\n",
" print(chunk.content, end=\"\", flush=True)\n",
"\n",
"\n",
"# ✅ Ensure streaming is enabled\n",
"llm = ChatSeekrFlow(\n",
" client=seekr_client,\n",
" model_name=\"meta-llama/Meta-Llama-3-8B-Instruct\",\n",
" streaming=True, # ✅ Enable streaming\n",
")\n",
"\n",
"# ✅ Run sync streaming test\n",
"test_stream()"
]
},
{
"cell_type": "markdown",
"id": "b3847b34",
"metadata": {},
"source": [
"## Error Handling & Debugging"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "6bc38b48",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Running test: Missing Client\n",
"✅ Expected Error: SeekrFlow client cannot be None.\n",
"Running test: Missing Model Name\n",
"✅ Expected Error: A valid model name must be provided.\n"
]
}
],
"source": [
"# Define a minimal mock SeekrFlow client\n",
"class MockSeekrClient:\n",
" \"\"\"Mock SeekrFlow API client that mimics the real API structure.\"\"\"\n",
"\n",
" class MockChat:\n",
" \"\"\"Mock Chat object with a completions method.\"\"\"\n",
"\n",
" class MockCompletions:\n",
" \"\"\"Mock Completions object with a create method.\"\"\"\n",
"\n",
" def create(self, *args, **kwargs):\n",
" return {\n",
" \"choices\": [{\"message\": {\"content\": \"Mock response\"}}]\n",
" } # Mimic API response\n",
"\n",
" completions = MockCompletions()\n",
"\n",
" chat = MockChat()\n",
"\n",
"\n",
"def test_initialization_errors():\n",
" \"\"\"Test that invalid ChatSeekrFlow initializations raise expected errors.\"\"\"\n",
"\n",
" test_cases = [\n",
" {\n",
" \"name\": \"Missing Client\",\n",
" \"args\": {\"client\": None, \"model_name\": \"seekrflow-model\"},\n",
" \"expected_error\": \"SeekrFlow client cannot be None.\",\n",
" },\n",
" {\n",
" \"name\": \"Missing Model Name\",\n",
" \"args\": {\"client\": MockSeekrClient(), \"model_name\": \"\"},\n",
" \"expected_error\": \"A valid model name must be provided.\",\n",
" },\n",
" ]\n",
"\n",
" for test in test_cases:\n",
" try:\n",
" print(f\"Running test: {test['name']}\")\n",
" faulty_llm = ChatSeekrFlow(**test[\"args\"])\n",
"\n",
" # If no error is raised, fail the test\n",
" print(f\"❌ Test '{test['name']}' failed: No error was raised!\")\n",
" except Exception as e:\n",
" error_msg = str(e)\n",
" assert test[\"expected_error\"] in error_msg, f\"Unexpected error: {error_msg}\"\n",
" print(f\"✅ Expected Error: {error_msg}\")\n",
"\n",
"\n",
"# Run test\n",
"test_initialization_errors()"
]
},
{
"cell_type": "markdown",
"id": "d1c9ddf3",
"metadata": {},
"source": [
"## API reference"
]
},
{
"cell_type": "markdown",
"id": "411a8bea",
"metadata": {},
"source": [
"- `ChatSeekrFlow` class: [`langchain_seekrflow.ChatSeekrFlow`](https://github.com/benfaircloth/langchain-seekrflow/blob/main/langchain_seekrflow/seekrflow.py)\n",
"- PyPI package: [`langchain-seekrflow`](https://pypi.org/project/langchain-seekrflow/)\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,267 +1,265 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Together\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatTogether\n",
"\n",
"\n",
"This page will help you get started with Together AI [chat models](../../concepts/chat_models.mdx). For detailed documentation of all ChatTogether features and configurations head to the [API reference](https://python.langchain.com/api_reference/together/chat_models/langchain_together.chat_models.ChatTogether.html).\n",
"\n",
"[Together AI](https://www.together.ai/) offers an API to query [50+ leading open-source models](https://docs.together.ai/docs/chat-models).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/togetherai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatTogether](https://python.langchain.com/api_reference/together/chat_models/langchain_together.chat_models.ChatTogether.html) | [langchain-together](https://python.langchain.com/api_reference/together/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-together?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-together?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |\n",
"\n",
"## Setup\n",
"\n",
"To access Together models you'll need to create a Together account, get an API key, and install the `langchain-together` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [this page](https://api.together.ai) to sign up to Together and generate an API key. Once you've done this set the TOGETHER_API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"TOGETHER_API_KEY\" not in os.environ:\n",
"    os.environ[\"TOGETHER_API_KEY\"] = getpass.getpass(\"Enter your Together API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Together integration lives in the `langchain-together` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-together"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_together import ChatTogether\n",
"\n",
"llm = ChatTogether(\n",
"    model=\"meta-llama/Llama-3-70b-chat-hf\",\n",
"    temperature=0,\n",
"    max_tokens=None,\n",
"    timeout=None,\n",
"    max_retries=2,\n",
"    # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore la programmation.\", response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 35, 'total_tokens': 44}, 'model_name': 'meta-llama/Llama-3-70b-chat-hf', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-eabcbe33-cdd8-45b8-ab0b-f90b6e7dfad8-0', usage_metadata={'input_tokens': 35, 'output_tokens': 9, 'total_tokens': 44})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
"    (\n",
"        \"system\",\n",
"        \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
"    ),\n",
"    (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore la programmation.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 7, 'prompt_tokens': 30, 'total_tokens': 37}, 'model_name': 'meta-llama/Llama-3-70b-chat-hf', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-a249aa24-ee31-46ba-9bf9-f4eb135b0a95-0', usage_metadata={'input_tokens': 30, 'output_tokens': 7, 'total_tokens': 37})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
"    [\n",
"        (\n",
"            \"system\",\n",
"            \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
"        ),\n",
"        (\"human\", \"{input}\"),\n",
"    ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
"    {\n",
"        \"input_language\": \"English\",\n",
"        \"output_language\": \"German\",\n",
"        \"input\": \"I love programming.\",\n",
"    }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatTogether features and configurations head to the API reference: https://python.langchain.com/api_reference/together/chat_models/langchain_together.chat_models.ChatTogether.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -5,21 +5,38 @@
"id": "134a0785",
"metadata": {},
"source": [
"# Vectara Chat\n",
"## Overview\n",
"\n",
"[Vectara](https://vectara.com/) is the trusted AI Assistant and Agent platform which focuses on enterprise readiness for mission-critical applications.\n",
"\n",
"Vectara serverless RAG-as-a-service provides all the components of RAG behind an easy-to-use API, including:\n",
"1. A way to extract text from files (PDF, PPT, DOCX, etc)\n",
"2. ML-based chunking that provides state of the art performance.\n",
"3. The [Boomerang](https://vectara.com/how-boomerang-takes-retrieval-augmented-generation-to-the-next-level-via-grounded-generation/) embeddings model.\n",
"4. Its own internal vector database where text chunks and embedding vectors are stored.\n",
"5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments (including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) as well as multiple reranking options such as the [multi-lingual relevance reranker](https://www.vectara.com/blog/deep-dive-into-vectara-multilingual-reranker-v1-state-of-the-art-reranker-across-100-languages), [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/), [UDF reranker](https://www.vectara.com/blog/rag-with-user-defined-functions-based-reranking). \n",
"5. A query service that automatically encodes the query into embedding, and retrieves the most relevant text segments, including support for [Hybrid Search](https://docs.vectara.com/docs/api-reference/search-apis/lexical-matching) as well as multiple reranking options such as the [multi-lingual relevance reranker](https://www.vectara.com/blog/deep-dive-into-vectara-multilingual-reranker-v1-state-of-the-art-reranker-across-100-languages), [MMR](https://vectara.com/get-diverse-results-and-comprehensive-summaries-with-vectaras-mmr-reranker/), [UDF reranker](https://www.vectara.com/blog/rag-with-user-defined-functions-based-reranking). \n",
"6. An LLM to for creating a [generative summary](https://docs.vectara.com/docs/learn/grounded-generation/grounded-generation-overview), based on the retrieved documents (context), including citations.\n",
"\n",
"See the [Vectara API documentation](https://docs.vectara.com/docs/) for more information on how to use the API.\n",
"For more information:\n",
"- [Documentation](https://docs.vectara.com/docs/)\n",
"- [API Playground](https://docs.vectara.com/docs/rest-api/)\n",
"- [Quickstart](https://docs.vectara.com/docs/quickstart)\n",
"\n",
"This notebook shows how to use Vectara's [Chat](https://docs.vectara.com/docs/api-reference/chat-apis/chat-apis-overview) functionality, which provides automatic storage of conversation history and ensures follow up questions consider that history."
"\n",
"This notebook shows how to use Vectara's [Chat](https://docs.vectara.com/docs/api-reference/chat-apis/chat-apis-overview) functionality, which provides automatic storage of conversation history and ensures follow up questions consider that history.\n",
"\n",
"### Setup\n",
"\n",
"To use the `VectaraVectorStore` you first need to install the partner package.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4a2f525-4805-4880-8bfa-18fe6f1cd1c7",
"metadata": {},
"outputs": [],
"source": [
"!uv pip install -U pip && uv pip install -qU langchain-vectara"
]
},
{
@@ -27,17 +44,19 @@
"id": "56372c5b",
"metadata": {},
"source": [
"# Getting Started\n",
"## Getting Started\n",
"\n",
"To get started, use the following steps:\n",
"1. If you don't already have one, [Sign up](https://www.vectara.com/integrations/langchain) for your free Vectara trial. Once you have completed your sign up you will have a Vectara customer ID. You can find your customer ID by clicking on your name, on the top-right of the Vectara console window.\n",
"1. If you don't already have one, [Sign up](https://www.vectara.com/integrations/langchain) for your free Vectara trial.\n",
"2. Within your account you can create one or more corpora. Each corpus represents an area that stores text data upon ingest from input documents. To create a corpus, use the **\"Create Corpus\"** button. You then provide a name to your corpus as well as a description. Optionally you can define filtering attributes and apply some advanced options. If you click on your created corpus, you can see its name and corpus ID right on the top.\n",
"3. Next you'll need to create API keys to access the corpus. Click on the **\"Access Control\"** tab in the corpus view and then the **\"Create API Key\"** button. Give your key a name, and choose whether you want query-only or query+index for your key. Click \"Create\" and you now have an active API key. Keep this key confidential. \n",
"\n",
"To use LangChain with Vectara, you'll need to have these three values: `customer ID`, `corpus ID` and `api_key`.\n",
"You can provide those to LangChain in two ways:\n",
"To use LangChain with Vectara, you'll need to have these two values: `corpus_key` and `api_key`.\n",
"You can provide `VECTARA_API_KEY` to LangChain in two ways:\n",
"\n",
"1. Include in your environment these three variables: `VECTARA_CUSTOMER_ID`, `VECTARA_CORPUS_ID` and `VECTARA_API_KEY`.\n",
"## Instantiation\n",
"\n",
"1. Include in your environment these two variables: `VECTARA_API_KEY`.\n",
"\n",
" For example, you can set these variables using os.environ and getpass as follows:\n",
"\n",
@@ -45,8 +64,6 @@
"import os\n",
"import getpass\n",
"\n",
"os.environ[\"VECTARA_CUSTOMER_ID\"] = getpass.getpass(\"Vectara Customer ID:\")\n",
"os.environ[\"VECTARA_CORPUS_ID\"] = getpass.getpass(\"Vectara Corpus ID:\")\n",
"os.environ[\"VECTARA_API_KEY\"] = getpass.getpass(\"Vectara API Key:\")\n",
"```\n",
"\n",
@@ -54,17 +71,16 @@
"\n",
"```python\n",
"vectara = Vectara(\n",
" vectara_customer_id=vectara_customer_id,\n",
" vectara_corpus_id=vectara_corpus_id,\n",
" vectara_api_key=vectara_api_key\n",
" )\n",
" vectara_api_key=vectara_api_key\n",
")\n",
"```\n",
"\n",
"In this notebook we assume they are provided in the environment."
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 2,
"id": "70c4e529",
"metadata": {
"tags": []
@@ -73,14 +89,15 @@
"source": [
"import os\n",
"\n",
"os.environ[\"VECTARA_API_KEY\"] = \"<YOUR_VECTARA_API_KEY>\"\n",
"os.environ[\"VECTARA_CORPUS_ID\"] = \"<YOUR_VECTARA_CORPUS_ID>\"\n",
"os.environ[\"VECTARA_CUSTOMER_ID\"] = \"<YOUR_VECTARA_CUSTOMER_ID>\"\n",
"os.environ[\"VECTARA_API_KEY\"] = \"<VECTARA_API_KEY>\"\n",
"os.environ[\"VECTARA_CORPUS_KEY\"] = \"<VECTARA_CORPUS_KEY>\"\n",
"\n",
"from langchain_community.vectorstores import Vectara\n",
"from langchain_community.vectorstores.vectara import (\n",
" RerankConfig,\n",
" SummaryConfig,\n",
"from langchain_vectara import Vectara\n",
"from langchain_vectara.vectorstores import (\n",
" CorpusConfig,\n",
" GenerationConfig,\n",
" MmrReranker,\n",
" SearchConfig,\n",
" VectaraQueryConfig,\n",
")"
]
@@ -101,7 +118,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 3,
"id": "01c46e92",
"metadata": {
"tags": []
@@ -110,10 +127,11 @@
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"\n",
"loader = TextLoader(\"state_of_the_union.txt\")\n",
"loader = TextLoader(\"../document_loaders/example_data/state_of_the_union.txt\")\n",
"documents = loader.load()\n",
"\n",
"vectara = Vectara.from_documents(documents, embedding=None)"
"corpus_key = os.getenv(\"VECTARA_CORPUS_KEY\")\n",
"vectara = Vectara.from_documents(documents, embedding=None, corpus_key=corpus_key)"
]
},
{
@@ -126,18 +144,29 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 4,
"id": "1b41a10b-bf68-4689-8f00-9aed7675e2ab",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"summary_config = SummaryConfig(is_enabled=True, max_results=7, response_lang=\"eng\")\n",
"rerank_config = RerankConfig(reranker=\"mmr\", rerank_k=50, mmr_diversity_bias=0.2)\n",
"config = VectaraQueryConfig(\n",
" k=10, lambda_val=0.005, rerank_config=rerank_config, summary_config=summary_config\n",
"generation_config = GenerationConfig(\n",
" max_used_search_results=7,\n",
" response_language=\"eng\",\n",
" generation_preset_name=\"vectara-summary-ext-24-05-med-omni\",\n",
" enable_factual_consistency_score=True,\n",
")\n",
"search_config = SearchConfig(\n",
" corpora=[CorpusConfig(corpus_key=corpus_key, limit=25)],\n",
" reranker=MmrReranker(diversity_bias=0.2),\n",
")\n",
"\n",
"config = VectaraQueryConfig(\n",
" search=search_config,\n",
" generation=generation_config,\n",
")\n",
"\n",
"\n",
"bot = vectara.as_chat(config)"
]
@@ -147,12 +176,15 @@
"id": "83f38c18-ac82-45f4-a79e-8b37ce1ae115",
"metadata": {},
"source": [
"\n",
"## Invocation\n",
"\n",
"Here's an example of asking a question with no chat history"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "bc672290-8a8b-4828-a90c-f1bbdd6b3920",
"metadata": {
"tags": []
@@ -161,10 +193,10 @@
{
"data": {
"text/plain": [
"'The President expressed gratitude to Justice Breyer and highlighted the significance of nominating Ketanji Brown Jackson to the Supreme Court, praising her legal expertise and commitment to upholding excellence [1]. The President also reassured the public about the situation with gas prices and the conflict in Ukraine, emphasizing unity with allies and the belief that the world will emerge stronger from these challenges [2][4]. Additionally, the President shared personal experiences related to economic struggles and the importance of passing the American Rescue Plan to support those in need [3]. The focus was also on job creation and economic growth, acknowledging the impact of inflation on families [5]. While addressing cancer as a significant issue, the President discussed plans to enhance cancer research and support for patients and families [7].'"
"'The president stated that nominating someone to serve on the United States Supreme Court is one of the most serious constitutional responsibilities. He nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, describing her as one of the nations top legal minds who will continue Justice Breyers legacy of excellence and noting her experience as a former top litigator in private practice [1].'"
]
},
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -183,7 +215,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 6,
"id": "9c95460b-7116-4155-a9d2-c0fb027ee592",
"metadata": {
"tags": []
@@ -192,10 +224,10 @@
{
"data": {
"text/plain": [
"\"In his remarks, the President specified that Ketanji Brown Jackson is succeeding Justice Breyer on the United States Supreme Court[1]. The President praised Jackson as a top legal mind who will continue Justice Breyer's legacy of excellence. The nomination of Jackson was highlighted as a significant constitutional responsibility of the President[1]. The President emphasized the importance of this nomination and the qualities that Jackson brings to the role. The focus was on the transition from Justice Breyer to Judge Ketanji Brown Jackson on the Supreme Court[1].\""
"'Yes, the president mentioned that Ketanji Brown Jackson succeeded Justice Breyer [1].'"
]
},
"execution_count": 5,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
@@ -217,7 +249,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 7,
"id": "936dc62f",
"metadata": {
"tags": []
@@ -227,14 +259,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Judge Ketanji Brown Jackson is a nominee for the United States Supreme Court, known for her legal expertise and experience as a former litigator. She is praised for her potential to continue the legacy of excellence on the Court[1]. While the search results provide information on various topics like innovation, economic growth, and healthcare initiatives, they do not directly address Judge Ketanji Brown Jackson's specific accomplishments. Therefore, I do not have enough information to answer this question."
"The president acknowledged the significant impact of COVID-19 on the nation, expressing understanding of the public's fatigue and frustration. He emphasized the need to view COVID-19 not as a partisan issue but as a serious disease, urging unity among Americans. The president highlighted the progress made, noting that severe cases have decreased significantly, and mentioned new CDC guidelines allowing most Americans to be mask-free. He also pointed out the efforts to vaccinate the nation and provide economic relief, and the ongoing commitment to vaccinate the world [2], [3], [5]."
]
}
],
"source": [
"output = {}\n",
"curr_key = None\n",
"for chunk in bot.stream(\"what about her accopmlishments?\"):\n",
"for chunk in bot.stream(\"what did he said about the covid?\"):\n",
" for key in chunk:\n",
" if key not in output:\n",
" output[key] = chunk[key]\n",
@@ -244,6 +276,83 @@
" print(chunk[key], end=\"\", flush=True)\n",
" curr_key = key"
]
},
{
"cell_type": "markdown",
"id": "cefdf72b1d90085a",
"metadata": {
"collapsed": false
},
"source": [
"## Chaining\n",
"\n",
"For additional capabilities you can use chaining."
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "167bc806-395e-46bf-80cc-3c5d43164f42",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"So, the president talked about how the COVID-19 sickness has affected a lot of people in the country. He said that it's important for everyone to work together to fight the sickness, no matter what political party they are in. The president also mentioned that they are working hard to give vaccines to people to help protect them from getting sick. They are also giving money and help to people who need it, like food, housing, and cheaper health insurance. The president also said that they are sending vaccines to many other countries to help people all around the world stay healthy.\n"
]
}
],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(temperature=0)\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that explains the stuff to a five year old. Vectara is providing the answer.\",\n",
" ),\n",
" (\"human\", \"{vectara_response}\"),\n",
" ]\n",
")\n",
"\n",
"\n",
"def get_vectara_response(question: dict) -> str:\n",
" \"\"\"\n",
" Calls Vectara as_chat and returns the answer string. This encapsulates\n",
" the Vectara call.\n",
" \"\"\"\n",
" try:\n",
" response = bot.invoke(question[\"question\"])\n",
" return response[\"answer\"]\n",
" except Exception as e:\n",
" return \"I'm sorry, I couldn't get an answer from Vectara.\"\n",
"\n",
"\n",
"# Create the chain\n",
"chain = get_vectara_response | prompt | llm | StrOutputParser()\n",
"\n",
"\n",
"# Invoke the chain\n",
"result = chain.invoke({\"question\": \"what did he say about the covid?\"})\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"id": "3b8bb761-db4a-436c-8939-41e9f8652083",
"metadata": {
"collapsed": false
},
"source": [
"## API reference\n",
"\n",
"You can look at the [Chat](https://docs.vectara.com/docs/api-reference/chat-apis/chat-apis-overview) documentation for the details."
]
}
],
"metadata": {
@@ -262,7 +371,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.8"
"version": "3.12.0"
}
},
"nbformat": 4,

View File

@@ -1,244 +1,242 @@
{
"cells": [
{
"cell_type": "raw",
"id": "eb65deaa",
"metadata": {},
"source": [
"---\n",
"sidebar_label: vLLM Chat\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "8f82e243-f4ee-44e2-b417-099b6401ae3e",
"metadata": {},
"source": [
"# vLLM Chat\n",
"\n",
"vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API. This server can be queried in the same format as OpenAI API.\n",
"\n",
"## Overview\n",
"This will help you getting started with vLLM [chat models](/docs/concepts/chat_models), which leverage the `langchain-openai` package. For detailed documentation of all `ChatOpenAI` features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOpenAI](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [langchain_openai](https://python.langchain.com/api_reference/openai/) | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_openai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_openai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"Specific model features, such as tool calling, support for multi-modal inputs, and support for token-level streaming, will depend on the hosted model.\n",
"\n",
"## Setup\n",
"\n",
"See the vLLM docs [here](https://docs.vllm.ai/en/latest/).\n",
"\n",
"To access vLLM models through LangChain, you'll need to install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Authentication will depend on the specifics of the inference server."
]
},
{
"cell_type": "markdown",
"id": "c3b1707a-cf2c-4367-94e3-436c43402503",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e40bd5e-cbaa-41ef-aaf9-0858eb207184",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0739b647-609b-46d3-bdd3-e86fe4463288",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain vLLM integration can be accessed via the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7afcfbdc-56aa-4529-825a-8acbe7aa5241",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "2cf576d6-7b67-4937-bf99-39071e85720c",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "060a2e3d-d42f-4221-bd09-a9a06544dcd3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
")\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "bf24d732-68a9-44fd-b05d-4903ce5620c6",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"inference_server_url = \"http://localhost:8000/v1\"\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"mosaicml/mpt-7b\",\n",
" openai_api_key=\"EMPTY\",\n",
" openai_api_base=inference_server_url,\n",
" max_tokens=5,\n",
" temperature=0,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "34b18328-5e8b-4ff2-9b89-6fbb76b5c7f0",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "aea4e363-5688-4b07-82ed-6aa8153c2377",
"metadata": {
"tags": []
},
"outputs": [
"cells": [
{
"data": {
"text/plain": [
"AIMessage(content=' Io amo programmare', additional_kwargs={}, example=False)"
"cell_type": "raw",
"id": "eb65deaa",
"metadata": {},
"source": [
"---\n",
"sidebar_label: vLLM Chat\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "8f82e243-f4ee-44e2-b417-099b6401ae3e",
"metadata": {},
"source": [
"# vLLM Chat\n",
"\n",
"vLLM can be deployed as a server that mimics the OpenAI API protocol. This allows vLLM to be used as a drop-in replacement for applications using OpenAI API. This server can be queried in the same format as OpenAI API.\n",
"\n",
"## Overview\n",
"This will help you getting started with vLLM [chat models](/docs/concepts/chat_models), which leverage the `langchain-openai` package. For detailed documentation of all `ChatOpenAI` features and configurations head to the [API reference](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html).\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatOpenAI](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) | [langchain_openai](https://python.langchain.com/api_reference/openai/) | ✅ | beta | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_openai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_openai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"Specific model features-- such as tool calling, support for multi-modal inputs, support for token-level streaming, etc.-- will depend on the hosted model.\n",
"\n",
"## Setup\n",
"\n",
"See the vLLM docs [here](https://docs.vllm.ai/en/latest/).\n",
"\n",
"To access vLLM models through LangChain, you'll need to install the `langchain-openai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Authentication will depend on specifics of the inference server."
]
},
{
"cell_type": "markdown",
"id": "c3b1707a-cf2c-4367-94e3-436c43402503",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e40bd5e-cbaa-41ef-aaf9-0858eb207184",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\"\n",
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")"
]
},
{
"cell_type": "markdown",
"id": "0739b647-609b-46d3-bdd3-e86fe4463288",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain vLLM integration can be accessed via the `langchain-openai` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7afcfbdc-56aa-4529-825a-8acbe7aa5241",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]
},
{
"cell_type": "markdown",
"id": "2cf576d6-7b67-4937-bf99-39071e85720c",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "060a2e3d-d42f-4221-bd09-a9a06544dcd3",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from langchain_core.prompts.chat import (\n",
" ChatPromptTemplate,\n",
" HumanMessagePromptTemplate,\n",
" SystemMessagePromptTemplate,\n",
")\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "bf24d732-68a9-44fd-b05d-4903ce5620c6",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"inference_server_url = \"http://localhost:8000/v1\"\n",
"\n",
"llm = ChatOpenAI(\n",
" model=\"mosaicml/mpt-7b\",\n",
" openai_api_key=\"EMPTY\",\n",
" openai_api_base=inference_server_url,\n",
" max_tokens=5,\n",
" temperature=0,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "34b18328-5e8b-4ff2-9b89-6fbb76b5c7f0",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "aea4e363-5688-4b07-82ed-6aa8153c2377",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' Io amo programmare', additional_kwargs={}, example=False)"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" SystemMessage(\n",
" content=\"You are a helpful assistant that translates English to Italian.\"\n",
" ),\n",
" HumanMessage(\n",
" content=\"Translate the following sentence from English to Italian: I love programming.\"\n",
" ),\n",
"]\n",
"llm.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "a580a1e4-11a3-4277-bfba-bfb414ac7201",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dd0f4043-48bd-4245-8bdb-e7669666a277",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "265f5d51-0a76-4808-8d13-ef598ee6e366",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all features and configurations exposed via `langchain-openai`, head to the API reference: https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html\n",
"\n",
"Refer to the vLLM [documentation](https://docs.vllm.ai/en/latest/) as well."
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" SystemMessage(\n",
" content=\"You are a helpful assistant that translates English to Italian.\"\n",
" ),\n",
" HumanMessage(\n",
" content=\"Translate the following sentence from English to Italian: I love programming.\"\n",
" ),\n",
"]\n",
"llm.invoke(messages)"
]
},
{
"cell_type": "markdown",
"id": "a580a1e4-11a3-4277-bfba-bfb414ac7201",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dd0f4043-48bd-4245-8bdb-e7669666a277",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "265f5d51-0a76-4808-8d13-ef598ee6e366",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all features and configurations exposed via `langchain-openai`, head to the API reference: https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html\n",
"\n",
"Refer to the vLLM [documentation](https://docs.vllm.ai/en/latest/) as well."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,332 +1,330 @@
{
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: xAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatXAI\n",
"\n",
"\n",
"This page will help you get started with xAI [chat models](../../concepts/chat_models.mdx). For detailed documentation of all `ChatXAI` features and configurations head to the [API reference](https://python.langchain.com/api_reference/xai/chat_models/langchain_xai.chat_models.ChatXAI.html).\n",
"\n",
"[xAI](https://console.x.ai/) offers an API to interact with Grok models.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/xai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatXAI](https://python.langchain.com/api_reference/xai/chat_models/langchain_xai.chat_models.ChatXAI.html) | [langchain-xai](https://python.langchain.com/api_reference/xai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-xai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-xai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | \n",
"\n",
"## Setup\n",
"\n",
"To access xAI models you'll need to create an xAI account, get an API key, and install the `langchain-xai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [this page](https://console.x.ai/) to sign up for xAI and generate an API key. Once you've done this set the `XAI_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"XAI_API_KEY\" not in os.environ:\n",
" os.environ[\"XAI_API_KEY\"] = getpass.getpass(\"Enter your xAI API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain xAI integration lives in the `langchain-xai` package:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-xai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_xai import ChatXAI\n",
"\n",
"llm = ChatXAI(\n",
" model=\"grok-beta\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
"cells": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 30, 'total_tokens': 36, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'grok-beta', 'system_fingerprint': 'fp_14b89b2dfc', 'finish_reason': 'stop', 'logprobs': None}, id='run-adffb7a3-e48a-4f52-b694-340d85abe5c3-0', usage_metadata={'input_tokens': 30, 'output_tokens': 6, 'total_tokens': 36, 'input_token_details': {}, 'output_token_details': {}})"
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: xAI\n",
"---"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 7, 'prompt_tokens': 25, 'total_tokens': 32, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'grok-beta', 'system_fingerprint': 'fp_14b89b2dfc', 'finish_reason': 'stop', 'logprobs': None}, id='run-569fc8dc-101b-4e6d-864e-d4fa80df2b63-0', usage_metadata={'input_tokens': 25, 'output_tokens': 7, 'total_tokens': 32, 'input_token_details': {}, 'output_token_details': {}})"
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatXAI\n",
"\n",
"\n",
"This page will help you get started with xAI [chat models](../../concepts/chat_models.mdx). For detailed documentation of all `ChatXAI` features and configurations head to the [API reference](https://python.langchain.com/api_reference/xai/chat_models/langchain_xai.chat_models.ChatXAI.html).\n",
"\n",
"[xAI](https://console.x.ai/) offers an API to interact with Grok models.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/docs/integrations/chat/xai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatXAI](https://python.langchain.com/api_reference/xai/chat_models/langchain_xai.chat_models.ChatXAI.html) | [langchain-xai](https://python.langchain.com/api_reference/xai/index.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-xai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-xai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](../../how_to/tool_calling.ipynb) | [Structured output](../../how_to/structured_output.ipynb) | JSON mode | [Image input](../../how_to/multimodal_inputs.ipynb) | Audio input | Video input | [Token-level streaming](../../how_to/chat_streaming.ipynb) | Native async | [Token usage](../../how_to/chat_token_usage_tracking.ipynb) | [Logprobs](../../how_to/logprobs.ipynb) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ |\n",
"\n",
"## Setup\n",
"\n",
"To access xAI models you'll need to create an xAI account, get an API key, and install the `langchain-xai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [this page](https://console.x.ai/) to sign up for xAI and generate an API key. Once you've done this set the `XAI_API_KEY` environment variable:"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e074bce1-0994-4b83-b393-ae7aa7e21750",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"ChatXAI has a [tool calling](https://docs.x.ai/docs#capabilities) (we use \"tool calling\" and \"function calling\" interchangeably here) API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool. Tool-calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.\n",
"\n",
"### ChatXAI.bind_tools()\n",
"\n",
"With `ChatXAI.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to OpenAI tool schemas, which look like:\n",
"```\n",
"{\n",
" \"name\": \"...\",\n",
" \"description\": \"...\",\n",
" \"parameters\": {...} # JSONSchema\n",
"}\n",
"```\n",
"and passed in every model invocation."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c6bfe929-ec02-46bd-9d54-76350edddabc",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([GetWeather])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "5265c892-d8c2-48af-aef5-adbee1647ba6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I am retrieving the current weather for San Francisco.', additional_kwargs={'tool_calls': [{'id': '0', 'function': {'arguments': '{\"location\":\"San Francisco, CA\"}', 'name': 'GetWeather'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 151, 'total_tokens': 162, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'grok-beta', 'system_fingerprint': 'fp_14b89b2dfc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-73707da7-afec-4a52-bee1-a176b0ab8585-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': '0', 'type': 'tool_call'}], usage_metadata={'input_tokens': 151, 'output_tokens': 11, 'total_tokens': 162, 'input_token_details': {}, 'output_token_details': {}})"
"cell_type": "code",
"execution_count": null,
"id": "433e8d2b-9519-4b49-b2c4-7ab65b046c94",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"XAI_API_KEY\" not in os.environ:\n",
" os.environ[\"XAI_API_KEY\"] = getpass.getpass(\"Enter your xAI API key: \")"
]
},
{
"cell_type": "markdown",
"id": "72ee0c4b-9764-423a-9dbf-95129e185210",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain xAI integration lives in the `langchain-xai` package:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-xai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_xai import ChatXAI\n",
"\n",
"llm = ChatXAI(\n",
" model=\"grok-beta\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" timeout=None,\n",
" max_retries=2,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'adore programmer.\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 6, 'prompt_tokens': 30, 'total_tokens': 36, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'grok-beta', 'system_fingerprint': 'fp_14b89b2dfc', 'finish_reason': 'stop', 'logprobs': None}, id='run-adffb7a3-e48a-4f52-b694-340d85abe5c3-0', usage_metadata={'input_tokens': 30, 'output_tokens': 6, 'total_tokens': 36, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer.\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](../../how_to/sequence.ipynb) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 7, 'prompt_tokens': 25, 'total_tokens': 32, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'grok-beta', 'system_fingerprint': 'fp_14b89b2dfc', 'finish_reason': 'stop', 'logprobs': None}, id='run-569fc8dc-101b-4e6d-864e-d4fa80df2b63-0', usage_metadata={'input_tokens': 25, 'output_tokens': 7, 'total_tokens': 32, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "e074bce1-0994-4b83-b393-ae7aa7e21750",
"metadata": {},
"source": [
"## Tool calling\n",
"\n",
"ChatXAI has a [tool calling](https://docs.x.ai/docs#capabilities) (we use \"tool calling\" and \"function calling\" interchangeably here) API that lets you describe tools and their arguments, and have the model return a JSON object with a tool to invoke and the inputs to that tool. Tool-calling is extremely useful for building tool-using chains and agents, and for getting structured outputs from models more generally.\n",
"\n",
"### ChatXAI.bind_tools()\n",
"\n",
"With `ChatXAI.bind_tools`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to an OpenAI tool schemas, which looks like:\n",
"```\n",
"{\n",
" \"name\": \"...\",\n",
" \"description\": \"...\",\n",
" \"parameters\": {...} # JSONSchema\n",
"}\n",
"```\n",
"and passed in every model invocation."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c6bfe929-ec02-46bd-9d54-76350edddabc",
"metadata": {},
"outputs": [],
"source": [
"from pydantic import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm_with_tools = llm.bind_tools([GetWeather])"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "5265c892-d8c2-48af-aef5-adbee1647ba6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I am retrieving the current weather for San Francisco.', additional_kwargs={'tool_calls': [{'id': '0', 'function': {'arguments': '{\"location\":\"San Francisco, CA\"}', 'name': 'GetWeather'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 11, 'prompt_tokens': 151, 'total_tokens': 162, 'completion_tokens_details': None, 'prompt_tokens_details': None}, 'model_name': 'grok-beta', 'system_fingerprint': 'fp_14b89b2dfc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-73707da7-afec-4a52-bee1-a176b0ab8585-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': '0', 'type': 'tool_call'}], usage_metadata={'input_tokens': 151, 'output_tokens': 11, 'total_tokens': 162, 'input_token_details': {}, 'output_token_details': {}})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_msg = llm_with_tools.invoke(\n",
" \"what is the weather like in San Francisco\",\n",
")\n",
"ai_msg"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ChatXAI` features and configurations head to the API reference: https://python.langchain.com/api_reference/xai/chat_models/langchain_xai.chat_models.ChatXAI.html"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_msg = llm_with_tools.invoke(\n",
" \"what is the weather like in San Francisco\",\n",
")\n",
"ai_msg"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all `ChatXAI` features and configurations head to the API reference: https://python.langchain.com/api_reference/xai/chat_models/langchain_xai.chat_models.ChatXAI.html"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -1,229 +1,227 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatYI\n",
"\n",
"This will help you get started with Yi [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatYi features and configurations head to the [API reference](https://python.langchain.com/api_reference/lanchain_community/chat_models/lanchain_community.chat_models.yi.ChatYi.html).\n",
"\n",
"[01.AI](https://www.lingyiwanwu.com/en), founded by Dr. Kai-Fu Lee, is a global company at the forefront of AI 2.0. They offer cutting-edge large language models, including the Yi series, which range from 6B to hundreds of billions of parameters. 01.AI also provides multimodal models, an open API platform, and open-source options like Yi-34B/9B/6B and Yi-VL.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatYi](https://python.langchain.com/api_reference/lanchain_community/chat_models/lanchain_community.chat_models.yi.ChatYi.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access ChatYi models you'll need to create an 01.AI account, get an API key, and install the `langchain_community` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [01.AI](https://platform.01.ai) to sign up to 01.AI and generate an API key. Once you've done this set the `YI_API_KEY` environment variable:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"YI_API_KEY\" not in os.environ:\n",
" os.environ[\"YI_API_KEY\"] = getpass.getpass(\"Enter your Yi API key: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain Yi integration lives in the `langchain_community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.yi import ChatYi\n",
"\n",
"llm = ChatYi(\n",
" model=\"yi-large\",\n",
" temperature=0,\n",
" timeout=60,\n",
" yi_api_base=\"https://api.01.ai/v1/chat/completions\",\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
"cells": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Large Language Models (LLMs) have the potential to significantly impact healthcare by enhancing various aspects of patient care, research, and administrative processes. Here are some potential applications:\\n\\n1. **Clinical Documentation and Reporting**: LLMs can assist in generating patient reports and documentation by understanding and summarizing clinical notes, making the process more efficient and reducing the administrative burden on healthcare professionals.\\n\\n2. **Medical Coding and Billing**: These models can help in automating the coding process for medical billing by accurately translating clinical notes into standardized codes, reducing errors and improving billing efficiency.\\n\\n3. **Clinical Decision Support**: LLMs can analyze patient data and medical literature to provide evidence-based recommendations to healthcare providers, aiding in diagnosis and treatment planning.\\n\\n4. **Patient Education and Communication**: By simplifying medical jargon, LLMs can help in educating patients about their conditions, treatment options, and preventive care, improving patient engagement and health literacy.\\n\\n5. **Natural Language Processing (NLP) for EHRs**: LLMs can enhance NLP capabilities in Electronic Health Records (EHRs) systems, enabling better extraction of information from unstructured data, such as clinical notes, to support data-driven decision-making.\\n\\n6. **Drug Discovery and Development**: LLMs can analyze biomedical literature and clinical trial data to identify new drug candidates, predict drug interactions, and support the development of personalized medicine.\\n\\n7. **Telemedicine and Virtual Health Assistants**: Integrated into telemedicine platforms, LLMs can provide preliminary assessments and triage, offering patients basic health advice and determining the urgency of their needs, thus optimizing the utilization of healthcare resources.\\n\\n8. **Research and Literature Review**: LLMs can expedite the process of reviewing medical literature by quickly identifying relevant studies and summarizing findings, accelerating research and evidence-based practice.\\n\\n9. **Personalized Medicine**: By analyzing a patient's genetic information and medical history, LLMs can help in tailoring treatment plans and medication dosages, contributing to the advancement of personalized medicine.\\n\\n10. **Quality Improvement and Risk Assessment**: LLMs can analyze healthcare data to identify patterns that may indicate areas for quality improvement or potential risks, such as hospital-acquired infections or adverse drug events.\\n\\n11. **Mental Health Support**: LLMs can provide mental health support by offering coping strategies, mindfulness exercises, and preliminary assessments, serving as a complement to professional mental health services.\\n\\n12. **Continuing Medical Education (CME)**: LLMs can personalize CME by recommending educational content based on a healthcare provider's practice area, patient demographics, and emerging medical literature, ensuring that professionals stay updated with the latest advancements.\\n\\nWhile the applications of LLMs in healthcare are promising, it's crucial to address challenges such as data privacy, model bias, and the need for regulatory approval to ensure that these technologies are implemented safely and ethically.\", response_metadata={'token_usage': {'completion_tokens': 656, 'prompt_tokens': 40, 'total_tokens': 696}, 'model': 'yi-large'}, id='run-870850bd-e4bf-4265-8730-1736409c0acf-0')"
"cell_type": "markdown",
"metadata": {},
"source": [
"# ChatYI\n",
"\n",
"This will help you getting started with Yi [chat models](/docs/concepts/chat_models). For detailed documentation of all ChatYi features and configurations head to the [API reference](https://python.langchain.com/api_reference/lanchain_community/chat_models/lanchain_community.chat_models.yi.ChatYi.html).\n",
"\n",
"[01.AI](https://www.lingyiwanwu.com/en), founded by Dr. Kai-Fu Lee, is a global company at the forefront of AI 2.0. They offer cutting-edge large language models, including the Yi series, which range from 6B to hundreds of billions of parameters. 01.AI also provides multimodal models, an open API platform, and open-source options like Yi-34B/9B/6B and Yi-VL.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"\n",
"| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatYi](https://python.langchain.com/api_reference/lanchain_community/chat_models/lanchain_community.chat_models.yi.ChatYi.html) | [langchain_community](https://python.langchain.com/api_reference/community/index.html) | ✅ | ❌ | ❌ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain_community?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain_community?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ❌ | ❌ | ❌ | ✅ | ❌ | ❌ | ✅ | ❌ | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"To access ChatYi models you'll need to create a/an 01.AI account, get an API key, and install the `langchain_community` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [01.AI](https://platform.01.ai) to sign up to 01.AI and generate an API key. Once you've done this set the `YI_API_KEY` environment variable:"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"messages = [\n",
" SystemMessage(content=\"You are an AI assistant specializing in technology trends.\"),\n",
" HumanMessage(\n",
" content=\"What are the potential applications of large language models in healthcare?\"\n",
" ),\n",
"]\n",
"\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 33, 'total_tokens': 41}, 'model': 'yi-large'}, id='run-daa3bc58-8289-4d72-a24e-80622fa90d6d-0')"
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"if \"YI_API_KEY\" not in os.environ:\n",
" os.environ[\"YI_API_KEY\"] = getpass.getpass(\"Enter your Yi API key: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"The LangChain __ModuleName__ integration lives in the `langchain_community` package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:\n",
"\n",
"- TODO: Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models.yi import ChatYi\n",
"\n",
"llm = ChatYi(\n",
" model=\"yi-large\",\n",
" temperature=0,\n",
" timeout=60,\n",
" yi_api_base=\"https://api.01.ai/v1/chat/completions\",\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Invocation\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Large Language Models (LLMs) have the potential to significantly impact healthcare by enhancing various aspects of patient care, research, and administrative processes. Here are some potential applications:\\n\\n1. **Clinical Documentation and Reporting**: LLMs can assist in generating patient reports and documentation by understanding and summarizing clinical notes, making the process more efficient and reducing the administrative burden on healthcare professionals.\\n\\n2. **Medical Coding and Billing**: These models can help in automating the coding process for medical billing by accurately translating clinical notes into standardized codes, reducing errors and improving billing efficiency.\\n\\n3. **Clinical Decision Support**: LLMs can analyze patient data and medical literature to provide evidence-based recommendations to healthcare providers, aiding in diagnosis and treatment planning.\\n\\n4. **Patient Education and Communication**: By simplifying medical jargon, LLMs can help in educating patients about their conditions, treatment options, and preventive care, improving patient engagement and health literacy.\\n\\n5. **Natural Language Processing (NLP) for EHRs**: LLMs can enhance NLP capabilities in Electronic Health Records (EHRs) systems, enabling better extraction of information from unstructured data, such as clinical notes, to support data-driven decision-making.\\n\\n6. **Drug Discovery and Development**: LLMs can analyze biomedical literature and clinical trial data to identify new drug candidates, predict drug interactions, and support the development of personalized medicine.\\n\\n7. **Telemedicine and Virtual Health Assistants**: Integrated into telemedicine platforms, LLMs can provide preliminary assessments and triage, offering patients basic health advice and determining the urgency of their needs, thus optimizing the utilization of healthcare resources.\\n\\n8. **Research and Literature Review**: LLMs can expedite the process of reviewing medical literature by quickly identifying relevant studies and summarizing findings, accelerating research and evidence-based practice.\\n\\n9. **Personalized Medicine**: By analyzing a patient's genetic information and medical history, LLMs can help in tailoring treatment plans and medication dosages, contributing to the advancement of personalized medicine.\\n\\n10. **Quality Improvement and Risk Assessment**: LLMs can analyze healthcare data to identify patterns that may indicate areas for quality improvement or potential risks, such as hospital-acquired infections or adverse drug events.\\n\\n11. **Mental Health Support**: LLMs can provide mental health support by offering coping strategies, mindfulness exercises, and preliminary assessments, serving as a complement to professional mental health services.\\n\\n12. **Continuing Medical Education (CME)**: LLMs can personalize CME by recommending educational content based on a healthcare provider's practice area, patient demographics, and emerging medical literature, ensuring that professionals stay updated with the latest advancements.\\n\\nWhile the applications of LLMs in healthcare are promising, it's crucial to address challenges such as data privacy, model bias, and the need for regulatory approval to ensure that these technologies are implemented safely and ethically.\", response_metadata={'token_usage': {'completion_tokens': 656, 'prompt_tokens': 40, 'total_tokens': 696}, 'model': 'yi-large'}, id='run-870850bd-e4bf-4265-8730-1736409c0acf-0')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"\n",
"messages = [\n",
" SystemMessage(content=\"You are an AI assistant specializing in technology trends.\"),\n",
" HumanMessage(\n",
" content=\"What are the potential applications of large language models in healthcare?\"\n",
" ),\n",
"]\n",
"\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Ich liebe das Programmieren.', response_metadata={'token_usage': {'completion_tokens': 8, 'prompt_tokens': 33, 'total_tokens': 41}, 'model': 'yi-large'}, id='run-daa3bc58-8289-4d72-a24e-80622fa90d6d-0')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatYi features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.yi.ChatYi.html"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all ChatYi features and configurations head to the API reference: https://python.langchain.com/api_reference/community/chat_models/langchain_community.chat_models.yi.ChatYi.html"
]
}
],
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 0
"nbformat": 4,
"nbformat_minor": 0
}

View File

@@ -1,466 +1,464 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Box\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# BoxLoader and BoxBlobLoader\n",
"\n",
"\n",
"The `langchain-box` package provides two methods to index your files from Box: `BoxLoader` and `BoxBlobLoader`. `BoxLoader` allows you to ingest the text representation of any file that has one in Box. `BoxBlobLoader` allows you to download the blob for any document or image file for processing with the blob parser of your choice.\n",
"\n",
"This notebook details getting started with both of these. For detailed documentation of all BoxLoader features and configurations head to the API Reference pages for [BoxLoader](https://python.langchain.com/api_reference/box/document_loaders/langchain_box.document_loaders.box.BoxLoader.html) and [BoxBlobLoader](https://python.langchain.com/api_reference/box/document_loaders/langchain_box.blob_loaders.box.BoxBlobLoader.html).\n",
"\n",
"## Overview\n",
"\n",
"The `BoxLoader` class helps you get your unstructured content from Box in Langchain's `Document` format. You can do this with either a `List[str]` containing Box file IDs, or with a `str` containing a Box folder ID. \n",
"\n",
"The `BoxBlobLoader` class helps you get your unstructured content from Box in Langchain's `Blob` format. You can do this with a `List[str]` containing Box file IDs, a `str` containing a Box folder ID, a search query, or a `BoxMetadataQuery`. \n",
"\n",
"If getting files from a folder with folder ID, you can also set a `Bool` to tell the loader to get all sub-folders in that folder, as well. \n",
"\n",
":::info\n",
"A Box instance can contain petabytes of files, and folders can contain millions of files. Be intentional when choosing which folders to index, and we recommend never recursively retrieving all files from folder 0, your root folder.\n",
":::\n",
"\n",
"The `BoxLoader` will skip files without a text representation, while the `BoxBlobLoader` will return blobs for all document and image files.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [BoxLoader](https://python.langchain.com/api_reference/box/document_loaders/langchain_box.document_loaders.box.BoxLoader.html) | [langchain_box](https://python.langchain.com/api_reference/box/index.html) | ✅ | ❌ | ❌ | \n",
"| [BoxBlobLoader](https://python.langchain.com/api_reference/box/document_loaders/langchain_box.blob_loaders.box.BoxBlobLoader.html) | [langchain_box](https://python.langchain.com/api_reference/box/index.html) | ✅ | ❌ | ❌ | \n",
"### Loader features\n",
"| Source | Document Lazy Loading | Async Support |\n",
"| :---: | :---: | :---: | \n",
"| BoxLoader | ✅ | ❌ | \n",
"| BoxBlobLoader | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"In order to use the Box package, you will need a few things:\n",
"\n",
"* A Box account — If you are not a current Box customer or want to test outside of your production Box instance, you can use a [free developer account](https://account.box.com/signup/n/developer#ty9l3).\n",
"* [A Box app](https://developer.box.com/guides/getting-started/first-application/) — This is configured in the [developer console](https://account.box.com/developers/console), and for Box AI, must have the `Manage AI` scope enabled. Here you will also select your authentication method.\n",
"* The app must be [enabled by the administrator](https://developer.box.com/guides/authorization/custom-app-approval/#manual-approval). For free developer accounts, this is whoever signed up for the account.\n",
"\n",
"### Credentials\n",
"\n",
"For these examples, we will use [token authentication](https://developer.box.com/guides/authentication/tokens/developer-tokens). This can be used with any [authentication method](https://developer.box.com/guides/authentication/). Just get the token using whichever method you prefer. If you want to learn more about how to use other authentication types with `langchain-box`, visit the [Box provider](/docs/integrations/providers/box) document.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
"cells": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Enter your Box Developer Token: ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"box_developer_token = getpass.getpass(\"Enter your Box Developer Token: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_box**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_box"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"### Load files\n",
"\n",
"If you wish to load files, you must provide the `List` of file IDs at instantiation time.\n",
"\n",
"This requires 1 piece of information:\n",
"\n",
"* **box_file_ids** (`List[str]`)- A list of Box file IDs.\n",
"\n",
"#### BoxLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.document_loaders import BoxLoader\n",
"\n",
"box_file_ids = [\"1514555423624\", \"1514553902288\"]\n",
"\n",
"loader = BoxLoader(\n",
" box_developer_token=box_developer_token,\n",
" box_file_ids=box_file_ids,\n",
" character_limit=10000, # Optional. Defaults to no limit\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### BoxBlobLoader"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.blob_loaders import BoxBlobLoader\n",
"\n",
"box_file_ids = [\"1514555423624\", \"1514553902288\"]\n",
"\n",
"loader = BoxBlobLoader(\n",
" box_developer_token=box_developer_token, box_file_ids=box_file_ids\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load from folder\n",
"\n",
"If you wish to load files from a folder, you must provide a `str` with the Box folder ID at instantiation time. \n",
"\n",
"This requires 1 piece of information:\n",
"\n",
"* **box_folder_id** (`str`)- A string containing a Box folder ID.\n",
"\n",
"#### BoxLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.document_loaders import BoxLoader\n",
"\n",
"box_folder_id = \"260932470532\"\n",
"\n",
"loader = BoxLoader(\n",
" box_folder_id=box_folder_id,\n",
" recursive=False, # Optional. return entire tree, defaults to False\n",
" character_limit=10000, # Optional. Defaults to no limit\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### BoxBlobLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.blob_loaders import BoxBlobLoader\n",
"\n",
"box_folder_id = \"260932470532\"\n",
"\n",
"loader = BoxBlobLoader(\n",
" box_folder_id=box_folder_id,\n",
" recursive=False, # Optional. return entire tree, defaults to False\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Search for files with BoxBlobLoader\n",
"\n",
"If you need to search for files, the `BoxBlobLoader` offers two methods. First you can perform a full text search with optional search options to narrow down that search.\n",
"\n",
"This requires 1 piece of information:\n",
"\n",
"* **query** (`str`)- A string containing the search query to perform.\n",
"\n",
"You can also provide a `BoxSearchOptions` object to narrow down that search\n",
"* **box_search_options** (`BoxSearchOptions`)\n",
"\n",
"#### BoxBlobLoader search"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.blob_loaders import BoxBlobLoader\n",
"from langchain_box.utilities import BoxSearchOptions, DocumentFiles, SearchTypeFilter\n",
"\n",
"box_folder_id = \"260932470532\"\n",
"\n",
"box_search_options = BoxSearchOptions(\n",
" ancestor_folder_ids=[box_folder_id],\n",
" search_type_filter=[SearchTypeFilter.FILE_CONTENT],\n",
" created_date_range=[\"2023-01-01T00:00:00-07:00\", \"2024-08-01T00:00:00-07:00,\"],\n",
" file_extensions=[DocumentFiles.DOCX, DocumentFiles.PDF],\n",
" k=200,\n",
" size_range=[1, 1000000],\n",
" updated_data_range=None,\n",
")\n",
"\n",
"loader = BoxBlobLoader(\n",
" box_developer_token=box_developer_token,\n",
" query=\"Victor\",\n",
" box_search_options=box_search_options,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also search for content based on Box Metadata. If your Box instance uses Metadata, you can search for any documents that have a specific Metadata Template attached that meet a certain criteria, like returning any invoices with a total greater than or equal to $500 that were created last quarter.\n",
"\n",
"This requires 1 piece of information:\n",
"\n",
"* **query** (`str`)- A string containing the search query to perform.\n",
"\n",
"You can also provide a `BoxSearchOptions` object to narrow down that search\n",
"* **box_search_options** (`BoxSearchOptions`)\n",
"\n",
"#### BoxBlobLoader Metadata query"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.blob_loaders import BoxBlobLoader\n",
"from langchain_box.utilities import BoxMetadataQuery\n",
"\n",
"query = BoxMetadataQuery(\n",
" template_key=\"enterprise_1234.myTemplate\",\n",
" query=\"total >= :value\",\n",
" query_params={\"value\": 100},\n",
" ancestor_folder_id=\"260932470532\",\n",
")\n",
"\n",
"loader = BoxBlobLoader(box_metadata_query=query)"
]
},
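{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `query` string uses Box's SQL-like metadata query language, with `:value`-style placeholders bound from `query_params`. As a sketch, assuming a hypothetical `vendorName` field on the same template, a compound filter might look like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A sketch: bind two parameters in one metadata query.\n",
"# \"vendorName\" is an assumed field on this example template.\n",
"query = BoxMetadataQuery(\n",
"    template_key=\"enterprise_1234.myTemplate\",\n",
"    query=\"vendorName = :vendor AND total >= :value\",\n",
"    query_params={\"vendor\": \"AstroTech Solutions\", \"value\": 500},\n",
"    ancestor_folder_id=\"260932470532\",\n",
")"
]
},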
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load\n",
"\n",
"#### BoxLoader"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'https://dl.boxcloud.com/api/2.0/internal_files/1514555423624/versions/1663171610024/representations/extracted_text/content/', 'title': 'Invoice-A5555_txt'}, page_content='Vendor: AstroTech Solutions\\nInvoice Number: A5555\\n\\nLine Items:\\n - Gravitational Wave Detector Kit: $800\\n - Exoplanet Terrarium: $120\\nTotal: $920')"
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Box\n",
"---"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'source': 'https://dl.boxcloud.com/api/2.0/internal_files/1514555423624/versions/1663171610024/representations/extracted_text/content/', 'title': 'Invoice-A5555_txt'}\n"
]
}
],
"source": [
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### BoxBlobLoader"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
"cell_type": "markdown",
"metadata": {},
"source": [
"# BoxLoader and BoxBlobLoader\n",
"\n",
"\n",
"The `langchain-box` package provides two methods to index your files from Box: `BoxLoader` and `BoxBlobLoader`. `BoxLoader` allows you to ingest text representations of files that have a text representation in Box. The `BoxBlobLoader` allows you download the blob for any document or image file for processing with the blob parser of your choice.\n",
"\n",
"This notebook details getting started with both of these. For detailed documentation of all BoxLoader features and configurations head to the API Reference pages for [BoxLoader](https://python.langchain.com/api_reference/box/document_loaders/langchain_box.document_loaders.box.BoxLoader.html) and [BoxBlobLoader](https://python.langchain.com/api_reference/box/document_loaders/langchain_box.blob_loaders.box.BoxBlobLoader.html).\n",
"\n",
"## Overview\n",
"\n",
"The `BoxLoader` class helps you get your unstructured content from Box in Langchain's `Document` format. You can do this with either a `List[str]` containing Box file IDs, or with a `str` containing a Box folder ID.\n",
"\n",
"The `BoxBlobLoader` class helps you get your unstructured content from Box in Langchain's `Blob` format. You can do this with a `List[str]` containing Box file IDs, a `str` containing a Box folder ID, a search query, or a `BoxMetadataQuery`.\n",
"\n",
"If getting files from a folder with folder ID, you can also set a `Bool` to tell the loader to get all sub-folders in that folder, as well.\n",
"\n",
":::info\n",
"A Box instance can contain Petabytes of files, and folders can contain millions of files. Be intentional when choosing what folders you choose to index. And we recommend never getting all files from folder 0 recursively. Folder ID 0 is your root folder.\n",
":::\n",
"\n",
"The `BoxLoader` will skip files without a text representation, while the `BoxBlobLoader` will return blobs for all document and image files.\n",
"\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | JS support|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [BoxLoader](https://python.langchain.com/api_reference/box/document_loaders/langchain_box.document_loaders.box.BoxLoader.html) | [langchain_box](https://python.langchain.com/api_reference/box/index.html) | ✅ | ❌ | ❌ |\n",
"| [BoxBlobLoader](https://python.langchain.com/api_reference/box/document_loaders/langchain_box.blob_loaders.box.BoxBlobLoader.html) | [langchain_box](https://python.langchain.com/api_reference/box/index.html) | ✅ | ❌ | ❌ |\n",
"### Loader features\n",
"| Source | Document Lazy Loading | Async Support\n",
"| :---: | :---: | :---: |\n",
"| BoxLoader | ✅ | ❌ |\n",
"| BoxBlobLoader | ✅ | ❌ |\n",
"\n",
"## Setup\n",
"\n",
"In order to use the Box package, you will need a few things:\n",
"\n",
"* A Box account — If you are not a current Box customer or want to test outside of your production Box instance, you can use a [free developer account](https://account.box.com/signup/n/developer#ty9l3).\n",
"* [A Box app](https://developer.box.com/guides/getting-started/first-application/) — This is configured in the [developer console](https://account.box.com/developers/console), and for Box AI, must have the `Manage AI` scope enabled. Here you will also select your authentication method\n",
"* The app must be [enabled by the administrator](https://developer.box.com/guides/authorization/custom-app-approval/#manual-approval). For free developer accounts, this is whomever signed up for the account.\n",
"\n",
"### Credentials\n",
"\n",
"For these examples, we will use [token authentication](https://developer.box.com/guides/authentication/tokens/developer-tokens). This can be used with any [authentication method](https://developer.box.com/guides/authentication/). Just get the token with whatever methodology. If you want to learn more about how to use other authentication types with `langchain-box`, visit the [Box provider](/docs/integrations/providers/box) document.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Blob(id='1514555423624' metadata={'source': 'https://app.box.com/0/260935730128/260931903795/Invoice-A5555.txt', 'name': 'Invoice-A5555.txt', 'file_size': 150} data=\"b'Vendor: AstroTech Solutions\\\\nInvoice Number: A5555\\\\n\\\\nLine Items:\\\\n - Gravitational Wave Detector Kit: $800\\\\n - Exoplanet Terrarium: $120\\\\nTotal: $920'\" mimetype='text/plain' path='https://app.box.com/0/260935730128/260931903795/Invoice-A5555.txt')\n",
"Blob(id='1514553902288' metadata={'source': 'https://app.box.com/0/260935730128/260931903795/Invoice-B1234.txt', 'name': 'Invoice-B1234.txt', 'file_size': 168} data=\"b'Vendor: Galactic Gizmos Inc.\\\\nInvoice Number: B1234\\\\nPurchase Order Number: 001\\\\nLine Items:\\\\n - Quantum Flux Capacitor: $500\\\\n - Anti-Gravity Pen Set: $75\\\\nTotal: $575'\" mimetype='text/plain' path='https://app.box.com/0/260935730128/260931903795/Invoice-B1234.txt')\n"
]
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdin",
"output_type": "stream",
"text": [
"Enter your Box Developer Token: ········\n"
]
}
],
"source": [
"import getpass\n",
"import os\n",
"\n",
"box_developer_token = getpass.getpass(\"Enter your Box Developer Token: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": "To enable automated tracing of your model calls, set your [LangSmith](https://docs.smith.langchain.com/) API key:"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_box**."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_box"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Initialization\n",
"\n",
"### Load files\n",
"\n",
"If you wish to load files, you must provide the `List` of file ids at instantiation time.\n",
"\n",
"This requires 1 piece of information:\n",
"\n",
"* **box_file_ids** (`List[str]`)- A list of Box file IDs.\n",
"\n",
"#### BoxLoader"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.document_loaders import BoxLoader\n",
"\n",
"box_file_ids = [\"1514555423624\", \"1514553902288\"]\n",
"\n",
"loader = BoxLoader(\n",
" box_developer_token=box_developer_token,\n",
" box_file_ids=box_file_ids,\n",
" character_limit=10000, # Optional. Defaults to no limit\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### BoxBlobLoader"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.blob_loaders import BoxBlobLoader\n",
"\n",
"box_file_ids = [\"1514555423624\", \"1514553902288\"]\n",
"\n",
"loader = BoxBlobLoader(\n",
" box_developer_token=box_developer_token, box_file_ids=box_file_ids\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load from folder\n",
"\n",
"If you wish to load files from a folder, you must provide a `str` with the Box folder ID at instantiation time.\n",
"\n",
"This requires 1 piece of information:\n",
"\n",
"* **box_folder_id** (`str`)- A string containing a Box folder ID.\n",
"\n",
"#### BoxLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.document_loaders import BoxLoader\n",
"\n",
"box_folder_id = \"260932470532\"\n",
"\n",
"loader = BoxLoader(\n",
" box_folder_id=box_folder_id,\n",
" recursive=False, # Optional. return entire tree, defaults to False\n",
" character_limit=10000, # Optional. Defaults to no limit\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### BoxBlobLoader"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.blob_loaders import BoxBlobLoader\n",
"\n",
"box_folder_id = \"260932470532\"\n",
"\n",
"loader = BoxBlobLoader(\n",
" box_folder_id=box_folder_id,\n",
" recursive=False, # Optional. return entire tree, defaults to False\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Search for files with BoxBlobLoader\n",
"\n",
"If you need to search for files, the `BoxBlobLoader` offers two methods. First you can perform a full text search with optional search options to narrow down that search.\n",
"\n",
"This requires 1 piece of information:\n",
"\n",
"* **query** (`str`)- A string containing the search query to perform.\n",
"\n",
"You can also provide a `BoxSearchOptions` object to narrow down that search\n",
"* **box_search_options** (`BoxSearchOptions`)\n",
"\n",
"#### BoxBlobLoader search"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.blob_loaders import BoxBlobLoader\n",
"from langchain_box.utilities import BoxSearchOptions, DocumentFiles, SearchTypeFilter\n",
"\n",
"box_folder_id = \"260932470532\"\n",
"\n",
"box_search_options = BoxSearchOptions(\n",
" ancestor_folder_ids=[box_folder_id],\n",
" search_type_filter=[SearchTypeFilter.FILE_CONTENT],\n",
" created_date_range=[\"2023-01-01T00:00:00-07:00\", \"2024-08-01T00:00:00-07:00,\"],\n",
" file_extensions=[DocumentFiles.DOCX, DocumentFiles.PDF],\n",
" k=200,\n",
" size_range=[1, 1000000],\n",
" updated_data_range=None,\n",
")\n",
"\n",
"loader = BoxBlobLoader(\n",
" box_developer_token=box_developer_token,\n",
" query=\"Victor\",\n",
" box_search_options=box_search_options,\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also search for content based on Box Metadata. If your Box instance uses Metadata, you can search for any documents that have a specific Metadata Template attached that meet a certain criteria, like returning any invoices with a total greater than or equal to $500 that were created last quarter.\n",
"\n",
"This requires 1 piece of information:\n",
"\n",
"* **query** (`str`)- A string containing the search query to perform.\n",
"\n",
"You can also provide a `BoxSearchOptions` object to narrow down that search\n",
"* **box_search_options** (`BoxSearchOptions`)\n",
"\n",
"#### BoxBlobLoader Metadata query"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_box.blob_loaders import BoxBlobLoader\n",
"from langchain_box.utilities import BoxMetadataQuery\n",
"\n",
"query = BoxMetadataQuery(\n",
" template_key=\"enterprise_1234.myTemplate\",\n",
" query=\"total >= :value\",\n",
" query_params={\"value\": 100},\n",
" ancestor_folder_id=\"260932470532\",\n",
")\n",
"\n",
"loader = BoxBlobLoader(box_metadata_query=query)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load\n",
"\n",
"#### BoxLoader"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Document(metadata={'source': 'https://dl.boxcloud.com/api/2.0/internal_files/1514555423624/versions/1663171610024/representations/extracted_text/content/', 'title': 'Invoice-A5555_txt'}, page_content='Vendor: AstroTech Solutions\\nInvoice Number: A5555\\n\\nLine Items:\\n - Gravitational Wave Detector Kit: $800\\n - Exoplanet Terrarium: $120\\nTotal: $920')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'source': 'https://dl.boxcloud.com/api/2.0/internal_files/1514555423624/versions/1663171610024/representations/extracted_text/content/', 'title': 'Invoice-A5555_txt'}\n"
]
}
],
"source": [
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### BoxBlobLoader"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Blob(id='1514555423624' metadata={'source': 'https://app.box.com/0/260935730128/260931903795/Invoice-A5555.txt', 'name': 'Invoice-A5555.txt', 'file_size': 150} data=\"b'Vendor: AstroTech Solutions\\\\nInvoice Number: A5555\\\\n\\\\nLine Items:\\\\n - Gravitational Wave Detector Kit: $800\\\\n - Exoplanet Terrarium: $120\\\\nTotal: $920'\" mimetype='text/plain' path='https://app.box.com/0/260935730128/260931903795/Invoice-A5555.txt')\n",
"Blob(id='1514553902288' metadata={'source': 'https://app.box.com/0/260935730128/260931903795/Invoice-B1234.txt', 'name': 'Invoice-B1234.txt', 'file_size': 168} data=\"b'Vendor: Galactic Gizmos Inc.\\\\nInvoice Number: B1234\\\\nPurchase Order Number: 001\\\\nLine Items:\\\\n - Quantum Flux Capacitor: $500\\\\n - Anti-Gravity Pen Set: $75\\\\nTotal: $575'\" mimetype='text/plain' path='https://app.box.com/0/260935730128/260931903795/Invoice-B1234.txt')\n"
]
}
],
"source": [
"for blob in loader.yield_blobs():\n",
" print(f\"Blob({blob})\")"
]
},
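{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because `BoxBlobLoader` yields `Blob` objects, you can hand them to any blob parser. Below is a minimal sketch, assuming the `TextParser` from `langchain_community`, which parses a plain-text blob into a `Document`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A sketch: parse each Blob into Documents with a blob parser.\n",
"# TextParser is assumed here; any BaseBlobParser works the same way.\n",
"from langchain_community.document_loaders.parsers.txt import TextParser\n",
"\n",
"parser = TextParser()\n",
"docs = []\n",
"for blob in loader.yield_blobs():\n",
"    docs.extend(parser.lazy_parse(blob))  # lazy_parse yields Documents"
]
},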
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load\n",
"\n",
"#### BoxLoader only"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"page = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Extra fields\n",
"\n",
"All Box connectors offer the ability to select additional fields from the Box `FileFull` object to return as custom LangChain metadata. Each object accepts an optional `List[str]` called `extra_fields` containing the json key from the return object, like `extra_fields=[\"shared_link\"]`.\n",
"\n",
"The connector will add this field to the list of fields the integration needs to function and then add the results to the metadata returned in the `Document` or `Blob`, like `\"metadata\" : { \"source\" : \"source, \"shared_link\" : \"shared_link\" }`. If the field is unavailable for that file, it will be returned as an empty string, like `\"shared_link\" : \"\"`."
]
},
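{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, here is a minimal sketch that requests the `shared_link` field using the file-ID loader from earlier:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A sketch: request an extra FileFull field as custom metadata.\n",
"# \"shared_link\" appears in each Document's metadata, or \"\" if unset.\n",
"loader = BoxLoader(\n",
"    box_developer_token=box_developer_token,\n",
"    box_file_ids=[\"1514555423624\", \"1514553902288\"],\n",
"    extra_fields=[\"shared_link\"],\n",
")\n",
"docs = loader.load()\n",
"print(docs[0].metadata)"
]
},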
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all BoxLoader features and configurations head to the [API reference](https://python.langchain.com/api_reference/box/document_loaders/langchain_box.document_loaders.box.BoxLoader.html)\n",
"\n",
"\n",
"## Help\n",
"\n",
"If you have questions, you can check out our [developer documentation](https://developer.box.com) or reach out to use in our [developer community](https://community.box.com)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"source": [
"for blob in loader.yield_blobs():\n",
" print(f\"Blob({blob})\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load\n",
"\n",
"#### BoxLoader only"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"page = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Extra fields\n",
"\n",
"All Box connectors offer the ability to select additional fields from the Box `FileFull` object to return as custom LangChain metadata. Each object accepts an optional `List[str]` called `extra_fields` containing the json key from the return object, like `extra_fields=[\"shared_link\"]`. \n",
"\n",
"The connector will add this field to the list of fields the integration needs to function and then add the results to the metadata returned in the `Document` or `Blob`, like `\"metadata\" : { \"source\" : \"source, \"shared_link\" : \"shared_link\" }`. If the field is unavailable for that file, it will be returned as an empty string, like `\"shared_link\" : \"\"`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all BoxLoader features and configurations head to the [API reference](https://python.langchain.com/api_reference/box/document_loaders/langchain_box.document_loaders.box.BoxLoader.html)\n",
"\n",
"\n",
"## Help\n",
"\n",
"If you have questions, you can check out our [developer documentation](https://developer.box.com) or reach out to use in our [developer community](https://community.box.com)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat": 4,
"nbformat_minor": 4
}
