Compare commits


87 Commits

Author SHA1 Message Date
Bagatur
8bd368d07e cli[patch]: Release 0.0.25 (#22876) 2024-06-14 02:31:04 +00:00
Isaac Francisco
75e966a2fa docs, cli[patch]: document loaders doc template (#22862)
From: https://github.com/langchain-ai/langchain/pull/22290

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-13 19:28:57 -07:00
Hayden Wolff
d1cdde267a docs: update NVIDIA Riva tool to use NVIDIA NIM for LLM (#22873)
**Description:**
Update the NVIDIA Riva tool documentation to use NVIDIA NIM for the LLM.
Show how to use NVIDIA NIMs and link to documentation for LangChain with
NIM.

---------

Co-authored-by: Hayden Wolff <hwolff@nvidia.com>
Co-authored-by: Isaac Francisco <78627776+isahers1@users.noreply.github.com>
2024-06-13 19:26:05 -07:00
Zeeshan Qureshi
ada1e5cc64 docs: s/path_images/images/ for ImageCaptionLoader keyword arguments (#22857)
Quick update to `ImageCaptionLoader` documentation to reflect what's in
code.
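
A minimal usage sketch with the corrected keyword (the image path is illustrative):

```python
from langchain_community.document_loaders import ImageCaptionLoader

# The keyword argument is `images`, not `path_images`; the path is a placeholder.
loader = ImageCaptionLoader(images=["./photos/cat.png"])
docs = loader.load()
print(docs[0].page_content)
```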
2024-06-13 18:37:12 -07:00
liuzc9
41e232cb82 Fix typo in vearch.md (#22840)
Fix typo
2024-06-13 18:24:51 -07:00
Kagura Chen
57783c5e55 Fix: lint errors and update Field alias in models.py and AutoSelectionScorer initialization (#22846)
This PR addresses several lint errors in the core package of LangChain.
Specifically, the following issues were fixed:

1. Unexpected keyword argument "required" for "Field" [call-arg]
2. tests/integration_tests/chains/test_cpal.py:263: error: Unexpected
keyword argument "narrative_input" for "QueryModel" [call-arg]
2024-06-13 18:18:00 -07:00
Erick Friis
5bc774827b langchain: release 0.2.4 (#22872) 2024-06-14 00:14:48 +00:00
Erick Friis
7234fd0f51 core: release 0.2.6 (#22868) 2024-06-13 22:22:34 +00:00
Jacob Lee
bcbb43480c core[patch]: Treat type as a special field when merging lists (#22750)
Should we even log a warning? At least for Anthropic, it's expected to
get e.g. `text_block` followed by `text_delta`.

@ccurme @baskaryan @efriis
2024-06-13 15:08:24 -07:00
Nuno Campos
bae82e966a core: In astream_events v2 propagate cancel/break to the inner astream call (#22865)
- previous behavior was for the inner astream to continue running with
no interruption
- also propagate break in core runnable methods
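
A rough sketch of what this enables (assuming `chain` is any LCEL runnable; the input is illustrative):

```python
async def first_n_events(chain, n: int = 5) -> list:
    events = []
    async for event in chain.astream_events({"topic": "parrot"}, version="v2"):
        events.append(event)
        if len(events) >= n:
            break  # the break now propagates and cancels the inner astream call
    return events
```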
2024-06-13 15:02:48 -07:00
Eugene Yurtsev
a766815a99 experimental[patch]/docs[patch]: Update links to security docs (#22864)
Minor update to newest version of security docs (content should be
identical).
2024-06-13 20:29:34 +00:00
Eugene Yurtsev
8f7cc73817 ci: Add script to check for pickle usage in community (#22863)
Add script to check for pickle usage in community.
2024-06-13 16:13:15 -04:00
Eugene Yurtsev
77209f315e community[patch]: FAISS VectorStore deserializer should be opt-in (#22861)
The FAISS deserializer uses the pickle module. Users have to opt in to
deserialize.
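
Assuming the opt-in is the `allow_dangerous_deserialization` flag on `load_local`, usage looks roughly like:

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Loading a saved index goes through pickle, so the caller must opt in explicitly.
vectorstore = FAISS.load_local(
    "faiss_index",                         # path to a previously saved index
    OpenAIEmbeddings(),
    allow_dangerous_deserialization=True,  # assumed name of the opt-in flag
)
```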
2024-06-13 15:48:13 -04:00
Eugene Yurtsev
ce0b0f22a1 experimental[major]: Force users to opt-in into code that relies on the python repl (#22860)
This should make it obvious that a few of the agents in langchain
experimental rely on the Python REPL as a tool under the hood, and will
force users to opt in.
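
A sketch of the opt-in, assuming the flag is named `allow_dangerous_code` (agent and data are illustrative):

```python
import pandas as pd
from langchain_experimental.agents import create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI

df = pd.DataFrame({"a": [1, 2, 3]})

# Constructing an agent that executes Python under the hood now requires an
# explicit opt-in; without the flag, construction is expected to raise.
agent = create_pandas_dataframe_agent(
    ChatOpenAI(model="gpt-4o"),
    df,
    allow_dangerous_code=True,  # assumed name of the opt-in flag
)
```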
2024-06-13 15:41:24 -04:00
Isaac Francisco
869523ad72 [docs]: added info for TavilySearchResults (#22765) 2024-06-13 12:14:11 -07:00
ccurme
42257b120f partners: fix numpy dep (#22858)
Following https://github.com/langchain-ai/langchain/pull/22813, which
added python 3.12 to CI, here we update numpy accordingly in partner
packages.
2024-06-13 14:46:42 -04:00
Isaac Francisco
345fd3a556 minor functionality change: adding API functionality to tavilysearch (#22761) 2024-06-13 11:10:28 -07:00
Isaac Francisco
034257e9bf docs: improved recursive url loader docs (#22648)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-13 11:09:35 -07:00
Isaac Francisco
e832bbb486 [docs]: bind tools (#22831) 2024-06-13 09:50:43 -07:00
ccurme
b626c3ca23 groq[patch]: add usage_metadata to (a)invoke and (a)stream (#22834) 2024-06-13 10:26:27 -04:00
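
A small sketch of reading the new field on a Groq response (model name illustrative):

```python
from langchain_groq import ChatGroq

llm = ChatGroq(model="llama3-8b-8192")
msg = llm.invoke("Hello!")
# usage_metadata is a dict along the lines of
# {"input_tokens": ..., "output_tokens": ..., "total_tokens": ...}
print(msg.usage_metadata)
```
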
Jacob Lee
e01e5d5a91 docs[patch]: Improve Groq integration page (#22844)
Was bare bones and got marked by folks as unhelpful.

CC @efriis @colemccracken
2024-06-13 03:40:29 -07:00
Jacob Lee
12eff6a130 docs[patch]: Readd Pydantic compatibility docs (#22836)
As a how-to guide.

CC @eyurtsev @hwchase17
2024-06-13 02:56:10 -07:00
Jacob Lee
cb654a3245 docs[patch]: Adds multimodal column to chat models table, move up in concepts (#22837)
CC @hwchase17 @baskaryan
2024-06-13 02:26:55 -07:00
James Braza
45b394268c core[patch]: allowing latest packaging versions (#22792)
Allowing version 24 of https://github.com/pypa/packaging

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-12 23:22:20 +00:00
Jacob Lee
00ad197502 docs[patch]: Add structured output to conceptual docs (#22791)
This downgrades `Function/tool calling` from a h3 to an h4 which means
it'll no longer show up in the right sidebar, but any direct links will
still work. I think that is ok, but LMK if you disapprove.

CC @hwchase17 @eyurtsev @rlancemartin
2024-06-12 15:30:51 -07:00
Karim Lalani
276be6cdd4 [experimental][llms][OllamaFunctions] tool calling related fixes (#22339)
Fixes issues with tool calling to handle tool objects correctly. Added
support to handle ToolMessage correctly.
Added additional checks for error conditions.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-06-12 16:34:43 -04:00
Christophe Bornet
d04e899b56 ci: add testing with Python 3.12 (#22813)
We need to use a different version of numpy for py3.8 and py3.12 in
pyproject.
And so do projects that use that Python version range and import
langchain.

    - **Twitter handle:** _cbornet
2024-06-12 16:31:36 -04:00
HyoJin Kang
b6bf2bb234 community[patch]: fix database uri type in SQLDatabase (#22661)
**Description**
SQLAlchemy uses the `sqlalchemy.engine.URL` type for the database URI argument.
Added the `URL` type for compatibility.

**Issue**: None

**Dependencies:** None
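
A sketch of passing a `sqlalchemy.engine.URL` instead of a plain URI string (connection values are illustrative):

```python
from sqlalchemy.engine import URL

from langchain_community.utilities import SQLDatabase

url = URL.create(
    "postgresql+psycopg2",
    username="user",
    password="secret",
    host="localhost",
    database="mydb",
)
db = SQLDatabase.from_uri(url)  # previously typed to accept only str
```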

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-12 15:11:00 -04:00
Eugene Yurtsev
5dbbdcbf8e core[patch]: Update remaining root_validators (#22829)
This PR updates the remaining root_validators in core to either be explicit pre-init or post-init validators.
2024-06-12 14:47:40 -04:00
Eugene Yurtsev
265e650e64 community[patch]: Update root_validators embeddings: llamacpp, jina, dashscope, mosaicml, huggingface_hub, Toolkits: Connery, ChatModels: PAI_EAS, (#22828)
This PR updates root validators for:

* Embeddings: llamacpp, jina, dashscope, mosaicml, huggingface_hub
* Toolkits: Connery
* ChatModels: PAI_EAS

Following this issue:
https://github.com/langchain-ai/langchain/issues/22819
2024-06-12 13:59:05 -04:00
JonZeolla
32ba8cfab0 community[minor]: implement huggingface show_progress consistently (#22682)
- **Description:** This implements `show_progress` more consistently
(i.e. it is also added to the `HuggingFaceBgeEmbeddings` object).
- **Issue:** This implements `show_progress` more consistently in the
embeddings huggingface classes. Previously this could have been set via
`encode_kwargs`.
 - **Dependencies:** None
 - **Twitter handle:** @jonzeolla
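
A minimal sketch of the more consistent option (model name illustrative):

```python
from langchain_community.embeddings import HuggingFaceBgeEmbeddings

# `show_progress` can now be set directly instead of through `encode_kwargs`.
embeddings = HuggingFaceBgeEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",
    show_progress=True,
)
vectors = embeddings.embed_documents(["hello", "world"])
```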
2024-06-12 17:30:56 +00:00
Eugene Yurtsev
74e705250f core[patch]: update some root_validators (#22787)
Update some of the @root_validators to be explicit pre=True or
pre=False, skip_on_failure=True for pydantic 2 compatibility.
2024-06-12 13:04:57 -04:00
bincat
3d6e8547f9 docs: fix function name in tutorials/agents.ipynb (#22809)
the function called in the following example is `create_react_agent`, not
`create_tool_calling_executor`
2024-06-12 12:30:35 -04:00
mrhbj
a1268d9e9a community[patch]: fix hunyuan message include chinese signature error (#22795) (#22796)
… (#22795)

2024-06-12 12:30:22 -04:00
Kagura Chen
513f1d8037 docs: update repo_structure.mdx to reflect latest code changes (#22810)
**Description:** This PR updates the documentation to reflect the recent
code changes.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-06-12 12:30:04 -04:00
Mr. Lance E Sloan «UMich»
08c466c603 community[patch]: bugfix for YoutubeLoader's LINES format (#22815)
- **Description:** A change I submitted recently introduced a bug in
`YoutubeLoader`'s `LINES` output format. Under those conditions, the curly
braces ("`{}`") create a set, not a dictionary. This bugfix explicitly
specifies that a dictionary is created.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter:** lsloan_umich
- **Mastodon:**
[lsloan@mastodon.social](https://mastodon.social/@lsloan)
2024-06-12 12:29:34 -04:00
Philippe PRADOS
23c22fcbc9 langchain[minor]: Make EmbeddingsFilters async (#22737)
Add native async implementation for EmbeddingsFilter
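
A small sketch of the async path (embeddings provider and documents are illustrative):

```python
import asyncio

from langchain.retrievers.document_compressors import EmbeddingsFilter
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings


async def main() -> None:
    docs = [Document(page_content="LangChain is a framework for LLM apps.")]
    embeddings_filter = EmbeddingsFilter(
        embeddings=OpenAIEmbeddings(), similarity_threshold=0.5
    )
    # acompress_documents now has a native async implementation instead of
    # delegating to a thread.
    filtered = await embeddings_filter.acompress_documents(docs, query="What is LangChain?")
    print([d.page_content for d in filtered])


asyncio.run(main())
```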
2024-06-12 12:27:26 -04:00
endrajeet
b45bf78d2e Update index.mdx (#22818)
changed "# 🌟Recognition" to "### 🌟 Recognition" to match the rest of the
subheadings.

2024-06-12 12:27:16 -04:00
Bagatur
8203c1ff87 infra: lint new docs to match templates (#22786) 2024-06-11 13:26:35 -07:00
ccurme
936aedd10c mistral[patch]: add usage_metadata to (a)invoke and (a)stream (#22781) 2024-06-11 15:34:50 -04:00
Jiří Spilka
20e3662acf docs: Correct code examples in the Apify's notebooks (#22768)
**Description:** Correct code examples in the Apify document load
notebook and Apify Dataset notebook

**Issue**: None
**Dependencies**: None
**Twitter handle**: None
2024-06-11 15:20:16 -04:00
mrhbj
9212c9fcb8 community[patch]: fix hunyuan client json analysis (#22452) (#22767)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-11 19:05:18 +00:00
Rohan Aggarwal
86e8224cf1 community[patch]: Support for old clients (Thin and Thick) Oracle Vector Store (#22766)
**Description:** Support for old clients (Thin and Thick) in the Oracle Vector Store.

**Tests:** Covered by our own local tests.

---------

Co-authored-by: rohan.aggarwal@oracle.com <rohaagga@phoenix95642.dev3sub2phx.databasede3phx.oraclevcn.com>
2024-06-11 11:36:06 -07:00
Jacob Lee
232908a46d docs[patch]: Adds streaming conceptual doc (#22760)
CC @hwchase17 @baskaryan
2024-06-11 11:03:52 -07:00
Mr. Lance E Sloan «UMich»
84dc2dd059 community[patch]: Load YouTube transcripts (captions) as fixed-duration chunks with start times (#21710)
- **Description:** Add a new format, `CHUNKS`, to
`langchain_community.document_loaders.youtube.YoutubeLoader` which
creates multiple `Document` objects from YouTube video transcripts
(captions), each of a fixed duration. The metadata of each chunk
`Document` includes the start time of each one and a URL to that time in
the video on the YouTube website.
  
I had implemented this for UMich (@umich-its-ai) in a local module, but
it makes sense to contribute this to LangChain community for all to
benefit and to simplify maintenance.

- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter:** lsloan_umich
- **Mastodon:**
[lsloan@mastodon.social](https://mastodon.social/@lsloan)

With regards to **tests and documentation**, most existing features of
the `YoutubeLoader` class are not tested. Only the
`YoutubeLoader.extract_video_id()` static method had a test. However,
while I was waiting for this PR to be reviewed and merged, I had time to
add a test for the chunking feature I've proposed in this PR.

I have added an example of using chunking to the
`docs/docs/integrations/document_loaders/youtube_transcript.ipynb`
notebook.
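
A rough usage sketch, assuming the chunk length is configured via a `chunk_size_seconds` argument (video URL illustrative):

```python
from langchain_community.document_loaders import YoutubeLoader
from langchain_community.document_loaders.youtube import TranscriptFormat

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    transcript_format=TranscriptFormat.CHUNKS,
    chunk_size_seconds=30,  # assumed parameter name
)
for doc in loader.load():
    # metadata includes the chunk's start time and a timestamped video URL
    print(doc.metadata)
```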

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-11 17:44:36 +00:00
Aayush Kataria
71811e0547 community[minor]: Adds a vector store for Azure Cosmos DB for NoSQL (#21676)
This PR adds support for an Azure Cosmos DB for NoSQL vector store.

Summary:

Description: added vector store integration for Azure Cosmos DB for
NoSQL Vector Store,
Dependencies: azure-cosmos dependency,
Tag maintainer: @hwchase17, @baskaryan @efriis @eyurtsev

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-06-11 10:34:01 -07:00
Mohammad Mohtashim
36cad5d25c [Community]: Added Metadata filter support for DocumentDB Vector Store (#22777)
- **Description:** As pointed out in issue #22770, DocumentDB
`similarity_search` does not support filtering on metadata; this PR adds that
support by passing in the parameter `filter`. This PR also fixes a minor
documentation error.
- **Issue:** #22770

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-11 16:37:53 +00:00
Dmitry Stepanov
912751e268 Ollama vision support (#22734)
**Description:** Ollama vision support for OpenAI-style messages of the form `{
"image_url": { "url": ... } }`
**Issue:** #22460

Added a flexible solution for ChatOllama to support chat messages with
images. It works whether you provide `image_url` as a string or as a dict with
"url" inside (as OpenAI does), which also makes it possible to use tuples with
`ChatPromptTemplate.from_messages()`.
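
A sketch of the accepted shape (model name and image are illustrative):

```python
import base64

from langchain_community.chat_models import ChatOllama
from langchain_core.messages import HumanMessage

llm = ChatOllama(model="llava")  # any vision-capable Ollama model

image_b64 = base64.b64encode(open("photo.jpg", "rb").read()).decode()
message = HumanMessage(
    content=[
        {"type": "text", "text": "What is in this picture?"},
        # Either a plain string or an OpenAI-style dict with "url" is accepted here.
        {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
    ]
)
print(llm.invoke([message]).content)
```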

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-11 16:10:19 +00:00
Philippe PRADOS
0908b01cb2 langchain[minor]: Add native async implementation to LLMFilter, add concurrency to both sync and async paths (#22739)
- **Description:** chain_filter.py is not compatible with async; this PR fixes that.
- **Twitter handle:** pprados

---------

Signed-off-by: zhangwangda <zhangwangda94@163.com>
Co-authored-by: Prakul <discover.prakul@gmail.com>
Co-authored-by: Lei Zhang <zhanglei@apache.org>
Co-authored-by: Gin <ictgtvt@gmail.com>
Co-authored-by: wangda <38549158+daziz@users.noreply.github.com>
Co-authored-by: Max Mulatz <klappradla@posteo.net>
2024-06-11 10:55:40 -04:00
Jaeyeon Kim(김재연)
ce4e29ae42 community[minor]: fix redis store docstring and streamline initialization code (#22730)

### Description

Fix the example in the docstring of the Redis store.
Change the initialization logic, remove a redundant check, and enhance the
error message.

### Issue

The example in the docstring showing how to use the Redis store was wrong.

![image](https://github.com/langchain-ai/langchain/assets/37469330/78c5d9ce-ee66-45b3-8dfe-ea29f125e6e9)

### Dependencies
Nothing




---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-11 14:08:05 +00:00
am-kinetica
ad101adec8 community[patch]: Kinetica Integrations handled error in querying; quotes in table names; updated gpudb API (#22724)
- **Miscellaneous updates and fixes**:
- **Description:** Handled errors in querying, quotes in table names, and
updated the gpudb API
- **Issue:** Previously, a failed query or one that returned no records threw
an error with a message that was difficult to understand
    - **Dependencies:** Updated GPUDB API version to `7.2.0.9`


@baskaryan @hwchase17
2024-06-11 10:01:26 -04:00
NithinBairapaka
27b9ea14a5 docs: Updated integration docs with required package installations (#22392)
**Title:** Updated integration docs with required package installations
   **Issue:**  #22005
2024-06-11 01:44:05 +00:00
Albert Gil López
1710423de3 docs: correct path in readme (#22383)
Description: Fix incorrect path in README instructions.
Issue: N/A
Dependencies: None
Twitter handle: @jddam

---------

Co-authored-by: isaac hershenson <ihershenson@hmc.edu>
2024-06-10 17:47:39 -07:00
Greg Tracy
7e115da16c docs: Fix pixelation in stack graphic (#21554)
This change updates the stack graphic displayed in the top-level README.
The LangChain tile is pixelated in the current graphic.
2024-06-10 22:52:22 +00:00
Leonid Ganeline
55bd8e582b docs: integrations cache: added class table (#22368)
Added a table with the cache classes. See [this table
here](https://langchain-rnpqvikie-langchain.vercel.app/v0.2/docs/integrations/llm_caching/#cache-classes-summary-table).
2024-06-10 15:09:03 -07:00
Jacob Lee
89804c3026 docs: Adds pointers from LLM pages to equivalent chat model pages (#22759)
@baskaryan
2024-06-10 14:13:22 -07:00
Qingchuan Hao
7f180f996b docs: fix langchain expression language link (#22683) 2024-06-10 21:12:47 +00:00
Mathis Joffre
ea43f40daf community[minor]: Add support for OVHcloud AI Endpoints Embedding (#22667)
**Description:** Add support for [OVHcloud AI
Endpoints](https://endpoints.ai.cloud.ovh.net/) Embedding models.

Inspired by:
https://gist.github.com/gmasse/e1f99339e161f4830df6be5d0095349a

Signed-off-by: Joffref <mariusjoffre@gmail.com>
2024-06-10 21:07:25 +00:00
Erick Friis
2aaf86ddae core: fix mustache falsy cases (#22747) 2024-06-10 14:00:12 -07:00
Eugene Yurtsev
5a7eac191a core[patch]: Add missing type annotations (#22756)
Add missing type annotations.

The missing type annotations will raise exceptions with pydantic 2.
2024-06-10 16:59:41 -04:00
Eugene Yurtsev
05d31a2f00 community[patch]: Add missing type annotations (#22758)
Add missing type annotations to objects in community.
These missing type annotations will raise type errors in pydantic 2.
2024-06-10 16:59:28 -04:00
Naka Masato
3237909221 langchain[patch]: allow to use partial variables in create_sql_query_chain (#22688)
- **Description:** allow using partial variables to pass `top_k` and
`table_info` (see the sketch below)
- **Issue:** no
- **Dependencies:** no
- **Twitter handle:** @gymnstcs
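
A sketch of the intended usage, assuming the partial variables are applied to a custom prompt passed to the chain (prompt text and database are illustrative):

```python
from langchain.chains import create_sql_query_chain
from langchain_community.utilities import SQLDatabase
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = ChatOpenAI(model="gpt-4o")

# `top_k` and `table_info` are supplied as partial variables instead of being
# filled in at invoke time.
template = """Write a syntactically correct SQL query for the question below.
Return at most {top_k} rows.

Only use these tables:
{table_info}

Question: {input}"""
prompt = PromptTemplate.from_template(template).partial(
    top_k="5", table_info=db.get_table_info()
)
chain = create_sql_query_chain(llm, db, prompt=prompt)
print(chain.invoke({"question": "How many employees are there?"}))
```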

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-10 20:58:30 +00:00
Bharat Ramanathan
2b5631a6be community[patch]: fix WandbTracer to work with new "RunV2" API (#22673)
- **Description:** This PR updates the `WandbTracer` to work with the
new RunV2 API so that wandb Traces logging works correctly for new
LangChain versions. Here's an example
[run](https://wandb.ai/parambharat/langchain-tracing/runs/wpm99ftq) from
the existing tests
- **Issue:** https://github.com/wandb/wandb/issues/7762
- **Twitter handle:** @ParamBharat

_If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17._
2024-06-10 13:56:35 -07:00
Oguz Vuruskaner
f0f4532579 community[patch]: fix deepinfra inference (#22680)
This PR includes:

1. Update of the default model to Llama 3.
2. Handle some 4xx errors with more user-friendly error messages.
3. Handle user errors.
2024-06-10 13:55:55 -07:00
Lucas Tucker
cb79e80b0b docs: standardize ChatHuggingFace (#22693)
**Updated ChatHuggingFace doc string as per issue #22296**:
"langchain_huggingface: updated docstring for ChatHuggingFace in
langchain_huggingface to match that of the description (in the appendix)
provided in issue #22296. "

**Issue:** This PR is in response to issue #22296, and more specifically
ChatHuggingFace model. In particular, this PR updates the docstring for
langchain/libs/partners/hugging_face/langchain_huggingface/chat_models/huggingface.py
by adding the following sections: Instantiate, Invoke, Stream, Async,
Tool calling, and Response metadata. I used the template from the
Anthropic implementation and referenced the Appendix of the original
issue post. I also noted that: langchain_community hugging face llms do
not work with langchain_huggingface's ChatHuggingFace model (at least
for me); the .stream(messages) functionality of ChatHuggingFace only
returned a block of response.

---------

Co-authored-by: lucast2021 <lucast2021@headroyce.org>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-06-10 20:54:36 +00:00
Erick Friis
d92f2251c8 docs: couchbase partner package (#22757) 2024-06-10 20:53:03 +00:00
Tomaz Bratanic
76a193decc community[patch]: Add function response to graph cypher qa chain (#22690)
LLMs struggle with Graph RAG because it's different from vector RAG: you don't
provide the whole context, only the answer, which the LLM has to take on
faith. That doesn't really work a lot of the time. However, if you wrap the
context as a function response, the accuracy is much better.

btw... `union[LLMChain, Runnable]` is linting fun, that's why so many
ignores
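
A sketch of how this might be enabled, assuming the option is a `use_function_response` flag on the chain (connection details are illustrative):

```python
from langchain.chains import GraphCypherQAChain
from langchain_community.graphs import Neo4jGraph
from langchain_openai import ChatOpenAI

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(model="gpt-4o", temperature=0),
    graph=graph,
    # assumed flag: pass the retrieved graph context as a tool/function response
    use_function_response=True,
)
chain.invoke({"query": "Who played in Top Gun?"})
```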
2024-06-10 13:52:17 -07:00
X-HAN
34edfe4a16 community[minor]: add Volcengine Rerank (#22700)
**Description:** this PR adds Volcengine Rerank capability to Langchain,
you can find Volcengine Rerank API from
[here](https://www.volcengine.com/docs/84313/1254474) &
[here](https://www.volcengine.com/docs/84313/1254605).
[Volcengine](https://www.volcengine.com/) is a cloud service platform
developed by ByteDance, the parent company of TikTok. You can obtain
Volcengine API AK/SK from
[here](https://www.volcengine.com/docs/84313/1254553).

**Dependencies:** VolcengineRerank depends on `volcengine` python
package.

**Twitter handle:** my twitter/x account is https://x.com/LastMonopoly
and I'd like a mention, thank you!


**Tests and docs**
  1. integration test: `test_volcengine_rerank.py`
  2. example notebook: `volcengine_rerank.ipynb`

**Lint and test**: I have run `make format`, `make lint` and `make test`
from the root of the package I've modified.
2024-06-10 13:41:05 -07:00
Prakul
9eacce9356 docs:Update reference to langchain-mongodb (#22705)
**Description**: Update reference to langchain-mongodb
2024-06-10 13:35:21 -07:00
Ikko Eltociear Ashimine
4197c9c85f docs: update azure_container_apps_dynamic_sessions_data_analyst.ipynb (#22718)
colum -> column
2024-06-10 13:33:40 -07:00
Jacob Lee
e4183cbc4e docs[patch]: Add caution on OpenAI LLMs integration page (#22754)
@baskaryan do we like?

<img width="1040" alt="Screenshot 2024-06-10 at 12 16 45 PM"
src="https://github.com/langchain-ai/langchain/assets/6952323/8893063f-1acf-4a56-9ee5-a8a2b1560277">
2024-06-10 13:27:22 -07:00
Mohammad Mohtashim
c3cce98d86 community[patch]: Small Fix in OutlookMessageLoader (Close the Message once Open) (#22744)
- **Description:** A very small fix: close the message once it has been
opened
- **Issue:** #22729
2024-06-10 13:08:39 -07:00
Bagatur
86a3f6edf1 docs: standardize ChatVertexAI (#22686)
Part of #22296. Part two of
https://github.com/langchain-ai/langchain-google/pull/287
2024-06-10 12:50:50 -07:00
ccurme
f9fdca6cc2 openai: add parallel_tool_calls to api ref (#22746)
![Screenshot 2024-06-10 at 1 41 24
PM](https://github.com/langchain-ai/langchain/assets/26529506/2626bf9c-41c6-4431-b2e1-f59de1e4e468)
2024-06-10 17:44:43 +00:00
Max Mulatz
058a64c563 Community[minor]: Add language parser for Elixir (#22742)
Hi 👋 

First off, thanks a ton for your work on this 💚 Really appreciate what
you're providing here for the community.

## Description

This PR adds a basic language parser for the
[Elixir](https://elixir-lang.org/) programming language. The parser code
is based upon the approach outlined in
https://github.com/langchain-ai/langchain/pull/13318: it's using
`tree-sitter` under the hood and aligns with all the other `tree-sitter`
based parsers added in that PR.

The `CHUNK_QUERY` I'm using here is probably not the most sophisticated
one, but it worked for my application. It's a starting point to provide
"core" parsing support for Elixir in LangChain. It enables people to use
the language parser out in real world applications which may then lead
to further tweaking of the queries. I consider this PR just the ground
work.

- **Dependencies:** requires `tree-sitter` and `tree-sitter-languages`
from the extended dependencies
- **Twitter handle:**`@bitcrowd`
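
A sketch of loading Elixir sources with the new parser (the language key and file suffixes are assumptions):

```python
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import LanguageParser

loader = GenericLoader.from_filesystem(
    "./my_app/lib",
    glob="**/*",
    suffixes=[".ex", ".exs"],
    parser=LanguageParser(language="elixir"),  # assumed language key
)
docs = loader.load()
```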

## Checklist

- [x] **PR title**: "package: description"
- [x] **Add tests and docs**
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.

2024-06-10 15:56:57 +00:00
wangda
28e956735c docs:Correcting spelling mistakes in readme (#22664)
Signed-off-by: zhangwangda <zhangwangda94@163.com>
2024-06-10 15:33:41 +00:00
Gin
6f54abc252 docs: Add a missing dot in concepts.mdx (#22677) 2024-06-10 15:30:56 +00:00
Philippe PRADOS
2d4689d721 langchain[minor]: Add pgvector to list of supported vectorstores in self query retriever (#22678)
The fact that we outsourced pgvector to another project has an
unintended effect. The mapping dictionary found by
`_get_builtin_translator()` cannot recognize the new version of pgvector
because it comes from another package.
`SelfQueryRetriever` no longer knows `PGVector`.

I propose to fix this by creating a global dictionary that can be
populated by various database implementations. Thus, importing
`langchain_postgres` will allow the registration of the `PGvector`
mapping.

But for the moment I'm just adding a lazy import

Furthermore, the implementation of _get_builtin_translator()
reconstructs the BUILTIN_TRANSLATORS variable with each invocation,
which is not very efficient. A global map would be an optimization.
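
A sketch of the setup the lazy import is meant to make work again (connection string and metadata schema are illustrative):

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_postgres import PGVector

vectorstore = PGVector(
    embeddings=OpenAIEmbeddings(),
    collection_name="movies",
    connection="postgresql+psycopg://user:pass@localhost:5432/db",
)
# With the lazy import in place, the retriever can resolve a translator for
# the PGVector class that now lives in langchain_postgres.
retriever = SelfQueryRetriever.from_llm(
    llm=ChatOpenAI(model="gpt-4o"),
    vectorstore=vectorstore,
    document_contents="Brief summary of a movie",
    metadata_field_info=[AttributeInfo(name="year", description="Release year", type="integer")],
)
retriever.invoke("movies released after 2000")
```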

- **Twitter handle:** pprados

@eyurtsev, can you review this PR? And unlock the PR [Add async mode for
pgvector](https://github.com/langchain-ai/langchain-postgres/pull/32)
and PR [community[minor]: Add SQL storage
implementation](https://github.com/langchain-ai/langchain/pull/22207)?

Are you in favour of a global dictionary-based implementation of
Translator?
2024-06-10 11:27:47 -04:00
Lei Zhang
5ba1899cd7 infra: Scheduled GitHub Actions to run only on the upstream repository (#22707)
**Description:** Scheduled GitHub Actions to run only on the upstream
repository

**Issue:** Fixes #22706 

**Twitter handle:** @coolbeevip
2024-06-10 11:07:42 -04:00
Prakul
3f76c9e908 docs: Update MongoDB information in llm_caching (#22708)
**Description:**: Update MongoDB information in llm_caching
2024-06-10 11:05:55 -04:00
fzowl
c1fced9269 docs: VoyageAI new embedding and reranking models (#22719) 2024-06-09 09:12:43 -07:00
Enzo Poggio
8f019e91d7 community[patch]: Use Custom Logger Instead of Root Logger in get_user_agent Function (#22691)
## Description
This PR addresses a logging inconsistency in the `get_user_agent`
function. Previously, the function was using the root logger to log a
warning message when the "USER_AGENT" environment variable was not set.
This bypassed the custom logger `log` that was created at the start of
the module, leading to potential inconsistencies in logging behavior.

Changes:
- Replaced `logging.warning` with `log.warning` in the `get_user_agent`
function to ensure that the custom logger is used.

This change ensures that all logging in the `get_user_agent` function
respects the configurations of the custom logger, leading to more
consistent and predictable logging behavior.
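
A hypothetical sketch of the resulting shape of the function (message text and fallback value are illustrative):

```python
import logging
import os

log = logging.getLogger(__name__)  # module-level logger created at the top of the file


def get_user_agent() -> str:
    env_user_agent = os.environ.get("USER_AGENT")
    if not env_user_agent:
        # Previously: logging.warning(...), which bypassed the module logger above.
        log.warning(
            "USER_AGENT environment variable not set, "
            "consider setting it to identify your requests."
        )
        return "DefaultLangchainUserAgent"
    return env_user_agent
```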

## Dependencies

None

## Issue 

None

## Tests and docs

☝🏻 see description


## `make format`, `make lint` & `cd libs/community; make test`

```shell
> make format 
poetry run ruff format docs templates cookbook
1417 files left unchanged
poetry run ruff check --select I --fix docs templates cookbook
All checks passed!
```

```shell
> make lint
poetry run ruff check docs templates cookbook
All checks passed!
poetry run ruff format docs templates cookbook --diff
1417 files already formatted
poetry run ruff check --select I docs templates cookbook
All checks passed!
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
```

~cd libs/community; make test~ too many dependencies for integration ...

```shell
>  poetry run pytest tests/unit_tests   
....
==== 884 passed, 466 skipped, 4447 warnings in 15.93s ====
```

I chose you randomly: @ccurme
2024-06-08 02:33:07 +00:00
Philippe PRADOS
9aabb446c5 community[minor]: Add SQL storage implementation (#22207)
Hello @eyurtsev

- package: langchain-community
- **Description**: Add SQL implementation for docstore. A new
implementation, in line with my other PRs ([async
PGVector](https://github.com/langchain-ai/langchain-postgres/pull/32),
[SQLChatMessageMemory](https://github.com/langchain-ai/langchain/pull/22065))
- Twitter handle: pprados

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Piotr Mardziel <piotrm@gmail.com>
Co-authored-by: ChengZi <chen.zhang@zilliz.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-07 21:17:02 +00:00
Nithish Raghunandanan
f2f0e0e13d couchbase: Add the initial version of Couchbase partner package (#22087)
Co-authored-by: Nithish Raghunandanan <nithishr@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-06-07 14:04:08 -07:00
Cahid Arda Öz
6c07eb0c12 community[minor]: Add UpstashRatelimitHandler (#21885)
Adding `UpstashRatelimitHandler` callback for rate limiting based on
number of chain invocations or LLM token usage.

For more details, see [upstash/ratelimit-py
repository](https://github.com/upstash/ratelimit-py) or the notebook
guide included in this PR.
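
A rough sketch of wiring the handler into a chain (the import path and constructor arguments are assumptions; limits are illustrative):

```python
from upstash_ratelimit import FixedWindow, Ratelimit
from upstash_redis import Redis

from langchain_community.callbacks import UpstashRatelimitHandler  # assumed export path
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x.upper())  # stand-in for any runnable

# Allow 10 chain invocations per 60-second window for this user.
rate_limit = Ratelimit(
    redis=Redis.from_env(),
    limiter=FixedWindow(max_requests=10, window=60),
)
handler = UpstashRatelimitHandler(
    identifier="user-123",         # assumed: key the limit is tracked under
    request_ratelimit=rate_limit,  # assumed: limit on chain invocations
)
chain.invoke("hello", config={"callbacks": [handler]})
```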

Twitter handle: @cahidarda

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-06-07 21:02:06 +00:00
Erick Friis
9b3ce16982 docs: remove nonexistent headings (#22685) 2024-06-07 20:02:06 +00:00
Erick Friis
9e03864d64 core: add error message for non-structured llm to StructuredPrompt (#22684)
Previously this raised the blank `NotImplementedError` from
`BaseLanguageModel.with_structured_output`.
2024-06-07 19:42:09 +00:00
285 changed files with 11264 additions and 4646 deletions

View File

@@ -24,6 +24,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
name: "poetry run pytest -m compile tests/integration_tests #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -28,6 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
name: dependency checks ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4

View File

@@ -34,7 +34,7 @@ jobs:
# so linting on fewer versions makes CI faster.
python-version:
- "3.8"
- "3.11"
- "3.12"
steps:
- uses: actions/checkout@v4

View File

@@ -28,6 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
name: "make test #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -12,7 +12,7 @@ jobs:
strategy:
matrix:
python-version:
- "3.11"
- "3.12"
name: "check doc imports #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -7,6 +7,7 @@ on:
jobs:
check-links:
if: github.repository_owner == 'langchain-ai'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4

View File

@@ -104,6 +104,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
- "3.12"
runs-on: ubuntu-latest
defaults:
run:

.github/workflows/check_new_docs.yml (new file, 31 additions)
View File

@@ -0,0 +1,31 @@
---
name: Integration docs lint
on:
push:
branches: [master]
pull_request:
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.10'
- id: files
uses: Ana06/get-changed-files@v2.2.0
- name: Check new docs
run: |
python docs/scripts/check_templates.py ${{ steps.files.outputs.added }}

View File

@@ -10,6 +10,7 @@ env:
jobs:
build:
if: github.repository_owner == 'langchain-ai'
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
runs-on: ubuntu-latest
strategy:

View File

@@ -273,7 +273,7 @@
"source": [
"# Tool schema for querying SQL db\n",
"class create_df_from_sql(BaseModel):\n",
" \"\"\"Execute a PostgreSQL SELECT statement and use the results to create a DataFrame with the given colum names.\"\"\"\n",
" \"\"\"Execute a PostgreSQL SELECT statement and use the results to create a DataFrame with the given column names.\"\"\"\n",
"\n",
" select_query: str = Field(..., description=\"A PostgreSQL SELECT statement.\")\n",
" # We're going to convert the results to a Pandas DataFrame that we pass\n",

View File

@@ -96,9 +96,9 @@ To make it as easy as possible to create custom chains, we've implemented a ["Ru
This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way.
The standard interface includes:
- [`stream`](#stream): stream back chunks of the response
- [`invoke`](#invoke): call the chain on an input
- [`batch`](#batch): call the chain on a list of inputs
- `stream`: stream back chunks of the response
- `invoke`: call the chain on an input
- `batch`: call the chain on a list of inputs
These also have corresponding async methods that should be used with [asyncio](https://docs.python.org/3/library/asyncio.html) `await` syntax for concurrency:
@@ -133,14 +133,14 @@ Some components LangChain implements, some components we rely on third-party int
<span data-heading-keywords="chat model,chat models"></span>
Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text).
These are traditionally newer models (older models are generally `LLMs`, see above).
These are traditionally newer models (older models are generally `LLMs`, see below).
Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.
Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This means you can easily use chat models in place of LLMs.
When a string is passed in as input, it is converted to a HumanMessage and then passed to the underlying model.
When a string is passed in as input, it is converted to a `HumanMessage` and then passed to the underlying model.
LangChain does not provide any ChatModels, rather we rely on third party integrations.
LangChain does not host any Chat Models, rather we rely on third party integrations.
We have some standardized parameters when constructing ChatModels:
- `model`: the name of the model
@@ -155,17 +155,27 @@ Please see the [tool calling section](/docs/concepts/#functiontool-calling) for
For specifics on how to use chat models, see the [relevant how-to guides here](/docs/how_to/#chat-models).
#### Multimodality
Some chat models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly light weight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal).
For a full list of LangChain model providers with multimodal models, [check out this table](/docs/integrations/chat/#advanced-features).
### LLMs
<span data-heading-keywords="llm,llms"></span>
Language models that takes a string as input and returns a string.
These are traditionally older models (newer models generally are `ChatModels`, see below).
These are traditionally older models (newer models generally are [Chat Models](/docs/concepts/#chat-models), see below).
Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input.
This makes them interchangeable with ChatModels.
This gives them the same interface as [Chat Models](/docs/concepts/#chat-models).
When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.
LangChain does not provide any LLMs, rather we rely on third party integrations.
LangChain does not host any LLMs, rather we rely on third party integrations.
For specifics on how to use LLMs, see the [relevant how-to guides here](/docs/how_to/#llms).
@@ -363,7 +373,7 @@ An essential component of a conversation is being able to refer to information i
At bare minimum, a conversational system should be able to access some window of past messages directly.
The concept of `ChatHistory` refers to a class in LangChain which can be used to wrap an arbitrary chain.
This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database
This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database.
Future interactions will then load those messages and pass them into the chain as part of the input.
### Documents
@@ -514,14 +524,6 @@ If you are still using AgentExecutor, do not fear: we still have a guide on [how
It is recommended, however, that you start to transition to LangGraph.
In order to assist in this we have put together a [transition guide on how to do so](/docs/how_to/migrate_agent).
### Multimodal
Some models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly light weight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
For specifics on how to use multimodal models, see the [relevant how-to guides here](/docs/how_to/#multimodal).
### Callbacks
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
@@ -596,7 +598,211 @@ For specifics on how to use callbacks, see the [relevant how-to guides here](/do
## Techniques
### Function/tool calling
### Streaming
Individual LLM calls often run for much longer than traditional resource requests.
This compounds when you build more complex chains or agents that require multiple reasoning steps.
Fortunately, LLMs generate output iteratively, which means it's possible to show sensible intermediate results
before the final response is ready. Consuming output as soon as it becomes available has therefore become a vital part of the UX
around building apps with LLMs to help alleviate latency issues, and LangChain aims to have first-class support for streaming.
Below, we'll discuss some concepts and considerations around streaming in LangChain.
#### Tokens
The unit that most model providers use to measure input and output is called a **token**.
Tokens are the basic units that language models read and generate when processing or producing text.
The exact definition of a token can vary depending on the specific way the model was trained -
for instance, in English, a token could be a single word like "apple", or a part of a word like "app".
When you send a model a prompt, the words and characters in the prompt are encoded into tokens using a **tokenizer**.
The model then streams back generated output tokens, which the tokenizer decodes into human-readable text.
The below example shows how OpenAI models tokenize `LangChain is cool!`:
![](/img/tokenization.png)
You can see that it gets split into 5 different tokens, and that the boundaries between tokens are not exactly the same as word boundaries.
The reason language models use tokens rather than something more immediately intuitive like "characters"
has to do with how they process and understand text. At a high-level, language models iteratively predict their next generated output based on
the initial input and their previous generations. Training the model using tokens allows language models to handle linguistic
units (like words or subwords) that carry meaning, rather than individual characters, which makes it easier for the model
to learn and understand the structure of the language, including grammar and context.
Furthermore, using tokens can also improve efficiency, since the model processes fewer units of text compared to character-level processing.
#### Callbacks
The lowest level way to stream outputs from LLMs in LangChain is via the [callbacks](/docs/concepts/#callbacks) system. You can pass a
callback handler that handles the [`on_llm_new_token`](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_new_token) event into LangChain components. When that component is invoked, any
[LLM](/docs/concepts/#llms) or [chat model](/docs/concepts/#chat-models) contained in the component calls
the callback with the generated token. Within the callback, you could pipe the tokens into some other destination, e.g. a HTTP response.
You can also handle the [`on_llm_end`](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html#langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.on_llm_end) event to perform any necessary cleanup.
You can see [this how-to section](/docs/how_to/#callbacks) for more specifics on using callbacks.
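For example, a minimal handler that pipes tokens to stdout might look like this (the handler name is illustrative; the model mirrors the `.stream()` example below):
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.callbacks import BaseCallbackHandler

class StdoutTokenHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Pipe each streamed token somewhere useful, e.g. stdout or an HTTP response.
        print(token, end="|", flush=True)

model = ChatAnthropic(model="claude-3-sonnet-20240229", callbacks=[StdoutTokenHandler()])
model.invoke("what color is the sky?")
```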
Callbacks were the first technique for streaming introduced in LangChain. While powerful and generalizable,
they can be unwieldy for developers. For example:
- You need to explicitly initialize and manage some aggregator or other stream to collect results.
- The execution order isn't explicitly guaranteed, and you could theoretically have a callback run after the `.invoke()` method finishes.
- Providers would often make you pass an additional parameter to stream outputs instead of returning them all at once.
- You would often ignore the result of the actual model call in favor of callback results.
#### `.stream()` and `.astream()`
LangChain also includes the `.stream()` method (and the equivalent `.astream()` method for [async](https://docs.python.org/3/library/asyncio.html) environments) as a more ergonomic streaming interface.
`.stream()` returns an iterator, which you can consume with a simple `for` loop. Here's an example with a chat model:
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-sonnet-20240229")
for chunk in model.stream("what color is the sky?"):
print(chunk.content, end="|", flush=True)
```
For models (or other components) that don't support streaming natively, this iterator would just yield a single chunk, but
you could still use the same general pattern. Using `.stream()` will also automatically call the model in streaming mode
without the need to provide additional config.
The type of each outputted chunk depends on the type of component - for example, chat models yield [`AIMessageChunks`](https://api.python.langchain.com/en/latest/messages/langchain_core.messages.ai.AIMessageChunk.html).
Because this method is part of [LangChain Expression Language](/docs/concepts/#langchain-expression-language-lcel),
you can handle formatting differences from different outputs using an [output parser](/docs/concepts/#output-parsers) to transform
each yielded chunk.
You can check out [this guide](/docs/how_to/streaming/#using-stream) for more detail on how to use `.stream()`.
#### `.astream_events()`
While the `.stream()` method is easier to use than callbacks, it only returns one type of value. This is fine for single LLM calls,
but as you build more complex chains of several LLM calls together, you may want to use the intermediate values of
the chain alongside the final output - for example, returning sources alongside the final generation when building a chat
over documents app.
There are ways to do this using the aforementioned callbacks, or by constructing your chain in such a way that it passes intermediate
values to the end with something like [`.assign()`](/docs/how_to/passthrough/), but LangChain also includes an
`.astream_events()` method that combines the flexibility of callbacks with the ergonomics of `.stream()`. When called, it returns an iterator
which yields [various types of events](/docs/how_to/streaming/#event-reference) that you can filter and process according
to the needs of your project.
Here's one small example that prints just events containing streamed chat model output:
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-sonnet-20240229")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
parser = StrOutputParser()
chain = prompt | model | parser
async for event in chain.astream_events({"topic": "parrot"}, version="v2"):
kind = event["event"]
if kind == "on_chat_model_stream":
print(event, end="|", flush=True)
```
You can roughly think of it as an iterator over callback events (though the format differs) - and you can use it on almost all LangChain components!
See [this guide](/docs/how_to/streaming/#using-stream-events) for more detailed information on how to use `.astream_events()`.
### Structured output
LLMs are capable of generating arbitrary text. This enables the model to respond appropriately to a wide
range of inputs, but for some use-cases, it can be useful to constrain the LLM's output
to a specific format or structure. This is referred to as **structured output**.
For example, if the output is to be stored in a relational database,
it is much easier if the model generates output that adheres to a defined schema or format.
[Extracting specific information](/docs/tutorials/extraction/) from unstructured text is another
case where this is particularly useful. Most commonly, the output format will be JSON,
though other formats such as [YAML](/docs/how_to/output_parser_yaml/) can be useful too. Below, we'll discuss
a few ways to get structured output from models in LangChain.
#### `.with_structured_output()`
For convenience, some LangChain chat models support a `.with_structured_output()` method.
This method only requires a schema as input, and returns a dict or Pydantic object.
Generally, this method is only present on models that support one of the more advanced methods described below,
and will use one of them under the hood. It takes care of importing a suitable output parser and
formatting the schema in the right format for the model.
For more information, check out this [how-to guide](/docs/how_to/structured_output/#the-with_structured_output-method).
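For example, a minimal sketch with a Pydantic schema (the schema and model are illustrative):
```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")

structured_model = ChatOpenAI(model="gpt-4o").with_structured_output(Joke)
structured_model.invoke("Tell me a joke about cats")
# -> Joke(setup='...', punchline='...')
```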
#### Raw prompting
The most intuitive way to get a model to structure output is to ask nicely.
In addition to your query, you can give instructions describing what kind of output you'd like, then
parse the output using an [output parser](/docs/concepts/#output-parsers) to convert the raw
model message or string output into something more easily manipulated.
The biggest benefit to raw prompting is its flexibility:
- Raw prompting does not require any special model features, only sufficient reasoning capability to understand
the passed schema.
- You can prompt for any format you'd like, not just JSON. This can be useful if the model you
are using is more heavily trained on a certain type of data, such as XML or YAML.
However, there are some drawbacks too:
- LLMs are non-deterministic, and prompting a LLM to consistently output data in the exactly correct format
for smooth parsing can be surprisingly difficult and model-specific.
- Individual models have quirks depending on the data they were trained on, and optimizing prompts can be quite difficult.
Some may be better at interpreting [JSON schema](https://json-schema.org/), others may be best with TypeScript definitions,
and still others may prefer XML.
While we'll next go over some ways that you can take advantage of features offered by
model providers to increase reliability, prompting techniques remain important for tuning your
results no matter what method you choose.
#### JSON mode
<span data-heading-keywords="json mode"></span>
Some models, such as [Mistral](/docs/integrations/chat/mistralai/), [OpenAI](/docs/integrations/chat/openai/),
[Together AI](/docs/integrations/chat/together/) and [Ollama](/docs/integrations/chat/ollama/),
support a feature called **JSON mode**, usually enabled via config.
When enabled, JSON mode will constrain the model's output to always be some sort of valid JSON.
Often they require some custom prompting, but it's usually much less burdensome and along the lines of,
`"you must always return JSON"`, and the [output is easier to parse](/docs/how_to/output_parser_json/).
It's also generally simpler and more commonly available than tool calling.
Here's an example:
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain.output_parsers.json import SimpleJsonOutputParser
model = ChatOpenAI(
model="gpt-4o",
model_kwargs={ "response_format": { "type": "json_object" } },
)
prompt = ChatPromptTemplate.from_template(
"Answer the user's question to the best of your ability."
'You must always output a JSON object with an "answer" key and a "followup_question" key.'
"{question}"
)
chain = prompt | model | SimpleJsonOutputParser()
chain.invoke({ "question": "What is the powerhouse of the cell?" })
```
```
{'answer': 'The powerhouse of the cell is the mitochondrion. It is responsible for producing energy in the form of ATP through cellular respiration.',
'followup_question': 'Would you like to know more about how mitochondria produce energy?'}
```
For a full list of model providers that support JSON mode, see [this table](/docs/integrations/chat/#advanced-features).
#### Function/tool calling
:::info
We use the term tool calling interchangeably with function calling. Although
@@ -614,8 +820,10 @@ from unstructured text, you could give the model an "extraction" tool that takes
parameters matching the desired schema, then treat the generated output as your final
result.
A tool call includes a name, arguments dict, and an optional identifier. The
arguments dict is structured `{argument_name: argument_value}`.
For models that support it, tool calling can be very convenient. It removes the
guesswork around how best to prompt schemas in favor of a built-in model feature. It can also
more naturally support agentic flows, since you can just pass multiple tool schemas instead
of fiddling with enums or unions.
Many LLM providers, including [Anthropic](https://www.anthropic.com/),
[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai),
@@ -632,14 +840,16 @@ LangChain provides a standardized interface for tool calling that is consistent
The standard interface consists of:
* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call.
* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call. This method accepts [LangChain tools](/docs/concepts/#tools).
* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
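Here's a minimal sketch of how those two pieces fit together, assuming an OpenAI chat model and a toy `multiply` tool defined purely for illustration:
```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm = ChatOpenAI(model="gpt-4o")

# bind_tools attaches the tool's schema so the model can request calls to it.
llm_with_tools = llm.bind_tools([multiply])

ai_msg = llm_with_tools.invoke("What is 6 times 7?")

# tool_calls exposes any requested calls in a model-agnostic format, e.g.
# [{'name': 'multiply', 'args': {'a': 6, 'b': 7}, 'id': '...'}]
print(ai_msg.tool_calls)
```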
There are two main use cases for function/tool calling:
The following how-to guides are good practical resources for using function/tool calling:
- [How to return structured data from an LLM](/docs/how_to/structured_output/)
- [How to use a model to call tools](/docs/how_to/tool_calling/)
For a full list of model providers that support tool calling, [see this table](/docs/integrations/chat/#advanced-features).
### Retrieval
LangChain provides several advanced retrieval types. A full list is below, along with the following information:

View File

@@ -55,7 +55,7 @@ The below sections are listed roughly in order of increasing level of abstractio
### Expression Language
[LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language) is the fundamental way that most LangChain components fit together, and this section is designed to teach
[LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language-lcel) is the fundamental way that most LangChain components fit together, and this section is designed to teach
developers how to use it to build with LangChain's primitives effectively.
This section should contains **Tutorials** that teach how to stream and use LCEL primitives for more abstract tasks, **Explanations** of specific behaviors,

View File

@@ -48,7 +48,7 @@ In a similar vein, we do enforce certain linting, formatting, and documentation
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
# 🌟 Recognition
### 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.

View File

@@ -15,12 +15,22 @@ Here's the structure visualized as a tree:
├── cookbook # Tutorials and examples
├── docs # Contains content for the documentation here: https://python.langchain.com/
├── libs
│ ├── langchain # Main package
│ ├── langchain
│ │ ├── langchain
│ │ ├── tests/unit_tests # Unit tests (present in each package not shown for brevity)
│ │ ├── tests/integration_tests # Integration tests (present in each package not shown for brevity)
│ ├── langchain-community # Third-party integrations
│ ├── langchain-core # Base interfaces for key abstractions
│ ├── langchain-experimental # Experimental components and chains
│ ├── community # Third-party integrations
│ ├── langchain-community
│ ├── core # Base interfaces for key abstractions
│ │ ├── langchain-core
│ ├── experimental # Experimental components and chains
│ │ ├── langchain-experimental
│ ├── cli # Command line interface
│ │ ├── langchain-cli
│ ├── text-splitters
│ │ ├── langchain-text-splitters
│ ├── standard-tests
│ │ ├── langchain-standard-tests
│ ├── partners
│ ├── langchain-partner-1
│ ├── langchain-partner-2

View File

@@ -14,6 +14,7 @@ For comprehensive descriptions of every class and function see the [API Referenc
## Installation
- [How to: install LangChain packages](/docs/how_to/installation/)
- [How to: use LangChain with different Pydantic versions](/docs/how_to/pydantic_compatibility)
## Key features

View File

@@ -60,7 +60,7 @@
" * document addition by id (`add_documents` method with `ids` argument)\n",
" * delete by id (`delete` method with `ids` argument)\n",
"\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
"Compatible Vectorstores: `Aerospike`, `AnalyticDB`, `AstraDB`, `AwaDB`, `AzureCosmosDBNoSqlVectorSearch`, `AzureCosmosDBVectorSearch`, `Bagel`, `Cassandra`, `Chroma`, `CouchbaseVectorStore`, `DashVector`, `DatabricksVectorSearch`, `DeepLake`, `Dingo`, `ElasticVectorSearch`, `ElasticsearchStore`, `FAISS`, `HanaDB`, `Milvus`, `MyScale`, `OpenSearchVectorSearch`, `PGVector`, `Pinecone`, `Qdrant`, `Redis`, `Rockset`, `ScaNN`, `SupabaseVectorStore`, `SurrealDBStore`, `TimescaleVector`, `Vald`, `VDMS`, `Vearch`, `VespaStore`, `Weaviate`, `Yellowbrick`, `ZepVectorStore`, `TencentVectorDB`, `OpenSearchVectorSearch`.\n",
" \n",
"## Caution\n",
"\n",

View File

@@ -94,7 +94,7 @@
"source": [
"## LCEL\n",
"\n",
"Output parsers implement the [Runnable interface](/docs/concepts#interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"Output parsers implement the [Runnable interface](/docs/concepts#interface), the basic building block of the [LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language-lcel). This means they support `invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_log` calls.\n",
"\n",
"Output parsers accept a string or `BaseMessage` as input and can return an arbitrary type."
]

View File

@@ -0,0 +1,105 @@
# How to use LangChain with different Pydantic versions
- Pydantic v2 was released in June 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
- v2 contains a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)
- Pydantic v2 and v1 are under the same package name, so both versions cannot be installed at the same time
## LangChain Pydantic migration plan
As of `langchain>=0.0.267`, LangChain will allow users to install either Pydantic V1 or V2.
* Internally LangChain will continue to [use V1](https://docs.pydantic.dev/latest/migration/#continue-using-pydantic-v1-features).
* During this time, users can pin their pydantic version to v1 to avoid breaking changes, or start a partial
migration using pydantic v2 throughout their code, but avoiding mixing v1 and v2 code for LangChain (see below).
Users can either pin to pydantic v1 and upgrade their code in one go once LangChain has migrated to v2 internally, or they can start a partial migration to v2, but must avoid mixing v1 and v2 code for LangChain.
Below are two examples showing how to avoid mixing pydantic v1 and v2 code in
the case of inheritance and in the case of passing objects to LangChain.
**Example 1: Extending via inheritance**
**YES**
```python
from langchain_core.tools import BaseTool
from pydantic.v1 import Field, root_validator, validator


class CustomTool(BaseTool):  # BaseTool is v1 code
    x: int = Field(default=1)

    def _run(*args, **kwargs):
        return "hello"

    @validator('x')  # v1 code
    @classmethod
    def validate_x(cls, x: int) -> int:
        return 1


CustomTool(
    name='custom_tool',
    description="hello",
    x=1,
)
```
Mixing Pydantic v2 primitives with Pydantic v1 primitives can raise cryptic errors.
**NO**
```python
from langchain_core.tools import BaseTool
from pydantic import Field, field_validator  # pydantic v2


class CustomTool(BaseTool):  # BaseTool is v1 code
    x: int = Field(default=1)

    def _run(*args, **kwargs):
        return "hello"

    @field_validator('x')  # v2 code
    @classmethod
    def validate_x(cls, x: int) -> int:
        return 1


CustomTool(
    name='custom_tool',
    description="hello",
    x=1,
)
```
**Example 2: Passing objects to LangChain**
**YES**
```python
from langchain_core.tools import Tool
from pydantic.v1 import BaseModel, Field  # <-- Uses v1 namespace


class CalculatorInput(BaseModel):
    question: str = Field()


Tool.from_function(  # <-- tool uses v1 namespace
    func=lambda question: 'hello',
    name="Calculator",
    description="useful for when you need to answer questions about math",
    args_schema=CalculatorInput,
)
```
**NO**
```python
from langchain_core.tools import Tool
from pydantic import BaseModel, Field  # <-- Uses v2 namespace


class CalculatorInput(BaseModel):
    question: str = Field()


Tool.from_function(  # <-- tool uses v1 namespace
    func=lambda question: 'hello',
    name="Calculator",
    description="useful for when you need to answer questions about math",
    args_schema=CalculatorInput,
)
```
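If you're unsure which major Pydantic version is active in a given environment, a quick check like the following can help (a minimal sketch; note that the `pydantic.v1` compatibility namespace only exists when Pydantic 2 is installed):
```python
import pydantic

print(pydantic.VERSION)  # prints the installed version string

# The v1 compatibility namespace ships only with Pydantic 2,
# so this import pattern works under either major version:
try:
    from pydantic.v1 import BaseModel  # Pydantic 2.x installed
except ImportError:
    from pydantic import BaseModel  # Pydantic 1.x installed
```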

View File

@@ -14,7 +14,7 @@
"We will cover two approaches:\n",
"\n",
"1. Using the built-in [create_retrieval_chain](https://api.python.langchain.com/en/latest/chains/langchain.chains.retrieval.create_retrieval_chain.html), which returns sources by default;\n",
"2. Using a simple [LCEL](/docs/concepts#langchain-expression-language) implementation, to show the operating principle."
"2. Using a simple [LCEL](/docs/concepts#langchain-expression-language-lcel) implementation, to show the operating principle."
]
},
{

View File

@@ -33,6 +33,8 @@
"\n",
"## The `.with_structured_output()` method\n",
"\n",
"<span data-heading-keywords=\"with_structured_output\"></span>\n",
"\n",
":::info Supported models\n",
"\n",
"You can find a [list of models that support this method here](/docs/integrations/chat/).\n",

View File

@@ -167,13 +167,83 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"llm_with_tools = llm.bind_tools(tools)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also use the `tool_choice` parameter to ensure certain behavior. For example, we can force our tool to call the multiply tool by using the following code:"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_9cViskmLvPnHjXk9tbVla5HA', 'function': {'arguments': '{\"a\":2,\"b\":4}', 'name': 'Multiply'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 9, 'prompt_tokens': 103, 'total_tokens': 112}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-095b827e-2bdd-43bb-8897-c843f4504883-0', tool_calls=[{'name': 'Multiply', 'args': {'a': 2, 'b': 4}, 'id': 'call_9cViskmLvPnHjXk9tbVla5HA'}], usage_metadata={'input_tokens': 103, 'output_tokens': 9, 'total_tokens': 112})"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_forced_to_multiply = llm.bind_tools(tools, tool_choice=\"Multiply\")\n",
"llm_forced_to_multiply.invoke(\"what is 2 + 4\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Even if we pass it something that doesn't require multiplcation - it will still call the tool!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can also just force our tool to select at least one of our tools by passing in the \"any\" (or \"required\" which is OpenAI specific) keyword to the `tool_choice` parameter."
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W', 'function': {'arguments': '{\"a\":1,\"b\":2}', 'name': 'Add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 15, 'prompt_tokens': 94, 'total_tokens': 109}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-28f75260-9900-4bed-8cd3-f1579abb65e5-0', tool_calls=[{'name': 'Add', 'args': {'a': 1, 'b': 2}, 'id': 'call_mCSiJntCwHJUBfaHZVUB2D8W'}], usage_metadata={'input_tokens': 94, 'output_tokens': 15, 'total_tokens': 109})"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_forced_to_use_tool = llm.bind_tools(tools, tool_choice=\"any\")\n",
"llm_forced_to_use_tool.invoke(\"What day is today?\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, even though the prompt didn't really suggest a tool call, our LLM made one since it was forced to do so. You can look at the docs for [`bind_tool`](https://api.python.langchain.com/en/latest/chat_models/langchain_openai.chat_models.base.BaseChatOpenAI.html#langchain_openai.chat_models.base.BaseChatOpenAI.bind_tools) to learn about all the ways to customize how your LLM selects tools."
]
},
{
"cell_type": "markdown",
"metadata": {},
@@ -711,7 +781,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -36,7 +36,7 @@
"\n",
"When using 3rd party tools, make sure that you understand how the tool works, what permissions\n",
"it has. Read over its documentation and check if anything is required from you\n",
"from a security point of view. Please see our [security](https://python.langchain.com/v0.1/docs/security/) \n",
"from a security point of view. Please see our [security](https://python.langchain.com/v0.2/docs/security/) \n",
"guidelines for more information.\n",
"\n",
":::\n",

View File

@@ -0,0 +1,245 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Upstash Ratelimit Callback\n",
"\n",
"In this guide, we will go over how to add rate limiting based on number of requests or the number of tokens using `UpstashRatelimitHandler`. This handler uses [ratelimit library of Upstash](https://github.com/upstash/ratelimit-py/), which utilizes [Upstash Redis](https://upstash.com/docs/redis/overall/getstarted).\n",
"\n",
"Upstash Ratelimit works by sending an HTTP request to Upstash Redis everytime the `limit` method is called. Remaining tokens/requests of the user are checked and updated. Based on the remaining tokens, we can stop the execution of costly operations like invoking an LLM or querying a vector store:\n",
"\n",
"```py\n",
"response = ratelimit.limit()\n",
"if response.allowed:\n",
" execute_costly_operation()\n",
"```\n",
"\n",
"`UpstashRatelimitHandler` allows you to incorporate the ratelimit logic into your chain in a few minutes.\n",
"\n",
"First, you will need to go to [the Upstash Console](https://console.upstash.com/login) and create a redis database ([see our docs](https://upstash.com/docs/redis/overall/getstarted)). After creating a database, you will need to set the environment variables:\n",
"\n",
"```\n",
"UPSTASH_REDIS_REST_URL=\"****\"\n",
"UPSTASH_REDIS_REST_TOKEN=\"****\"\n",
"```\n",
"\n",
"Next, you will need to install Upstash Ratelimit and Redis library with:\n",
"\n",
"```\n",
"pip install upstash-ratelimit upstash-redis\n",
"```\n",
"\n",
"You are now ready to add rate limiting to your chain!\n",
"\n",
"## Ratelimiting Per Request\n",
"\n",
"Let's imagine that we want to allow our users to invoke our chain 10 times per minute. Achieving this is as simple as:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in UpstashRatelimitHandler.on_chain_start callback: UpstashRatelimitError('Request limit reached!')\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Handling ratelimit. <class 'langchain_community.callbacks.upstash_ratelimit_callback.UpstashRatelimitError'>\n"
]
}
],
"source": [
"# set env variables\n",
"import os\n",
"\n",
"os.environ[\"UPSTASH_REDIS_REST_URL\"] = \"****\"\n",
"os.environ[\"UPSTASH_REDIS_REST_TOKEN\"] = \"****\"\n",
"\n",
"from langchain_community.callbacks import UpstashRatelimitError, UpstashRatelimitHandler\n",
"from langchain_core.runnables import RunnableLambda\n",
"from upstash_ratelimit import FixedWindow, Ratelimit\n",
"from upstash_redis import Redis\n",
"\n",
"# create ratelimit\n",
"ratelimit = Ratelimit(\n",
" redis=Redis.from_env(),\n",
" # 10 requests per window, where window size is 60 seconds:\n",
" limiter=FixedWindow(max_requests=10, window=60),\n",
")\n",
"\n",
"# create handler\n",
"user_id = \"user_id\" # should be a method which gets the user id\n",
"handler = UpstashRatelimitHandler(identifier=user_id, request_ratelimit=ratelimit)\n",
"\n",
"# create mock chain\n",
"chain = RunnableLambda(str)\n",
"\n",
"# invoke chain with handler:\n",
"try:\n",
" result = chain.invoke(\"Hello world!\", config={\"callbacks\": [handler]})\n",
"except UpstashRatelimitError:\n",
" print(\"Handling ratelimit.\", UpstashRatelimitError)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note that we pass the handler to the `invoke` method instead of passing the handler when defining the chain.\n",
"\n",
"For rate limiting algorithms other than `FixedWindow`, see [upstash-ratelimit docs](https://github.com/upstash/ratelimit-py?tab=readme-ov-file#ratelimiting-algorithms).\n",
"\n",
"Before executing any steps in our pipeline, ratelimit will check whether the user has passed the request limit. If so, `UpstashRatelimitError` is raised.\n",
"\n",
"## Ratelimiting Per Token\n",
"\n",
"Another option is to rate limit chain invokations based on:\n",
"1. number of tokens in prompt\n",
"2. number of tokens in prompt and LLM completion\n",
"\n",
"This only works if you have an LLM in your chain. Another requirement is that the LLM you are using should return the token usage in it's `LLMOutput`.\n",
"\n",
"### How it works\n",
"\n",
"The handler will get the remaining tokens before calling the LLM. If the remaining tokens is more than 0, LLM will be called. Otherwise `UpstashRatelimitError` will be raised.\n",
"\n",
"After LLM is called, token usage information will be used to subtracted from the remaining tokens of the user. No error is raised at this stage of the chain.\n",
"\n",
"### Configuration\n",
"\n",
"For the first configuration, simply initialize the handler like this:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ratelimit = Ratelimit(\n",
" redis=Redis.from_env(),\n",
" # 1000 tokens per window, where window size is 60 seconds:\n",
" limiter=FixedWindow(max_requests=1000, window=60),\n",
")\n",
"\n",
"handler = UpstashRatelimitHandler(identifier=user_id, token_ratelimit=ratelimit)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For the second configuration, here is how to initialize the handler:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ratelimit = Ratelimit(\n",
" redis=Redis.from_env(),\n",
" # 1000 tokens per window, where window size is 60 seconds:\n",
" limiter=FixedWindow(max_requests=1000, window=60),\n",
")\n",
"\n",
"handler = UpstashRatelimitHandler(\n",
" identifier=user_id,\n",
" token_ratelimit=ratelimit,\n",
" include_output_tokens=True, # set to True\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also employ ratelimiting based on requests and tokens at the same time, simply by passing both `request_ratelimit` and `token_ratelimit` parameters.\n",
"\n",
"Here is an example with a chain utilizing an LLM:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Error in UpstashRatelimitHandler.on_llm_start callback: UpstashRatelimitError('Token limit reached!')\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Handling ratelimit. <class 'langchain_community.callbacks.upstash_ratelimit_callback.UpstashRatelimitError'>\n"
]
}
],
"source": [
"# set env variables\n",
"import os\n",
"\n",
"os.environ[\"UPSTASH_REDIS_REST_URL\"] = \"****\"\n",
"os.environ[\"UPSTASH_REDIS_REST_TOKEN\"] = \"****\"\n",
"os.environ[\"OPENAI_API_KEY\"] = \"****\"\n",
"\n",
"from langchain_community.callbacks import UpstashRatelimitError, UpstashRatelimitHandler\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langchain_openai import ChatOpenAI\n",
"from upstash_ratelimit import FixedWindow, Ratelimit\n",
"from upstash_redis import Redis\n",
"\n",
"# create ratelimit\n",
"ratelimit = Ratelimit(\n",
" redis=Redis.from_env(),\n",
" # 500 tokens per window, where window size is 60 seconds:\n",
" limiter=FixedWindow(max_requests=500, window=60),\n",
")\n",
"\n",
"# create handler\n",
"user_id = \"user_id\" # should be a method which gets the user id\n",
"handler = UpstashRatelimitHandler(identifier=user_id, token_ratelimit=ratelimit)\n",
"\n",
"# create mock chain\n",
"as_str = RunnableLambda(str)\n",
"model = ChatOpenAI()\n",
"\n",
"chain = as_str | model\n",
"\n",
"# invoke chain with handler:\n",
"try:\n",
" result = chain.invoke(\"Hello world!\", config={\"callbacks\": [handler]})\n",
"except UpstashRatelimitError:\n",
" print(\"Handling ratelimit.\", UpstashRatelimitError)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "lc39",
"language": "python",
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.9.19"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -23,13 +23,11 @@
]
},
{
"cell_type": "raw",
"cell_type": "code",
"execution_count": null,
"id": "d83ba7de",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-openai"
]

View File

@@ -201,7 +201,7 @@
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
]
},
{

View File

@@ -2,33 +2,50 @@
"cells": [
{
"cell_type": "raw",
"id": "afaf8039",
"metadata": {},
"source": [
"---\n",
"sidebar_label: Google Cloud Vertex AI\n",
"keywords: [gemini, vertex, ChatVertexAI, gemini-pro]\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "e49f1e0d",
"metadata": {},
"source": [
"# ChatVertexAI\n",
"\n",
"Note: This is separate from the Google PaLM integration. Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"This page provides a quick overview for getting started with VertexAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatVertexAI features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html).\n",
"\n",
"ChatVertexAI exposes all foundational models available in Google Cloud:\n",
"ChatVertexAI exposes all foundational models available in Google Cloud, like `gemini-1.5-pro`, `gemini-1.5-flash`, etc. For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview).\n",
"\n",
"- Gemini (`gemini-pro` and `gemini-pro-vision`)\n",
"- PaLM 2 for Text (`text-bison`)\n",
"- Codey for Code Generation (`codechat-bison`)\n",
":::info Google Cloud VertexAI vs Google PaLM\n",
"\n",
"For a full and updated list of available models visit [VertexAI documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/overview).\n",
"The Google Cloud VertexAI integration is separate from the [Google PaLM integration](/docs/integrations/chat/google_generative_ai/). Google has chosen to offer an enterprise version of PaLM through GCP, and this supports the models made available through there. \n",
"\n",
"By default, Google Cloud [does not use](https://cloud.google.com/vertex-ai/docs/generative-ai/data-governance#foundation_model_development) customer data to train its foundation models as part of Google Cloud`s AI/ML Privacy Commitment. More details about how Google processes data can also be found in [Google's Customer Data Processing Addendum (CDPA)](https://cloud.google.com/terms/data-processing-addendum).\n",
":::\n",
"\n",
"To use `Google Cloud Vertex AI` PaLM you must have the `langchain-google-vertexai` Python package installed and either:\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/chat/google_vertex_ai) | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [ChatVertexAI](https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html) | [langchain-google-vertexai](https://api.python.langchain.com/en/latest/google_vertexai_api_reference.html) | ❌ | beta | ✅ | ![PyPI - Downloads](https://img.shields.io/pypi/dm/langchain-google-vertexai?style=flat-square&label=%20) | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-google-vertexai?style=flat-square&label=%20) |\n",
"\n",
"### Model features\n",
"| [Tool calling](/docs/how_to/tool_calling/) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | Native async | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
"| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
"| ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | \n",
"\n",
"## Setup\n",
"\n",
"To access VertexAI models you'll need to create a Google Cloud Platform account, set up credentials, and install the `langchain-google-vertexai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"To use the integration you must:\n",
"- Have credentials configured for your environment (gcloud, workload identity, etc...)\n",
"- Store the path to a service account JSON file as the GOOGLE_APPLICATION_CREDENTIALS environment variable\n",
"\n",
@@ -37,432 +54,156 @@
"For more information, see: \n",
"- https://cloud.google.com/docs/authentication/application-default-credentials#GAC\n",
"- https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-google-vertexai"
"\n",
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_google_vertexai import ChatVertexAI"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" J'aime la programmation.\")"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = \"You are a helpful assistant who translate English to French\"\n",
"human = \"Translate this sentence from English to French. I love programming.\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI()\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({})"
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"id": "0730d6a1-c893-4840-9817-5e5251676d5d",
"metadata": {},
"source": [
"Gemini doesn't support SystemMessage at the moment, but it can be added to the first human message in the row. If you want such behavior, just set the `convert_system_message_to_human` to `True`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"J'aime la programmation.\")"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = \"You are a helpful assistant who translate English to French\"\n",
"human = \"Translate this sentence from English to French. I love programming.\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"### Installation\n",
"\n",
"chat = ChatVertexAI(model=\"gemini-pro\", convert_system_message_to_human=True)\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If we want to construct a simple chain that takes user specified parameters:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' プログラミングが大好きです')"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = (\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
")\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI()\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"Japanese\",\n",
" \"text\": \"I love programming\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Code generation chat models\n",
"You can now leverage the Codey API for code chat within Vertex AI. The model available is:\n",
"- `codechat-bison`: for code assistance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" ```python\n",
"def is_prime(n):\n",
" \"\"\"\n",
" Check if a number is prime.\n",
"\n",
" Args:\n",
" n: The number to check.\n",
"\n",
" Returns:\n",
" True if n is prime, False otherwise.\n",
" \"\"\"\n",
"\n",
" # If n is 1, it is not prime.\n",
" if n == 1:\n",
" return False\n",
"\n",
" # Iterate over all numbers from 2 to the square root of n.\n",
" for i in range(2, int(n ** 0.5) + 1):\n",
" # If n is divisible by any number from 2 to its square root, it is not prime.\n",
" if n % i == 0:\n",
" return False\n",
"\n",
" # If n is divisible by no number from 2 to its square root, it is prime.\n",
" return True\n",
"\n",
"\n",
"def find_prime_numbers(n):\n",
" \"\"\"\n",
" Find all prime numbers up to a given number.\n",
"\n",
" Args:\n",
" n: The upper bound for the prime numbers to find.\n",
"\n",
" Returns:\n",
" A list of all prime numbers up to n.\n",
" \"\"\"\n",
"\n",
" # Create a list of all numbers from 2 to n.\n",
" numbers = list(range(2, n + 1))\n",
"\n",
" # Iterate over the list of numbers and remove any that are not prime.\n",
" for number in numbers:\n",
" if not is_prime(number):\n",
" numbers.remove(number)\n",
"\n",
" # Return the list of prime numbers.\n",
" return numbers\n",
"```\n"
]
}
],
"source": [
"chat = ChatVertexAI(model=\"codechat-bison\", max_tokens=1000, temperature=0.5)\n",
"\n",
"message = chat.invoke(\"Write a Python function generating all prime numbers\")\n",
"print(message.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Full generation info\n",
"\n",
"We can use the `generate` method to get back extra metadata like [safety attributes](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_attribute_confidence_scoring) and not just chat completions\n",
"\n",
"Note that the `generation_info` will be different depending if you're using a gemini model or not.\n",
"\n",
"### Gemini model\n",
"\n",
"`generation_info` will include:\n",
"\n",
"- `is_blocked`: whether generation was blocked or not\n",
"- `safety_ratings`: safety ratings' categories and probability labels"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"from pprint import pprint\n",
"\n",
"from langchain_core.messages import HumanMessage\n",
"from langchain_google_vertexai import HarmBlockThreshold, HarmCategory"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'citation_metadata': None,\n",
" 'is_blocked': False,\n",
" 'safety_ratings': [{'blocked': False,\n",
" 'category': 'HARM_CATEGORY_HATE_SPEECH',\n",
" 'probability_label': 'NEGLIGIBLE'},\n",
" {'blocked': False,\n",
" 'category': 'HARM_CATEGORY_DANGEROUS_CONTENT',\n",
" 'probability_label': 'NEGLIGIBLE'},\n",
" {'blocked': False,\n",
" 'category': 'HARM_CATEGORY_HARASSMENT',\n",
" 'probability_label': 'NEGLIGIBLE'},\n",
" {'blocked': False,\n",
" 'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT',\n",
" 'probability_label': 'NEGLIGIBLE'}],\n",
" 'usage_metadata': {'candidates_token_count': 6,\n",
" 'prompt_token_count': 12,\n",
" 'total_token_count': 18}}\n"
]
}
],
"source": [
"human = \"Translate this sentence from English to French. I love programming.\"\n",
"messages = [HumanMessage(content=human)]\n",
"\n",
"\n",
"chat = ChatVertexAI(\n",
" model_name=\"gemini-pro\",\n",
" safety_settings={\n",
" HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE\n",
" },\n",
")\n",
"\n",
"result = chat.generate([messages])\n",
"pprint(result.generations[0][0].generation_info)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Non-gemini model\n",
"\n",
"`generation_info` will include:\n",
"\n",
"- `is_blocked`: whether generation was blocked or not\n",
"- `safety_attributes`: a dictionary mapping safety attributes to their scores"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'errors': (),\n",
" 'grounding_metadata': {'citations': [], 'search_queries': []},\n",
" 'is_blocked': False,\n",
" 'safety_attributes': [{'Derogatory': 0.1, 'Insult': 0.1, 'Sexual': 0.2}],\n",
" 'usage_metadata': {'candidates_billable_characters': 88.0,\n",
" 'candidates_token_count': 24.0,\n",
" 'prompt_billable_characters': 58.0,\n",
" 'prompt_token_count': 12.0}}\n"
]
}
],
"source": [
"chat = ChatVertexAI() # default is `chat-bison`\n",
"\n",
"result = chat.generate([messages])\n",
"pprint(result.generations[0][0].generation_info)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Tool calling (a.k.a. function calling) with Gemini\n",
"\n",
"We can pass tool definitions to Gemini models to get the model to invoke those tools when appropriate. This is useful not only for LLM-powered tool use but also for getting structured outputs out of models more generally.\n",
"\n",
"With `ChatVertexAI.bind_tools()`, we can easily pass in Pydantic classes, dict schemas, LangChain tools, or even functions as tools to the model. Under the hood these are converted to a Gemini tool schema, which looks like:\n",
"```python\n",
"{\n",
" \"name\": \"...\", # tool name\n",
" \"description\": \"...\", # tool description\n",
" \"parameters\": {...} # tool input schema as JSONSchema\n",
"}\n",
"```"
"The LangChain VertexAI integration lives in the `langchain-google-vertexai` package:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "652d6238-1f87-422a-b135-f5abbb8652fc",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='', additional_kwargs={'function_call': {'name': 'GetWeather', 'arguments': '{\"location\": \"San Francisco, CA\"}'}}, response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'citation_metadata': None, 'usage_metadata': {'prompt_token_count': 41, 'candidates_token_count': 7, 'total_token_count': 48}}, id='run-05e760dc-0682-4286-88e1-5b23df69b083-0', tool_calls=[{'name': 'GetWeather', 'args': {'location': 'San Francisco, CA'}, 'id': 'cd2499c4-4513-4059-bfff-5321b6e922d0'}])"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"from langchain.pydantic_v1 import BaseModel, Field\n",
"\n",
"\n",
"class GetWeather(BaseModel):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
"\n",
" location: str = Field(..., description=\"The city and state, e.g. San Francisco, CA\")\n",
"\n",
"\n",
"llm = ChatVertexAI(model=\"gemini-pro\", temperature=0)\n",
"llm_with_tools = llm.bind_tools([GetWeather])\n",
"ai_msg = llm_with_tools.invoke(\n",
" \"what is the weather like in San Francisco\",\n",
")\n",
"ai_msg"
"%pip install -qU langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"id": "a38cde65-254d-4219-a441-068766c0d4b5",
"metadata": {},
"source": [
"The tool calls can be access via the `AIMessage.tool_calls` attribute, where they are extracted in a model-agnostic format:"
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import ChatVertexAI\n",
"\n",
"llm = ChatVertexAI(\n",
" model=\"gemini-1.5-flash-001\",\n",
" temperature=0,\n",
" max_tokens=None,\n",
" max_retries=6,\n",
" stop=None,\n",
" # other params...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "2b4f3e15",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "62e0dbc3",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'GetWeather',\n",
" 'args': {'location': 'San Francisco, CA'},\n",
" 'id': 'cd2499c4-4513-4059-bfff-5321b6e922d0'}]"
"AIMessage(content=\"J'adore programmer. \\n\", response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 20, 'candidates_token_count': 7, 'total_token_count': 27}}, id='run-7032733c-d05c-4f0c-a17a-6c575fdd1ae0-0', usage_metadata={'input_tokens': 20, 'output_tokens': 7, 'total_tokens': 27})"
]
},
"execution_count": 3,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"ai_msg.tool_calls"
"messages = [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ),\n",
" (\"human\", \"I love programming.\"),\n",
"]\n",
"ai_msg = llm.invoke(messages)\n",
"ai_msg"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"J'adore programmer. \n",
"\n"
]
}
],
"source": [
"print(ai_msg.content)"
]
},
{
"cell_type": "markdown",
"id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
"metadata": {},
"source": [
"For a complete guide on tool calling [head here](/docs/how_to/function_calling)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Structured outputs\n",
"## Chaining\n",
"\n",
"Many applications require structured model outputs. Tool calling makes it much easier to do this reliably. The [with_structured_outputs](https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html) constructor provides a simple interface built on top of tool calling for getting structured outputs out of a model. For a complete guide on structured outputs [head here](/docs/how_to/structured_output).\n",
"\n",
"### ChatVertexAI.with_structured_outputs()\n",
"\n",
"To get structured outputs from our Gemini model all we need to do is to specify a desired schema, either as a Pydantic class or as a JSON schema, "
"We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"Person(name='Stefan', age=13)"
"AIMessage(content='Ich liebe Programmieren. \\n', response_metadata={'is_blocked': False, 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability_label': 'NEGLIGIBLE', 'blocked': False}], 'usage_metadata': {'prompt_token_count': 15, 'candidates_token_count': 8, 'total_token_count': 23}}, id='run-c71955fd-8dc1-422b-88a7-853accf4811b-0', usage_metadata={'input_tokens': 15, 'output_tokens': 8, 'total_tokens': 23})"
]
},
"execution_count": 6,
@@ -471,139 +212,36 @@
}
],
"source": [
"class Person(BaseModel):\n",
" \"\"\"Save information about a person.\"\"\"\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"\n",
" name: str = Field(..., description=\"The person's name.\")\n",
" age: int = Field(..., description=\"The person's age.\")\n",
"\n",
"\n",
"structured_llm = llm.with_structured_output(Person)\n",
"structured_llm.invoke(\"Stefan is already 13 years old\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### [Legacy] Using `create_structured_runnable()`\n",
"\n",
"The legacy wasy to get structured outputs is using the `create_structured_runnable` constructor:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_google_vertexai import create_structured_runnable\n",
"\n",
"chain = create_structured_runnable(Person, llm)\n",
"chain.invoke(\"My name is Erick and I'm 27 years old\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Asynchronous calls\n",
"\n",
"We can make asynchronous calls via the Runnables [Async Interface](/docs/concepts#interface)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# for running these examples in the notebook:\n",
"import asyncio\n",
"\n",
"import nest_asyncio\n",
"\n",
"nest_asyncio.apply()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' अहं प्रोग्रामनं प्रेमामि')"
]
},
"execution_count": null,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = (\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chat = ChatVertexAI(model=\"chat-bison\", max_tokens=1000, temperature=0.5)\n",
"chain = prompt | chat\n",
"\n",
"asyncio.run(\n",
" chain.ainvoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"Sanskrit\",\n",
" \"text\": \"I love programming\",\n",
" }\n",
" )\n",
"chain = prompt | llm\n",
"chain.invoke(\n",
" {\n",
" \"input_language\": \"English\",\n",
" \"output_language\": \"German\",\n",
" \"input\": \"I love programming.\",\n",
" }\n",
")"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
"metadata": {},
"source": [
"## Streaming calls\n",
"## API reference\n",
"\n",
"We can also stream outputs via the `stream` method:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" The five most populous countries in the world are:\n",
"1. China (1.4 billion)\n",
"2. India (1.3 billion)\n",
"3. United States (331 million)\n",
"4. Indonesia (273 million)\n",
"5. Pakistan (220 million)"
]
}
],
"source": [
"import sys\n",
"\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [(\"human\", \"List out the 5 most populous countries in the world\")]\n",
")\n",
"\n",
"chat = ChatVertexAI()\n",
"\n",
"chain = prompt | chat\n",
"\n",
"for chunk in chain.stream({}):\n",
" sys.stdout.write(chunk.content)\n",
" sys.stdout.flush()"
"For detailed documentation of all ChatVertexAI features and configurations, like how to send multimodal inputs and configure safety settings, head to the API reference: https://api.python.langchain.com/en/latest/chat_models/langchain_google_vertexai.chat_models.ChatVertexAI.html"
]
}
],
@@ -627,5 +265,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 4
"nbformat_minor": 5
}

View File

@@ -2,10 +2,15 @@
"cells": [
{
"cell_type": "raw",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Groq\n",
"keywords: [chatgroq]\n",
"---"
]
},
@@ -15,45 +20,67 @@
"source": [
"# Groq\n",
"\n",
"Install the langchain-groq package if not already installed:\n",
"LangChain supports integration with [Groq](https://groq.com/) chat models. Groq specializes in fast AI inference.\n",
"\n",
"```bash\n",
"pip install langchain-groq\n",
"```\n",
"\n",
"Request an [API key](https://wow.groq.com) and set it as an environment variable:\n",
"\n",
"```bash\n",
"export GROQ_API_KEY=<YOUR API KEY>\n",
"```\n",
"\n",
"Alternatively, you may configure the API key when you initialize ChatGroq."
"To get started, you'll first need to install the langchain-groq package:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain-groq"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Import the ChatGroq class and initialize it with a model:"
"Request an [API key](https://wow.groq.com) and set it as an environment variable:\n",
"\n",
"```bash\n",
"export GROQ_API_KEY=<YOUR API KEY>\n",
"```\n",
"\n",
"Alternatively, you may configure the API key when you initialize ChatGroq.\n",
"\n",
"Here's an example of it in action:"
]
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 8,
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"Low latency is crucial for Large Language Models (LLMs) because it directly impacts the user experience, model performance, and overall efficiency. Here are some reasons why low latency is essential for LLMs:\\n\\n1. **Real-time Interaction**: LLMs are often used in applications that require real-time interaction, such as chatbots, virtual assistants, and language translation. Low latency ensures that the model responds quickly to user input, providing a seamless and engaging experience.\\n2. **Conversational Flow**: In conversational AI, latency can disrupt the natural flow of conversation. Low latency helps maintain a smooth conversation, allowing users to respond quickly and naturally, without feeling like they're waiting for the model to catch up.\\n3. **Model Performance**: High latency can lead to increased error rates, as the model may struggle to keep up with the input pace. Low latency enables the model to process information more efficiently, resulting in better accuracy and performance.\\n4. **Scalability**: As the number of users and requests increases, low latency becomes even more critical. It allows the model to handle a higher volume of requests without sacrificing performance, making it more scalable and efficient.\\n5. **Resource Utilization**: Low latency can reduce the computational resources required to process requests. By minimizing latency, you can optimize resource allocation, reduce costs, and improve overall system efficiency.\\n6. **User Experience**: High latency can lead to frustration, abandonment, and a poor user experience. Low latency ensures that users receive timely responses, which is essential for building trust and satisfaction.\\n7. **Competitive Advantage**: In applications like customer service or language translation, low latency can be a key differentiator. It can provide a competitive advantage by offering a faster and more responsive experience, setting your application apart from others.\\n8. **Edge Computing**: With the increasing adoption of edge computing, low latency is critical for processing data closer to the user. This reduces latency even further, enabling real-time processing and analysis of data.\\n9. **Real-time Analytics**: Low latency enables real-time analytics and insights, which are essential for applications like sentiment analysis, trend detection, and anomaly detection.\\n10. **Future-Proofing**: As LLMs continue to evolve and become more complex, low latency will become even more critical. By prioritizing low latency now, you'll be better prepared to handle the demands of future LLM applications.\\n\\nIn summary, low latency is vital for LLMs because it ensures a seamless user experience, improves model performance, and enables efficient resource utilization. By prioritizing low latency, you can build more effective, scalable, and efficient LLM applications that meet the demands of real-time interaction and processing.\", response_metadata={'token_usage': {'completion_tokens': 541, 'prompt_tokens': 33, 'total_tokens': 574, 'completion_time': 1.499777658, 'prompt_time': 0.008344704, 'queue_time': None, 'total_time': 1.508122362}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_87cbfbbc4d', 'finish_reason': 'stop', 'logprobs': None}, id='run-49dad960-ace8-4cd7-90b3-2db99ecbfa44-0')"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_groq import ChatGroq"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"chat = ChatGroq(temperature=0, model_name=\"mixtral-8x7b-32768\")"
"from langchain_groq import ChatGroq\n",
"\n",
"chat = ChatGroq(\n",
" temperature=0,\n",
" model=\"llama3-70b-8192\",\n",
" # api_key=\"\" # Optional if not set as an environment variable\n",
")\n",
"\n",
"system = \"You are a helpful assistant.\"\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({\"text\": \"Explain the importance of low latency for LLMs.\"})"
]
},
{
@@ -62,97 +89,206 @@
"source": [
"You can view the available models [here](https://console.groq.com/docs/models).\n",
"\n",
"If you do not want to set your API key in the environment, you can pass it directly to the client:\n",
"```python\n",
"chat = ChatGroq(temperature=0, groq_api_key=\"YOUR_API_KEY\", model_name=\"mixtral-8x7b-32768\")\n",
"## Tool calling\n",
"\n",
"```"
"Groq chat models support [tool calling](/docs/how_to/tool_calling/) to generate output matching a specific schema. The model may choose to call multiple tools or the same tool multiple times if appropriate.\n",
"\n",
"Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'name': 'get_current_weather',\n",
" 'args': {'location': 'San Francisco', 'unit': 'Celsius'},\n",
" 'id': 'call_pydj'},\n",
" {'name': 'get_current_weather',\n",
" 'args': {'location': 'Tokyo', 'unit': 'Celsius'},\n",
" 'id': 'call_jgq3'}]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from typing import Optional\n",
"\n",
"from langchain_core.tools import tool\n",
"\n",
"\n",
"@tool\n",
"def get_current_weather(location: str, unit: Optional[str]):\n",
" \"\"\"Get the current weather in a given location\"\"\"\n",
" return \"Cloudy with a chance of rain.\"\n",
"\n",
"\n",
"tool_model = chat.bind_tools([get_current_weather], tool_choice=\"auto\")\n",
"\n",
"res = tool_model.invoke(\"What is the weather like in San Francisco and Tokyo?\")\n",
"\n",
"res.tool_calls"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Write a prompt and invoke ChatGroq to create completions:"
"### `.with_structured_output()`\n",
"\n",
"You can also use the convenience [`.with_structured_output()`](/docs/how_to/structured_output/#the-with_structured_output-method) method to coerce `ChatGroq` into returning a structured output.\n",
"Here is an example:"
]
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Low Latency Large Language Models (LLMs) are a type of artificial intelligence model that can understand and generate human-like text. The term \"low latency\" refers to the model\\'s ability to process and respond to inputs quickly, with minimal delay.\\n\\nThe importance of low latency in LLMs can be explained through the following points:\\n\\n1. Improved user experience: In real-time applications such as chatbots, virtual assistants, and interactive games, users expect quick and responsive interactions. Low latency LLMs can provide instant feedback and responses, creating a more seamless and engaging user experience.\\n\\n2. Better decision-making: In time-sensitive scenarios, such as financial trading or autonomous vehicles, low latency LLMs can quickly process and analyze vast amounts of data, enabling faster and more informed decision-making.\\n\\n3. Enhanced accessibility: For individuals with disabilities, low latency LLMs can help create more responsive and inclusive interfaces, such as voice-controlled assistants or real-time captioning systems.\\n\\n4. Competitive advantage: In industries where real-time data analysis and decision-making are crucial, low latency LLMs can provide a competitive edge by enabling businesses to react more quickly to market changes, customer needs, or emerging opportunities.\\n\\n5. Scalability: Low latency LLMs can efficiently handle a higher volume of requests and interactions, making them more suitable for large-scale applications and services.\\n\\nIn summary, low latency is an essential aspect of LLMs, as it significantly impacts user experience, decision-making, accessibility, competitiveness, and scalability. By minimizing delays and response times, low latency LLMs can unlock new possibilities and applications for artificial intelligence in various industries and scenarios.')"
"Joke(setup='Why did the cat join a band?', punchline='Because it wanted to be the purr-cussionist!', rating=None)"
]
},
"execution_count": 29,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"system = \"You are a helpful assistant.\"\n",
"human = \"{text}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"from langchain_core.pydantic_v1 import BaseModel, Field\n",
"\n",
"chain = prompt | chat\n",
"chain.invoke({\"text\": \"Explain the importance of low latency LLMs.\"})"
"\n",
"class Joke(BaseModel):\n",
" \"\"\"Joke to tell user.\"\"\"\n",
"\n",
" setup: str = Field(description=\"The setup of the joke\")\n",
" punchline: str = Field(description=\"The punchline to the joke\")\n",
" rating: Optional[int] = Field(description=\"How funny the joke is, from 1 to 10\")\n",
"\n",
"\n",
"structured_llm = chat.with_structured_output(Joke)\n",
"\n",
"structured_llm.invoke(\"Tell me a joke about cats\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## `ChatGroq` also supports async and streaming functionality:"
"Behind the scenes, this takes advantage of the above tool calling functionality.\n",
"\n",
"## Async"
]
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": 12,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\"There's a star that shines up in the sky,\\nThe Sun, that makes the day bright and spry.\\nIt rises and sets,\\nIn a daily, predictable bet,\\nGiving life to the world, oh my!\")"
"AIMessage(content='Here is a limerick about the sun:\\n\\nThere once was a sun in the sky,\\nWhose warmth and light caught the eye,\\nIt shone bright and bold,\\nWith a fiery gold,\\nAnd brought life to all, as it flew by.', response_metadata={'token_usage': {'completion_tokens': 51, 'prompt_tokens': 18, 'total_tokens': 69, 'completion_time': 0.144614022, 'prompt_time': 0.00585394, 'queue_time': None, 'total_time': 0.150467962}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_2f30b0b571', 'finish_reason': 'stop', 'logprobs': None}, id='run-e42340ba-f0ad-4b54-af61-8308d8ec8256-0')"
]
},
"execution_count": 32,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatGroq(temperature=0, model_name=\"mixtral-8x7b-32768\")\n",
"chat = ChatGroq(temperature=0, model=\"llama3-70b-8192\")\n",
"prompt = ChatPromptTemplate.from_messages([(\"human\", \"Write a Limerick about {topic}\")])\n",
"chain = prompt | chat\n",
"await chain.ainvoke({\"topic\": \"The Sun\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Streaming"
]
},
{
"cell_type": "code",
"execution_count": 33,
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The moon's gentle glow\n",
"Illuminates the night sky\n",
"Peaceful and serene"
"Silvery glow bright\n",
"Luna's gentle light shines down\n",
"Midnight's gentle queen"
]
}
],
"source": [
"chat = ChatGroq(temperature=0, model_name=\"llama2-70b-4096\")\n",
"chat = ChatGroq(temperature=0, model=\"llama3-70b-8192\")\n",
"prompt = ChatPromptTemplate.from_messages([(\"human\", \"Write a haiku about {topic}\")])\n",
"chain = prompt | chat\n",
"for chunk in chain.stream({\"topic\": \"The Moon\"}):\n",
" print(chunk.content, end=\"\", flush=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Passing custom parameters\n",
"\n",
"You can pass other Groq-specific parameters using the `model_kwargs` argument on initialization. Here's an example of enabling JSON mode:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='{ \"response\": \"That\\'s a tough question! There are eight species of bears found in the world, and each one is unique and amazing in its own way. However, if I had to pick one, I\\'d say the giant panda is a popular favorite among many people. Who can resist those adorable black and white markings?\", \"followup_question\": \"Would you like to know more about the giant panda\\'s habitat and diet?\" }', response_metadata={'token_usage': {'completion_tokens': 89, 'prompt_tokens': 50, 'total_tokens': 139, 'completion_time': 0.249032839, 'prompt_time': 0.011134497, 'queue_time': None, 'total_time': 0.260167336}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_2f30b0b571', 'finish_reason': 'stop', 'logprobs': None}, id='run-558ce67e-8c63-43fe-a48f-6ecf181bc922-0')"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chat = ChatGroq(\n",
" model=\"llama3-70b-8192\", model_kwargs={\"response_format\": {\"type\": \"json_object\"}}\n",
")\n",
"\n",
"system = \"\"\"\n",
"You are a helpful assistant.\n",
"Always respond with a JSON object with two string keys: \"response\" and \"followup_question\".\n",
"\"\"\"\n",
"human = \"{question}\"\n",
"prompt = ChatPromptTemplate.from_messages([(\"system\", system), (\"human\", human)])\n",
"\n",
"chain = prompt | chat\n",
"\n",
"chain.invoke({\"question\": \"what bear is best?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -171,7 +307,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
"version": "3.10.5"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@@ -225,7 +225,7 @@
"source": [
"## Chaining\n",
"\n",
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
]
},
{

View File

@@ -13,7 +13,7 @@
"\n",
"## Prerequisites\n",
"\n",
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs."
"You need to have an existing dataset on the Apify platform. If you don't have one, please first check out [this notebook](/docs/integrations/tools/apify) on how to use Apify to extract content from documentation, knowledge bases, help centers, or blogs. This example shows how to load a dataset produced by the [Website Content Crawler](https://apify.com/apify/website-content-crawler)."
]
},
{
@@ -101,8 +101,10 @@
"outputs": [],
"source": [
"from langchain.indexes import VectorstoreIndexCreator\n",
"from langchain_community.document_loaders import ApifyDatasetLoader\n",
"from langchain_core.documents import Document"
"from langchain_community.utilities import ApifyWrapper\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings"
]
},
{
@@ -125,7 +127,7 @@
"metadata": {},
"outputs": [],
"source": [
"index = VectorstoreIndexCreator().from_loaders([loader])"
"index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings()).from_loaders([loader])"
]
},
{
@@ -135,7 +137,7 @@
"outputs": [],
"source": [
"query = \"What is Apify?\"\n",
"result = index.query_with_sources(query)"
"result = index.query_with_sources(query, llm=OpenAI())"
]
},
{

View File

@@ -83,7 +83,7 @@
},
"outputs": [],
"source": [
"loader = ImageCaptionLoader(path_images=list_image_urls)\n",
"loader = ImageCaptionLoader(images=list_image_urls)\n",
"list_docs = loader.load()\n",
"list_docs"
]

View File

@@ -15,7 +15,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install gpudb==7.2.0.1"
"%pip install gpudb==7.2.0.9"
]
},
{
@@ -97,14 +97,14 @@
"# data and the `SCHEMA.TABLE` combination must exist in Kinetica.\n",
"\n",
"QUERY = \"select text, survey_id as source from SCHEMA.TABLE limit 10\"\n",
"snowflake_loader = KineticaLoader(\n",
"kl = KineticaLoader(\n",
" query=QUERY,\n",
" host=HOST,\n",
" username=USERNAME,\n",
" password=PASSWORD,\n",
" metadata_columns=[\"source\"],\n",
")\n",
"kinetica_documents = snowflake_loader.load()\n",
"kinetica_documents = kl.load()\n",
"print(kinetica_documents)"
]
}

View File

@@ -7,140 +7,99 @@
"source": [
"# Recursive URL\n",
"\n",
"We may want to process load all URLs under a root directory.\n",
"\n",
"For example, let's look at the [Python 3.9 Document](https://docs.python.org/3.9/).\n",
"\n",
"This has many interesting child pages that we may want to read in bulk.\n",
"\n",
"Of course, the `WebBaseLoader` can load a list of pages. \n",
"\n",
"But, the challenge is traversing the tree of child pages and actually assembling that list!\n",
" \n",
"We do this using the `RecursiveUrlLoader`.\n",
"\n",
"This also gives us the flexibility to exclude some children, customize the extractor, and more."
"The `RecursiveUrlLoader` lets you recursively scrape all child links from a root URL and parse them into Documents."
]
},
{
"cell_type": "markdown",
"id": "1be8094f",
"id": "947d29e7-3679-483d-973f-79ea3403a370",
"metadata": {},
"source": [
"# Parameters\n",
"- url: str, the target url to crawl.\n",
"- exclude_dirs: Optional[str], webpage directories to exclude.\n",
"- use_async: Optional[bool], wether to use async requests, using async requests is usually faster in large tasks. However, async will disable the lazy loading feature(the function still works, but it is not lazy). By default, it is set to False.\n",
"- extractor: Optional[Callable[[str], str]], a function to extract the text of the document from the webpage, by default it returns the page as it is. It is recommended to use tools like goose3 and beautifulsoup to extract the text. By default, it just returns the page as it is.\n",
"- max_depth: Optional[int] = None, the maximum depth to crawl. By default, it is set to 2. If you need to crawl the whole website, set it to a number that is large enough would simply do the job.\n",
"- timeout: Optional[int] = None, the timeout for each request, in the unit of seconds. By default, it is set to 10.\n",
"- prevent_outside: Optional[bool] = None, whether to prevent crawling outside the root url. By default, it is set to True."
"## Setup\n",
"\n",
"The `RecursiveUrlLoader` lives in the `langchain-community` package. There's no other required packages, though you will get richer default Document metadata if you have ``beautifulsoup4` installed as well."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23c18539",
"id": "23359ab0-8056-4dee-8bff-c38dc079f17f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders.recursive_url_loader import RecursiveUrlLoader"
"%pip install -qU langchain-community beautifulsoup4"
]
},
{
"cell_type": "markdown",
"id": "6384c057",
"id": "07985766-e4e9-4ea1-8a18-924fa4f294e5",
"metadata": {},
"source": [
"Let's try a simple example."
"## Instantiation\n",
"\n",
"Now we can instantiate our document loader object and load Documents:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "55394afe",
"execution_count": 1,
"id": "cb208dcf-9ce9-4197-bc44-b80d20aa4e50",
"metadata": {},
"outputs": [],
"source": [
"from bs4 import BeautifulSoup as Soup\n",
"from langchain_community.document_loaders import RecursiveUrlLoader\n",
"\n",
"url = \"https://docs.python.org/3.9/\"\n",
"loader = RecursiveUrlLoader(\n",
" url=url, max_depth=2, extractor=lambda x: Soup(x, \"html.parser\").text\n",
")\n",
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "084fb2ce",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\n\\n\\n\\n\\nPython Frequently Asked Questions — Python 3.'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[0].page_content[:50]"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "13bd7e16",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'source': 'https://docs.python.org/3.9/library/index.html',\n",
" 'title': 'The Python Standard Library — Python 3.9.17 documentation',\n",
" 'language': None}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[-1].metadata"
" \"https://docs.python.org/3.9/\",\n",
" # max_depth=2,\n",
" # use_async=False,\n",
" # extractor=None,\n",
" # metadata_extractor=None,\n",
" # exclude_dirs=(),\n",
" # timeout=10,\n",
" # check_response_status=True,\n",
" # continue_on_failure=True,\n",
" # prevent_outside=True,\n",
" # base_url=None,\n",
" # ...\n",
")"
]
},
{
"cell_type": "markdown",
"id": "5866e5a6",
"id": "0fac4425-735f-487d-a12b-c8ed2a209039",
"metadata": {},
"source": [
"However, since it's hard to perform a perfect filter, you may still see some irrelevant results in the results. You can perform a filter on the returned documents by yourself, if it's needed. Most of the time, the returned results are good enough."
]
},
{
"cell_type": "markdown",
"id": "4ec8ecef",
"metadata": {},
"source": [
"Testing on LangChain docs."
"## Load\n",
"\n",
"Use ``.load()`` to synchronously load into memory all Documents, with one\n",
"Document per visited URL. Starting from the initial URL, we recurse through\n",
"all linked URLs up to the specified max_depth.\n",
"\n",
"Let's run through a basic example of how to use the `RecursiveUrlLoader` on the [Python 3.9 Documentation](https://docs.python.org/3.9/)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "349b5598",
"id": "a30843c8-4a59-43dc-bf60-f26532f0f8e1",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/bagatur/.pyenv/versions/3.9.1/lib/python3.9/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features=\"xml\"` into the BeautifulSoup constructor.\n",
" k = self.parse_starttag(i)\n"
]
},
{
"data": {
"text/plain": [
"8"
"{'source': 'https://docs.python.org/3.9/',\n",
" 'content_type': 'text/html',\n",
" 'title': '3.9.19 Documentation',\n",
" 'language': None}"
]
},
"execution_count": 2,
@@ -149,10 +108,208 @@
}
],
"source": [
"url = \"https://js.langchain.com/docs/modules/memory/integrations/\"\n",
"loader = RecursiveUrlLoader(url=url)\n",
"docs = loader.load()\n",
"len(docs)"
"docs[0].metadata"
]
},
{
"cell_type": "markdown",
"id": "211856ed-6dd7-46c6-859e-11aaea9093db",
"metadata": {},
"source": [
"Great! The first document looks like the root page we started from. Let's look at the metadata of the next document"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2d842c03-fab8-4097-9f4f-809b2e71c0ba",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'source': 'https://docs.python.org/3.9/using/index.html',\n",
" 'content_type': 'text/html',\n",
" 'title': 'Python Setup and Usage — Python 3.9.19 documentation',\n",
" 'language': None}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"docs[1].metadata"
]
},
{
"cell_type": "markdown",
"id": "f5714ace-7cc5-4c5c-9426-f68342880da0",
"metadata": {},
"source": [
"That url looks like a child of our root page, which is great! Let's move on from metadata to examine the content of one of our documents"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "51dc6c67-6857-4298-9472-08b147f3a631",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"<!DOCTYPE html>\n",
"\n",
"<html xmlns=\"http://www.w3.org/1999/xhtml\">\n",
" <head>\n",
" <meta charset=\"utf-8\" /><title>3.9.19 Documentation</title><meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n",
" \n",
" <link rel=\"stylesheet\" href=\"_static/pydoctheme.css\" type=\"text/css\" />\n",
" <link rel=\n"
]
}
],
"source": [
"print(docs[0].page_content[:300])"
]
},
{
"cell_type": "markdown",
"id": "d87cc239",
"metadata": {},
"source": [
"That certainly looks like HTML that comes from the url https://docs.python.org/3.9/, which is what we expected. Let's now look at some variations we can make to our basic example that can be helpful in different situations. "
]
},
{
"cell_type": "markdown",
"id": "8f41cc89",
"metadata": {},
"source": [
"## Adding an Extractor\n",
"\n",
"By default the loader sets the raw HTML from each link as the Document page content. To parse this HTML into a more human/LLM-friendly format you can pass in a custom ``extractor`` method:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "33a6f5b8",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/var/folders/td/vzm913rx77x21csd90g63_7c0000gn/T/ipykernel_10935/1083427287.py:6: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features=\"xml\"` into the BeautifulSoup constructor.\n",
" soup = BeautifulSoup(html, \"lxml\")\n",
"/Users/isaachershenson/.pyenv/versions/3.11.9/lib/python3.11/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features=\"xml\"` into the BeautifulSoup constructor.\n",
" k = self.parse_starttag(i)\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"3.9.19 Documentation\n",
"\n",
"Download\n",
"Download these documents\n",
"Docs by version\n",
"\n",
"Python 3.13 (in development)\n",
"Python 3.12 (stable)\n",
"Python 3.11 (security-fixes)\n",
"Python 3.10 (security-fixes)\n",
"Python 3.9 (securit\n"
]
}
],
"source": [
"import re\n",
"\n",
"from bs4 import BeautifulSoup\n",
"\n",
"\n",
"def bs4_extractor(html: str) -> str:\n",
" soup = BeautifulSoup(html, \"lxml\")\n",
" return re.sub(r\"\\n\\n+\", \"\\n\\n\", soup.text).strip()\n",
"\n",
"\n",
"loader = RecursiveUrlLoader(\"https://docs.python.org/3.9/\", extractor=bs4_extractor)\n",
"docs = loader.load()\n",
"print(docs[0].page_content[:200])"
]
},
{
"cell_type": "markdown",
"id": "c8e8a826",
"metadata": {},
"source": [
"This looks much nicer!\n",
"\n",
"You can similarly pass in a `metadata_extractor` to customize how Document metadata is extracted from the HTTP response. See the [API reference](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.recursive_url_loader.RecursiveUrlLoader.html) for more on this."
]
},
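As a concrete illustration, here is a minimal sketch of a custom `metadata_extractor`, assuming the callable may take the raw HTML and the page URL and return a metadata dict; the `simple_metadata_extractor` name and the fields it records are hypothetical, not part of the notebook.

```python
# Hypothetical sketch of a custom metadata_extractor: record the source URL and
# the page <title>. Assumes the loader accepts a callable of (raw_html, url) -> dict.
from bs4 import BeautifulSoup
from langchain_community.document_loaders import RecursiveUrlLoader


def simple_metadata_extractor(raw_html: str, url: str) -> dict:
    title_tag = BeautifulSoup(raw_html, "html.parser").find("title")
    return {"source": url, "title": title_tag.get_text() if title_tag else None}


loader = RecursiveUrlLoader(
    "https://docs.python.org/3.9/",
    max_depth=2,
    metadata_extractor=simple_metadata_extractor,
)
docs = loader.load()
```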
{
"cell_type": "markdown",
"id": "1dddbc94",
"metadata": {},
"source": [
"## Lazy loading\n",
"\n",
"If we're loading a large number of Documents and our downstream operations can be done over subsets of all loaded Documents, we can lazily load our Documents one at a time to minimize our memory footprint:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "7d0114fc",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/var/folders/4j/2rz3865x6qg07tx43146py8h0000gn/T/ipykernel_73962/2110507528.py:6: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features=\"xml\"` into the BeautifulSoup constructor.\n",
" soup = BeautifulSoup(html, \"lxml\")\n"
]
}
],
"source": [
"page = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []"
]
},
{
"cell_type": "markdown",
"id": "f88a7c2f-35df-4c3a-b238-f91be2674b96",
"metadata": {},
"source": [
"In this example we never have more than 10 Documents loaded into memory at a time."
]
},
{
"cell_type": "markdown",
"id": "3e4d1c8f",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"These examples show just a few of the ways in which you can modify the default `RecursiveUrlLoader`, but there are many more modifications that can be made to best fit your use case. Using the parameters `link_regex` and `exclude_dirs` can help you filter out unwanted URLs, `aload()` and `alazy_load()` can be used for aynchronous loading, and more.\n",
"\n",
"For detailed information on configuring and calling the ``RecursiveUrlLoader``, please see the API reference: https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.recursive_url_loader.RecursiveUrlLoader.html."
]
}
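For example, here is a small sketch of skipping part of a site with `exclude_dirs`. The specific excluded prefix is illustrative only, and it is treated here as an absolute URL prefix (an assumption about how the filter matches).

```python
# Sketch: skip everything under the FAQ subtree while crawling. The excluded
# prefix below is illustrative; link_regex can similarly restrict which
# discovered links are followed.
from langchain_community.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    "https://docs.python.org/3.9/",
    max_depth=2,
    exclude_dirs=["https://docs.python.org/3.9/faq"],
)
docs = loader.load()
print(len(docs))
```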
],
@@ -172,7 +329,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
"version": "3.11.9"
}
},
"nbformat": 4,

View File

@@ -17,6 +17,7 @@
"- C++ (*)\n",
"- C# (*)\n",
"- COBOL\n",
"- Elixir\n",
"- Go (*)\n",
"- Java (*)\n",
"- JavaScript (requires package `esprima`)\n",

View File

@@ -113,7 +113,7 @@
"\n",
"LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.\n",
"\n",
"- **[Overview](/docs/concepts#langchain-expression-language)**: LCEL and its benefits\n",
"- **[Overview](/docs/concepts#langchain-expression-language-lcel)**: LCEL and its benefits\n",
"- **[Interface](/docs/concepts#interface)**: The standard interface for LCEL objects\n",
"- **[How-to](/docs/expression_language/how_to)**: Key features of LCEL\n",
"- **[Cookbook](/docs/expression_language/cookbook)**: Example code for accomplishing common tasks\n",

View File

@@ -15,47 +15,45 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "427d5745",
"metadata": {},
"source": "from langchain_community.document_loaders import YoutubeLoader",
"outputs": [],
"source": [
"from langchain_community.document_loaders import YoutubeLoader"
]
"execution_count": null
},
{
"cell_type": "code",
"execution_count": null,
"id": "34a25b57",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"%pip install --upgrade --quiet youtube-transcript-api"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "code",
"execution_count": null,
"id": "bc8b308a",
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=False\n",
")"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "code",
"execution_count": null,
"id": "d073dd36",
"metadata": {},
"outputs": [],
"source": [
"loader.load()"
]
],
"outputs": [],
"execution_count": null
},
{
"attachments": {},
@@ -68,26 +66,26 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "ba28af69",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet pytube"
]
],
"outputs": [],
"execution_count": null
},
{
"cell_type": "code",
"execution_count": null,
"id": "9b8ea390",
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=QsYGlZkevEg\", add_video_info=True\n",
")\n",
"loader.load()"
]
],
"outputs": [],
"execution_count": null
},
{
"attachments": {},
@@ -104,10 +102,8 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "08510625",
"metadata": {},
"outputs": [],
"source": [
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=QsYGlZkevEg\",\n",
@@ -116,7 +112,41 @@
" translation=\"en\",\n",
")\n",
"loader.load()"
]
],
"outputs": [],
"execution_count": null
},
{
"metadata": {},
"cell_type": "markdown",
"source": [
"### Get transcripts as timestamped chunks\n",
"\n",
"Get one or more `Document` objects, each containing a chunk of the video transcript. The length of the chunks, in seconds, may be specified. Each chunk's metadata includes a URL of the video on YouTube, which will start the video at the beginning of the specific chunk.\n",
"\n",
"`transcript_format` param: One of the `langchain_community.document_loaders.youtube.TranscriptFormat` values. In this case, `TranscriptFormat.CHUNKS`.\n",
"\n",
"`chunk_size_seconds` param: An integer number of video seconds to be represented by each chunk of transcript data. Default is 120 seconds."
],
"id": "69f4e399a9764d73"
},
{
"metadata": {},
"cell_type": "code",
"source": [
"from langchain_community.document_loaders.youtube import TranscriptFormat\n",
"\n",
"loader = YoutubeLoader.from_youtube_url(\n",
" \"https://www.youtube.com/watch?v=TKCMw0utiak\",\n",
" add_video_info=True,\n",
" transcript_format=TranscriptFormat.CHUNKS,\n",
" chunk_size_seconds=30,\n",
")\n",
"print(\"\\n\\n\".join(map(repr, loader.load())))"
],
"id": "540bbf19182f38bc",
"outputs": [],
"execution_count": null
},
{
"attachments": {},
@@ -142,10 +172,8 @@
},
{
"cell_type": "code",
"execution_count": null,
"id": "c345bc43",
"metadata": {},
"outputs": [],
"source": [
"# Init the GoogleApiClient\n",
"from pathlib import Path\n",
@@ -170,7 +198,9 @@
"\n",
"# returns a list of Documents\n",
"youtube_loader_channel.load()"
]
],
"outputs": [],
"execution_count": null
}
],
"metadata": {

View File

@@ -0,0 +1,420 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Volcengine Reranker\n",
"\n",
"This notebook shows how to use Volcengine Reranker for document compression and retrieval. [Volcengine](https://www.volcengine.com/) is a cloud service platform developed by ByteDance, the parent company of TikTok.\n",
"\n",
"Volcengine's Rerank Service supports reranking up to 50 documents with a maximum of 4000 tokens. For more, please visit [here](https://www.volcengine.com/docs/84313/1254474) and [here](https://www.volcengine.com/docs/84313/1254605)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet volcengine"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet faiss\n",
"\n",
"# OR (depending on Python version)\n",
"\n",
"%pip install --upgrade --quiet faiss-cpu"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# To obtain ak/sk: https://www.volcengine.com/docs/84313/1254488\n",
"\n",
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"VOLC_API_AK\"] = getpass.getpass(\"Volcengine API AK:\")\n",
"os.environ[\"VOLC_API_SK\"] = getpass.getpass(\"Volcengine API SK:\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Helper function for printing docs\n",
"def pretty_print_docs(docs):\n",
" print(\n",
" f\"\\n{'-' * 100}\\n\".join(\n",
" [f\"Document {i+1}:\\n\\n\" + d.page_content for i, d in enumerate(docs)]\n",
" )\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Set up the base vector store retriever\n",
"Let's start by initializing a simple vector store retriever and storing the 2023 State of the Union speech (in chunks). We can set up the retriever to retrieve a high number (20) of docs."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/terminator/Developer/langchain/.venv/lib/python3.11/site-packages/sentence_transformers/cross_encoder/CrossEncoder.py:11: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)\n",
" from tqdm.autonotebook import tqdm, trange\n",
"/Users/terminator/Developer/langchain/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.\n",
" warnings.warn(\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 4:\n",
"\n",
"He will never extinguish their love of freedom. He will never weaken the resolve of the free world. \n",
"\n",
"We meet tonight in an America that has lived through two of the hardest years this nation has ever faced. \n",
"\n",
"The pandemic has been punishing. \n",
"\n",
"And so many families are living paycheck to paycheck, struggling to keep up with the rising cost of food, gas, housing, and so much more. \n",
"\n",
"I understand.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 5:\n",
"\n",
"As Ohio Senator Sherrod Brown says, “Its time to bury the label “Rust Belt.” \n",
"\n",
"Its time. \n",
"\n",
"But with all the bright spots in our economy, record job growth and higher wages, too many families are struggling to keep up with the bills. \n",
"\n",
"Inflation is robbing them of the gains they might otherwise feel. \n",
"\n",
"I get it. Thats why my top priority is getting prices under control.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 6:\n",
"\n",
"A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since shes been nominated, shes received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n",
"\n",
"And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 7:\n",
"\n",
"Its not only the right thing to do—its the economically smart thing to do. \n",
"\n",
"Thats why immigration reform is supported by everyone from labor unions to religious leaders to the U.S. Chamber of Commerce. \n",
"\n",
"Lets get it done once and for all. \n",
"\n",
"Advancing liberty and justice also requires protecting the rights of women. \n",
"\n",
"The constitutional right affirmed in Roe v. Wade—standing precedent for half a century—is under attack as never before.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 8:\n",
"\n",
"I understand. \n",
"\n",
"I remember when my Dad had to leave our home in Scranton, Pennsylvania to find work. I grew up in a family where if the price of food went up, you felt it. \n",
"\n",
"Thats why one of the first things I did as President was fight to pass the American Rescue Plan. \n",
"\n",
"Because people were hurting. We needed to act, and we did. \n",
"\n",
"Few pieces of legislation have done more in a critical moment in our history to lift us out of crisis.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 9:\n",
"\n",
"Third we can end the shutdown of schools and businesses. We have the tools we need. \n",
"\n",
"Its time for Americans to get back to work and fill our great downtowns again. People working from home can feel safe to begin to return to the office. \n",
"\n",
"Were doing that here in the federal government. The vast majority of federal workers will once again work in person. \n",
"\n",
"Our schools are open. Lets keep it that way. Our kids need to be in school.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 10:\n",
"\n",
"He met the Ukrainian people. \n",
"\n",
"From President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n",
"\n",
"Groups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland. \n",
"\n",
"In this struggle as President Zelenskyy said in his speech to the European Parliament “Light will win over darkness.” The Ukrainian Ambassador to the United States is here tonight.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 11:\n",
"\n",
"The widow of Sergeant First Class Heath Robinson. \n",
"\n",
"He was born a soldier. Army National Guard. Combat medic in Kosovo and Iraq. \n",
"\n",
"Stationed near Baghdad, just yards from burn pits the size of football fields. \n",
"\n",
"Heaths widow Danielle is here with us tonight. They loved going to Ohio State football games. He loved building Legos with their daughter. \n",
"\n",
"But cancer from prolonged exposure to burn pits ravaged Heaths lungs and body. \n",
"\n",
"Danielle says Heath was a fighter to the very end.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 12:\n",
"\n",
"Danielle says Heath was a fighter to the very end. \n",
"\n",
"He didnt know how to stop fighting, and neither did she. \n",
"\n",
"Through her pain she found purpose to demand we do better. \n",
"\n",
"Tonight, Danielle—we are. \n",
"\n",
"The VA is pioneering new ways of linking toxic exposures to diseases, already helping more veterans get benefits. \n",
"\n",
"And tonight, Im announcing were expanding eligibility to veterans suffering from nine respiratory cancers.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 13:\n",
"\n",
"We can do all this while keeping lit the torch of liberty that has led generations of immigrants to this land—my forefathers and so many of yours. \n",
"\n",
"Provide a pathway to citizenship for Dreamers, those on temporary status, farm workers, and essential workers. \n",
"\n",
"Revise our laws so businesses have the workers they need and families dont wait decades to reunite. \n",
"\n",
"Its not only the right thing to do—its the economically smart thing to do.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 14:\n",
"\n",
"He rejected repeated efforts at diplomacy. \n",
"\n",
"He thought the West and NATO wouldnt respond. And he thought he could divide us at home. Putin was wrong. We were ready. Here is what we did. \n",
"\n",
"We prepared extensively and carefully. \n",
"\n",
"We spent months building a coalition of other freedom-loving nations from Europe and the Americas to Asia and Africa to confront Putin.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 15:\n",
"\n",
"As Ive told Xi Jinping, it is never a good bet to bet against the American people. \n",
"\n",
"Well create good jobs for millions of Americans, modernizing roads, airports, ports, and waterways all across America. \n",
"\n",
"And well do it all to withstand the devastating effects of the climate crisis and promote environmental justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 16:\n",
"\n",
"Tonight I say to the Russian oligarchs and corrupt leaders who have bilked billions of dollars off this violent regime no more. \n",
"\n",
"The U.S. Department of Justice is assembling a dedicated task force to go after the crimes of Russian oligarchs. \n",
"\n",
"We are joining with our European allies to find and seize your yachts your luxury apartments your private jets. We are coming for your ill-begotten gains.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 17:\n",
"\n",
"Look at cars. \n",
"\n",
"Last year, there werent enough semiconductors to make all the cars that people wanted to buy. \n",
"\n",
"And guess what, prices of automobiles went up. \n",
"\n",
"So—we have a choice. \n",
"\n",
"One way to fight inflation is to drive down wages and make Americans poorer. \n",
"\n",
"I have a better plan to fight inflation. \n",
"\n",
"Lower your costs, not your wages. \n",
"\n",
"Make more cars and semiconductors in America. \n",
"\n",
"More infrastructure and innovation in America. \n",
"\n",
"More goods moving faster and cheaper in America.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 18:\n",
"\n",
"So thats my plan. It will grow the economy and lower costs for families. \n",
"\n",
"So what are we waiting for? Lets get this done. And while youre at it, confirm my nominees to the Federal Reserve, which plays a critical role in fighting inflation. \n",
"\n",
"My plan will not only lower costs to give families a fair shot, it will lower the deficit.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 19:\n",
"\n",
"Let each of us here tonight in this Chamber send an unmistakable signal to Ukraine and to the world. \n",
"\n",
"Please rise if you are able and show that, Yes, we the United States of America stand with the Ukrainian people. \n",
"\n",
"Throughout our history weve learned this lesson when dictators do not pay a price for their aggression they cause more chaos. \n",
"\n",
"They keep moving. \n",
"\n",
"And the costs and the threats to America and the world keep rising.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 20:\n",
"\n",
"Its based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n",
"\n",
"ARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimers, diabetes, and more. \n",
"\n",
"A unity agenda for the nation. \n",
"\n",
"We can do this. \n",
"\n",
"My fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n",
"\n",
"In this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things.\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
"To disable this warning, you can either:\n",
"\t- Avoid using `tokenizers` before the fork if possible\n",
"\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
]
}
],
"source": [
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores.faiss import FAISS\n",
"from langchain_huggingface import HuggingFaceEmbeddings\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"documents = TextLoader(\"../../how_to/state_of_the_union.txt\").load()\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100)\n",
"texts = text_splitter.split_documents(documents)\n",
"retriever = FAISS.from_documents(\n",
" texts, HuggingFaceEmbeddings(model_name=\"all-MiniLM-L6-v2\")\n",
").as_retriever(search_kwargs={\"k\": 20})\n",
"\n",
"query = \"What did the president say about Ketanji Brown Jackson\"\n",
"docs = retriever.invoke(query)\n",
"pretty_print_docs(docs)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reranking with VolcengineRerank\n",
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the `VolcengineRerank` to rerank the returned results."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Document 1:\n",
"\n",
"One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n",
"\n",
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nations top legal minds, who will continue Justice Breyers legacy of excellence.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 2:\n",
"\n",
"As I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n",
"\n",
"While it often appears that we never agree, that isnt true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice.\n",
"----------------------------------------------------------------------------------------------------\n",
"Document 3:\n",
"\n",
"We cannot let this happen. \n",
"\n",
"Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while youre at it, pass the Disclose Act so Americans can know who is funding our elections. \n",
"\n",
"Tonight, Id like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.\n"
]
}
],
"source": [
"from langchain.retrievers import ContextualCompressionRetriever\n",
"from langchain_community.document_compressors.volcengine_rerank import VolcengineRerank\n",
"\n",
"compressor = VolcengineRerank()\n",
"compression_retriever = ContextualCompressionRetriever(\n",
" base_compressor=compressor, base_retriever=retriever\n",
")\n",
"\n",
"compressed_docs = compression_retriever.invoke(\n",
" \"What did the president say about Ketanji Jackson Brown\"\n",
")\n",
"pretty_print_docs(compressed_docs)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -90,7 +90,9 @@
"- `voyage-code-2`\n",
"- `voyage-2`\n",
"- `voyage-law-2`\n",
"- `voyage-lite-02-instruct`"
"- `voyage-lite-02-instruct`\n",
"- `voyage-finance-2`\n",
"- `voyage-multilingual-2`"
]
},
{
@@ -336,7 +338,10 @@
"metadata": {},
"source": [
"## Doing reranking with VoyageAIRerank\n",
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the Voyage AI reranker to rerank the returned results."
"Now let's wrap our base retriever with a `ContextualCompressionRetriever`. We'll use the Voyage AI reranker to rerank the returned results. You can use any of the following Reranking models: ([source](https://docs.voyageai.com/docs/reranker)):\n",
"\n",
"- `rerank-1`\n",
"- `rerank-lite-1`"
]
},
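A minimal sketch of that wiring, assuming `VoyageAIRerank` is importable from `langchain_voyageai` and accepts a `model` name such as `rerank-lite-1`; the `retriever` and query are reused from the earlier setup, and `top_k` is assumed to control how many reranked documents are returned.

```python
# Sketch (not the notebook's own cell): wrap the base retriever with a
# ContextualCompressionRetriever that reranks via Voyage AI.
from langchain.retrievers import ContextualCompressionRetriever
from langchain_voyageai import VoyageAIRerank

compressor = VoyageAIRerank(model="rerank-lite-1", top_k=3)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)

compressed_docs = compression_retriever.invoke(
    "What did the president say about Ketanji Brown Jackson"
)
```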
{

View File

@@ -164,10 +164,10 @@
"text": [
"Node properties:\n",
"- **Movie**\n",
" - `runtime: INTEGER` Min: 120, Max: 120\n",
" - `name: STRING` Available options: ['Top Gun']\n",
" - `runtime`: INTEGER Min: 120, Max: 120\n",
" - `name`: STRING Available options: ['Top Gun']\n",
"- **Actor**\n",
" - `name: STRING` Available options: ['Tom Cruise', 'Val Kilmer', 'Anthony Edwards', 'Meg Ryan']\n",
" - `name`: STRING Available options: ['Tom Cruise', 'Val Kilmer', 'Anthony Edwards', 'Meg Ryan']\n",
"Relationship properties:\n",
"\n",
"The relationships:\n",
@@ -225,7 +225,7 @@
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -234,7 +234,7 @@
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan, Val Kilmer, Tom Cruise played in Top Gun.'}"
" 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 8,
@@ -286,7 +286,7 @@
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -295,7 +295,7 @@
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan played in Top Gun.'}"
" 'result': 'Tom Cruise, Val Kilmer played in Top Gun.'}"
]
},
"execution_count": 10,
@@ -346,11 +346,11 @@
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"Intermediate steps: [{'query': \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\\nWHERE m.name = 'Top Gun'\\nRETURN a.name\"}, {'context': [{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]}]\n",
"Final answer: Anthony Edwards, Meg Ryan, Val Kilmer, Tom Cruise played in Top Gun.\n"
"Intermediate steps: [{'query': \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\\nWHERE m.name = 'Top Gun'\\nRETURN a.name\"}, {'context': [{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]}]\n",
"Final answer: Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.\n"
]
}
],
@@ -406,10 +406,10 @@
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': [{'a.name': 'Anthony Edwards'},\n",
" {'a.name': 'Meg Ryan'},\n",
" 'result': [{'a.name': 'Tom Cruise'},\n",
" {'a.name': 'Val Kilmer'},\n",
" {'a.name': 'Tom Cruise'}]}"
" {'a.name': 'Anthony Edwards'},\n",
" {'a.name': 'Meg Ryan'}]}"
]
},
"execution_count": 14,
@@ -482,7 +482,7 @@
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (:Movie {name:\"Top Gun\"})<-[:ACTED_IN]-()\n",
"\u001b[32;1m\u001b[1;3mMATCH (m:Movie {name:\"Top Gun\"})<-[:ACTED_IN]-()\n",
"RETURN count(*) AS numberOfActors\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'numberOfActors': 4}]\u001b[0m\n",
@@ -494,7 +494,7 @@
"data": {
"text/plain": [
"{'query': 'How many people played in Top Gun?',\n",
" 'result': 'There were 4 actors who played in Top Gun.'}"
" 'result': 'There were 4 actors in Top Gun.'}"
]
},
"execution_count": 16,
@@ -548,7 +548,7 @@
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -557,7 +557,7 @@
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan, Val Kilmer, and Tom Cruise played in Top Gun.'}"
" 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 18,
@@ -661,7 +661,7 @@
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Tom Cruise'}]\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -670,7 +670,7 @@
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'Anthony Edwards, Meg Ryan, Val Kilmer, Tom Cruise played in Top Gun.'}"
" 'result': 'Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan played in Top Gun.'}"
]
},
"execution_count": 22,
@@ -683,12 +683,116 @@
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3fa3f3d5-f7e7-4ca9-8f07-ca22b897f192",
"cell_type": "markdown",
"id": "81093062-eb7f-4d96-b1fd-c36b8f1b9474",
"metadata": {},
"outputs": [],
"source": []
"source": [
"## Provide context from database results as tool/function output\n",
"\n",
"You can use the `use_function_response` parameter to pass context from database results to an LLM as a tool/function output. This method improves the response accuracy and relevance of an answer as the LLM follows the provided context more closely.\n",
"_You will need to use an LLM with native function calling support to use this feature_."
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "2be8f51c-e80a-4a60-ab1c-266450fc17cd",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': 'The main actors in Top Gun are Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan.'}"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
" graph=graph,\n",
" verbose=True,\n",
" use_function_response=True,\n",
")\n",
"chain.invoke({\"query\": \"Who played in Top Gun?\"})"
]
},
{
"cell_type": "markdown",
"id": "48a75785-5bc9-49a7-a41b-88bf3ef9d312",
"metadata": {},
"source": [
"You can provide custom system message when using the function response feature by providing `function_response_system` to instruct the model on how to generate answers.\n",
"\n",
"_Note that `qa_prompt` will have no effect when using `use_function_response`_"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "ddf0a61e-f104-4dbb-abbf-e65f3f57dd9a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
"\u001b[32;1m\u001b[1;3mMATCH (a:Actor)-[:ACTED_IN]->(m:Movie)\n",
"WHERE m.name = 'Top Gun'\n",
"RETURN a.name\u001b[0m\n",
"Full Context:\n",
"\u001b[32;1m\u001b[1;3m[{'a.name': 'Tom Cruise'}, {'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'query': 'Who played in Top Gun?',\n",
" 'result': \"Arrr matey! In the film Top Gun, ye be seein' Tom Cruise, Val Kilmer, Anthony Edwards, and Meg Ryan sailin' the high seas of the sky! Aye, they be a fine crew of actors, they be!\"}"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain = GraphCypherQAChain.from_llm(\n",
" llm=ChatOpenAI(temperature=0, model=\"gpt-3.5-turbo\"),\n",
" graph=graph,\n",
" verbose=True,\n",
" use_function_response=True,\n",
" function_response_system=\"Respond as a pirate!\",\n",
")\n",
"chain.invoke({\"query\": \"Who played in Top Gun?\"})"
]
}
],
"metadata": {

View File

@@ -724,6 +724,83 @@
"llm(\"Tell me joke\")"
]
},
{
"cell_type": "markdown",
"id": "9b2b2777",
"metadata": {},
"source": [
"## `MongoDB Atlas` Cache\n",
"\n",
"[MongoDB Atlas](https://www.mongodb.com/docs/atlas/) is a fully-managed cloud database available in AWS, Azure, and GCP. It has native support for \n",
"Vector Search on the MongoDB document data.\n",
"Use [MongoDB Atlas Vector Search](/docs/integrations/providers/mongodb_atlas) to semantically cache prompts and responses."
]
},
{
"cell_type": "markdown",
"id": "ecdc2a0a",
"metadata": {},
"source": [
"### `MongoDBCache`\n",
"An abstraction to store a simple cache in MongoDB. This does not use Semantic Caching, nor does it require an index to be made on the collection before generation.\n",
"\n",
"To import this cache:\n",
"\n",
"```python\n",
"from langchain_mongodb.cache import MongoDBCache\n",
"```\n",
"\n",
"\n",
"To use this cache with your LLMs:\n",
"```python\n",
"from langchain_core.globals import set_llm_cache\n",
"\n",
"# use any embedding provider...\n",
"from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings\n",
"\n",
"mongodb_atlas_uri = \"<YOUR_CONNECTION_STRING>\"\n",
"COLLECTION_NAME=\"<YOUR_CACHE_COLLECTION_NAME>\"\n",
"DATABASE_NAME=\"<YOUR_DATABASE_NAME>\"\n",
"\n",
"set_llm_cache(MongoDBCache(\n",
" connection_string=mongodb_atlas_uri,\n",
" collection_name=COLLECTION_NAME,\n",
" database_name=DATABASE_NAME,\n",
"))\n",
"```\n",
"\n",
"\n",
"### `MongoDBAtlasSemanticCache`\n",
"Semantic caching allows users to retrieve cached prompts based on semantic similarity between the user input and previously cached results. Under the hood it blends MongoDBAtlas as both a cache and a vectorstore.\n",
"The MongoDBAtlasSemanticCache inherits from `MongoDBAtlasVectorSearch` and needs an Atlas Vector Search Index defined to work. Please look at the [usage example](/docs/integrations/vectorstores/mongodb_atlas) on how to set up the index.\n",
"\n",
"To import this cache:\n",
"```python\n",
"from langchain_mongodb.cache import MongoDBAtlasSemanticCache\n",
"```\n",
"\n",
"To use this cache with your LLMs:\n",
"```python\n",
"from langchain_core.globals import set_llm_cache\n",
"\n",
"# use any embedding provider...\n",
"from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings\n",
"\n",
"mongodb_atlas_uri = \"<YOUR_CONNECTION_STRING>\"\n",
"COLLECTION_NAME=\"<YOUR_CACHE_COLLECTION_NAME>\"\n",
"DATABASE_NAME=\"<YOUR_DATABASE_NAME>\"\n",
"\n",
"set_llm_cache(MongoDBAtlasSemanticCache(\n",
" embedding=FakeEmbeddings(),\n",
" connection_string=mongodb_atlas_uri,\n",
" collection_name=COLLECTION_NAME,\n",
" database_name=DATABASE_NAME,\n",
"))\n",
"```\n",
"\n",
"To find more resources about using MongoDBSemanticCache visit [here](https://www.mongodb.com/blog/post/introducing-semantic-caching-dedicated-mongodb-lang-chain-package-gen-ai-apps)"
]
},
{
"cell_type": "markdown",
"id": "726fe754",
@@ -993,7 +1070,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"CASSANDRA_KEYSPACE = demo_keyspace\n"
@@ -1029,7 +1106,7 @@
"metadata": {},
"outputs": [
{
"name": "stdin",
"name": "stdout",
"output_type": "stream",
"text": [
"ASTRA_DB_ID = 01234567-89ab-cdef-0123-456789abcdef\n",
@@ -2071,6 +2148,71 @@
"# so it uses the cached result!\n",
"llm(\"Tell me one joke\")"
]
},
{
"cell_type": "markdown",
"id": "ae1f5e1c-085e-4998-9f2d-b5867d2c3d5b",
"metadata": {
"execution": {
"iopub.execute_input": "2024-05-31T17:18:43.345495Z",
"iopub.status.busy": "2024-05-31T17:18:43.345015Z",
"iopub.status.idle": "2024-05-31T17:18:43.351003Z",
"shell.execute_reply": "2024-05-31T17:18:43.350073Z",
"shell.execute_reply.started": "2024-05-31T17:18:43.345456Z"
}
},
"source": [
"## Cache classes: summary table"
]
},
{
"cell_type": "markdown",
"id": "65072e45-10bc-40f1-979b-2617656bbbce",
"metadata": {
"execution": {
"iopub.execute_input": "2024-05-31T17:16:05.616430Z",
"iopub.status.busy": "2024-05-31T17:16:05.616221Z",
"iopub.status.idle": "2024-05-31T17:16:05.624164Z",
"shell.execute_reply": "2024-05-31T17:16:05.623673Z",
"shell.execute_reply.started": "2024-05-31T17:16:05.616418Z"
}
},
"source": [
"**Cache** classes are implemented by inheriting the [BaseCache](https://api.python.langchain.com/en/latest/caches/langchain_core.caches.BaseCache.html) class.\n",
"\n",
"This table lists all 20 derived classes with links to the API Reference.\n",
"\n",
"\n",
"| Namespace 🔻 | Class |\n",
"|------------|---------|\n",
"| langchain_astradb.cache | [AstraDBCache](https://api.python.langchain.com/en/latest/cache/langchain_astradb.cache.AstraDBCache.html) |\n",
"| langchain_astradb.cache | [AstraDBSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_astradb.cache.AstraDBSemanticCache.html) |\n",
"| langchain_community.cache | [AstraDBCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.AstraDBCache.html) |\n",
"| langchain_community.cache | [AstraDBSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.AstraDBSemanticCache.html) |\n",
"| langchain_community.cache | [AzureCosmosDBSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.AzureCosmosDBSemanticCache.html) |\n",
"| langchain_community.cache | [CassandraCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.CassandraCache.html) |\n",
"| langchain_community.cache | [CassandraSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.CassandraSemanticCache.html) |\n",
"| langchain_community.cache | [GPTCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.GPTCache.html) |\n",
"| langchain_community.cache | [InMemoryCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.InMemoryCache.html) |\n",
"| langchain_community.cache | [MomentoCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.MomentoCache.html) |\n",
"| langchain_community.cache | [OpenSearchSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.OpenSearchSemanticCache.html) |\n",
"| langchain_community.cache | [RedisSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.RedisSemanticCache.html) |\n",
"| langchain_community.cache | [SQLAlchemyCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.SQLAlchemyCache.html) |\n",
"| langchain_community.cache | [SQLAlchemyMd5Cache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.SQLAlchemyMd5Cache.html) |\n",
"| langchain_community.cache | [UpstashRedisCache](https://api.python.langchain.com/en/latest/cache/langchain_community.cache.UpstashRedisCache.html) |\n",
"| langchain_core.caches | [InMemoryCache](https://api.python.langchain.com/en/latest/caches/langchain_core.caches.InMemoryCache.html) |\n",
"| langchain_elasticsearch.cache | [ElasticsearchCache](https://api.python.langchain.com/en/latest/cache/langchain_elasticsearch.cache.ElasticsearchCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBAtlasSemanticCache](https://api.python.langchain.com/en/latest/cache/langchain_mongodb.cache.MongoDBAtlasSemanticCache.html) |\n",
"| langchain_mongodb.cache | [MongoDBCache](https://api.python.langchain.com/en/latest/cache/langchain_mongodb.cache.MongoDBCache.html) |\n"
]
},
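   {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (not an official implementation), a custom cache only needs to implement `lookup`, `update` and `clear`:\n",
    "\n",
    "```python\n",
    "from typing import Any, Dict, Optional, Tuple\n",
    "\n",
    "from langchain_core.caches import BaseCache\n",
    "\n",
    "\n",
    "class DictCache(BaseCache):\n",
    "    \"\"\"Toy cache backed by a plain dict (illustrative only).\"\"\"\n",
    "\n",
    "    def __init__(self) -> None:\n",
    "        self._store: Dict[Tuple[str, str], Any] = {}\n",
    "\n",
    "    def lookup(self, prompt: str, llm_string: str) -> Optional[Any]:\n",
    "        # Return cached generations for this prompt/LLM pair, or None on a miss\n",
    "        return self._store.get((prompt, llm_string))\n",
    "\n",
    "    def update(self, prompt: str, llm_string: str, return_val: Any) -> None:\n",
    "        # Store the generations produced for this prompt/LLM pair\n",
    "        self._store[(prompt, llm_string)] = return_val\n",
    "\n",
    "    def clear(self, **kwargs: Any) -> None:\n",
    "        self._store.clear()\n",
    "```"
   ]
   },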
{
"cell_type": "code",
"execution_count": null,
"id": "19067f14-c69a-4156-9504-af43a0713669",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -2089,7 +2231,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.12"
}
},
"nbformat": 4,

View File

@@ -12,6 +12,17 @@
"This example goes over how to use LangChain to interact with Aleph Alpha models"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "84483bd5",
"metadata": {},
"outputs": [],
"source": [
"# Installing the langchain package needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -9,6 +9,16 @@
">[Machine Learning Platform for AI of Alibaba Cloud](https://www.alibabacloud.com/help/en/pai) is a machine learning or deep learning engineering platform intended for enterprises and developers. It provides easy-to-use, cost-effective, high-performance, and easy-to-scale plug-ins that can be applied to various industry scenarios. With over 140 built-in optimization algorithms, `Machine Learning Platform for AI` provides whole-process AI engineering capabilities including data labeling (`PAI-iTAG`), model building (`PAI-Designer` and `PAI-DSW`), model training (`PAI-DLC`), compilation optimization, and inference deployment (`PAI-EAS`). `PAI-EAS` supports different types of hardware resources, including CPUs and GPUs, and features high throughput and low latency. It allows you to deploy large-scale complex models with a few clicks and perform elastic scale-ins and scale-outs in real time. It also provides a comprehensive O&M and monitoring system."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 8,

View File

@@ -16,6 +16,16 @@
    ">`API Gateway` handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, CORS support, authorization and access control, throttling, monitoring, and API version management. `API Gateway` has no minimum fees or startup costs. You pay for the API calls you receive and the amount of data transferred out and, with the `API Gateway` tiered pricing model, you can reduce your cost as your API usage scales."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -3,10 +3,15 @@
{
"cell_type": "raw",
"id": "602a52a4",
"metadata": {},
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_label: Anthropic\n",
"sidebar_class_name: hidden\n",
"---"
]
},
@@ -17,9 +22,13 @@
"source": [
"# AnthropicLLM\n",
"\n",
"This example goes over how to use LangChain to interact with `Anthropic` models.\n",
":::caution\n",
"You are currently on a page documenting the use of Anthropic legacy Claude 2 models as [text completion models](/docs/concepts/#llms). The latest and most popular Anthropic models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"NOTE: AnthropicLLM only supports legacy Claude 2 models. To use the newest Claude 3 models, please use [`ChatAnthropic`](/docs/integrations/chat/anthropic) instead.\n",
"You are probably looking for [this page instead](/docs/integrations/chat/anthropic/).\n",
":::\n",
"\n",
"This example goes over how to use LangChain to interact with `Anthropic` models.\n",
"\n",
"## Installation"
]

View File

@@ -12,6 +12,17 @@
"This example goes over how to use LangChain to interact with [Anyscale Endpoint](https://app.endpoints.anyscale.com/). "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "134bd228",
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -18,6 +18,17 @@
"To use, you should have the `aphrodite-engine` python package installed."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4dba1074",
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -8,6 +8,16 @@
"This notebook demonstrates how to use the `Arcee` class for generating text using Arcee's Domain Adapted Language Models (DALMs)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -11,6 +11,16 @@
"This notebook goes over how to use an LLM hosted on an `Azure ML Online Endpoint`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -7,7 +7,13 @@
"source": [
"# Azure OpenAI\n",
"\n",
"This notebook goes over how to use Langchain with [Azure OpenAI](https://aka.ms/azure-openai).\n",
":::caution\n",
"You are currently on a page documenting the use of Azure OpenAI [text completion models](/docs/concepts/#llms). The latest and most popular Azure OpenAI models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for [this page instead](/docs/integrations/chat/azure_chat_openai/).\n",
":::\n",
"\n",
"This page goes over how to use LangChain with [Azure OpenAI](https://aka.ms/azure-openai).\n",
"\n",
"The Azure OpenAI API is compatible with OpenAI's API. The `openai` Python package makes it easy to use both OpenAI and Azure OpenAI. You can call Azure OpenAI the same way you call OpenAI with the exceptions noted below.\n",
"\n",

View File

@@ -8,6 +8,16 @@
"Baichuan Inc. (https://www.baichuan-ai.com/) is a Chinese startup in the era of AGI, dedicated to addressing fundamental human needs: Efficiency, Health, and Happiness."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "markdown",
"metadata": {},

View File

@@ -45,6 +45,16 @@
"- AquilaChat-7B"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": 2,

View File

@@ -12,6 +12,16 @@
"This example goes over how to use LangChain to interact with Banana models"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -45,6 +45,16 @@
"In this example, we'll work with Mistral 7B. [Deploy Mistral 7B here](https://app.baseten.co/explore/mistral_7b_instruct) and follow along with the deployed model's ID, found in the model dashboard."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"##Installing the langchain packages needed to use the integration\n",
"%pip install -qU langchain-community"
]
},
{
"cell_type": "code",
"execution_count": null,

View File

@@ -11,6 +11,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
":::caution\n",
"You are currently on a page documenting the use of Amazon Bedrock models as [text completion models](/docs/concepts/#llms). Many popular models available on Bedrock are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/bedrock/).\n",
":::\n",
"\n",
">[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that offers a choice of \n",
"> high-performing foundation models (FMs) from leading AI companies like `AI21 Labs`, `Anthropic`, `Cohere`, \n",
"> `Meta`, `Stability AI`, and `Amazon` via a single API, along with a broad set of capabilities you need to \n",

View File

@@ -7,6 +7,12 @@
"source": [
"# Cohere\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Cohere models as [text completion models](/docs/concepts/#llms). Many popular Cohere models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/cohere/).\n",
":::\n",
"\n",
">[Cohere](https://cohere.ai/about) is a Canadian startup that provides natural language processing models that help companies improve human-machine interactions.\n",
"\n",
"Head to the [API reference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.cohere.Cohere.html) for detailed documentation of all attributes and methods."
@@ -193,7 +199,7 @@
"id": "39198f7d-6fc8-4662-954a-37ad38c4bec4",
"metadata": {},
"source": [
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
]
},
{

View File

@@ -7,6 +7,12 @@
"source": [
"# Fireworks\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Fireworks models as [text completion models](/docs/concepts/#llms). Many popular Fireworks models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/fireworks/).\n",
":::\n",
"\n",
">[Fireworks](https://app.fireworks.ai/) accelerates product development on generative AI by creating an innovative AI experiment and production platform. \n",
"\n",
"This example goes over how to use LangChain to interact with `Fireworks` models."

View File

@@ -25,6 +25,12 @@
"id": "bead5ede-d9cc-44b9-b062-99c90a10cf40",
"metadata": {},
"source": [
":::caution\n",
"You are currently on a page documenting the use of Google models as [text completion models](/docs/concepts/#llms). Many popular Google models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/google_generative_ai/).\n",
":::\n",
"\n",
"A guide on using [Google Generative AI](https://developers.generativeai.google/) models with Langchain. Note: It's separate from Google Cloud Vertex AI [integration](/docs/integrations/llms/google_vertex_ai_palm)."
]
},

View File

@@ -15,6 +15,12 @@
"source": [
"# Google Cloud Vertex AI\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Google Vertex [text completion models](/docs/concepts/#llms). Many Google models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/google_vertex_ai_palm/).\n",
":::\n",
"\n",
"**Note:** This is separate from the `Google Generative AI` integration, it exposes [Vertex AI Generative API](https://cloud.google.com/vertex-ai/docs/generative-ai/learn/overview) on `Google Cloud`.\n",
"\n",
"VertexAI exposes all foundational models available in google cloud:\n",
@@ -328,7 +334,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language)"
"You can also easily combine with a prompt template for easy structuring of user input. We can do this using [LCEL](/docs/concepts#langchain-expression-language-lcel)"
]
},
{

View File

@@ -6,6 +6,12 @@
"source": [
"# Ollama\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Ollama models as [text completion models](/docs/concepts/#llms). Many popular Ollama models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/ollama/).\n",
":::\n",
"\n",
"[Ollama](https://ollama.ai/) allows you to run open-source large language models, such as Llama 2, locally.\n",
"\n",
"Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. \n",

View File

@@ -7,6 +7,12 @@
"source": [
"# OpenAI\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of OpenAI [text completion models](/docs/concepts/#llms). The latest and most popular OpenAI models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for [this page instead](/docs/integrations/chat/openai/).\n",
":::\n",
"\n",
"[OpenAI](https://platform.openai.com/docs/introduction) offers a spectrum of models with different levels of power suitable for different tasks.\n",
"\n",
"This example goes over how to use LangChain to interact with `OpenAI` [models](https://platform.openai.com/docs/models)"

View File

@@ -7,6 +7,12 @@
"source": [
"# Together AI\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Together AI models as [text completion models](/docs/concepts/#llms). Many popular Together AI models are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/together/).\n",
":::\n",
"\n",
"[Together AI](https://www.together.ai/) offers an API to query [50+ leading open-source models](https://docs.together.ai/docs/inference-models) in a couple lines of code.\n",
"\n",
"This example goes over how to use LangChain to interact with Together AI models."

View File

@@ -14,6 +14,18 @@ pip install -U langchain-anthropic
You need to set the `ANTHROPIC_API_KEY` environment variable.
You can get an Anthropic API key [here](https://console.anthropic.com/settings/keys)
## Chat Models
### ChatAnthropic
See a [usage example](/docs/integrations/chat/anthropic).
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model='claude-3-opus-20240229')
```
## LLMs
### [Legacy] AnthropicLLM
@@ -28,17 +40,3 @@ from langchain_anthropic import AnthropicLLM
model = AnthropicLLM(model='claude-2.1')
```
## Chat Models
### ChatAnthropic
See a [usage example](/docs/integrations/chat/anthropic).
```python
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model='claude-3-opus-20240229')
```

View File

@@ -2,45 +2,10 @@
All functionality related to [Google Cloud Platform](https://cloud.google.com/) and other `Google` products.
## LLMs
## Chat models
We recommend that individual developers start with the Gemini API (`langchain-google-genai`) and move to Vertex AI (`langchain-google-vertexai`) when they need access to commercial support and higher rate limits. If you're already Cloud-friendly or Cloud-native, you can get started in Vertex AI straight away.
Please find more information [here](https://ai.google.dev/gemini-api/docs/migrate-to-cloud).
### Google Generative AI
Access GoogleAI `Gemini` models such as `gemini-pro` and `gemini-pro-vision` through the `GoogleGenerativeAI` class.
Install python package.
```bash
pip install langchain-google-genai
```
See a [usage example](/docs/integrations/llms/google_ai).
```python
from langchain_google_genai import GoogleGenerativeAI
```
### Vertex AI Model Garden
Access `PaLM` and hundreds of OSS models via `Vertex AI Model Garden` service.
We need to install `langchain-google-vertexai` python package.
```bash
pip install langchain-google-vertexai
```
See a [usage example](/docs/integrations/llms/google_vertex_ai_palm#vertex-model-garden).
```python
from langchain_google_vertexai import VertexAIModelGarden
```
## Chat models
Please see [here](https://ai.google.dev/gemini-api/docs/migrate-to-cloud) for more information.
### Google Generative AI
@@ -107,6 +72,40 @@ See a [usage example](/docs/integrations/chat/google_vertex_ai_palm).
from langchain_google_vertexai import ChatVertexAI
```
## LLMs
### Google Generative AI
Access GoogleAI `Gemini` models such as `gemini-pro` and `gemini-pro-vision` through the `GoogleGenerativeAI` class.
Install python package.
```bash
pip install langchain-google-genai
```
See a [usage example](/docs/integrations/llms/google_ai).
```python
from langchain_google_genai import GoogleGenerativeAI
```
### Vertex AI Model Garden
Access `PaLM` and hundreds of OSS models via `Vertex AI Model Garden` service.
We need to install `langchain-google-vertexai` python package.
```bash
pip install langchain-google-vertexai
```
See a [usage example](/docs/integrations/llms/google_vertex_ai_palm#vertex-model-garden).
```python
from langchain_google_vertexai import VertexAIModelGarden
```
## Embedding models
### Google Generative AI Embeddings

View File

@@ -24,6 +24,7 @@ These providers have standalone `langchain-{provider}` packages for improved ver
- [Anthropic](/docs/integrations/platforms/anthropic)
- [Astra DB](/docs/integrations/providers/astradb)
- [Cohere](/docs/integrations/providers/cohere)
- [Couchbase](/docs/integrations/providers/couchbase)
- [Elasticsearch](/docs/integrations/providers/elasticsearch)
- [Exa Search](/docs/integrations/providers/exa_search)
- [Fireworks](/docs/integrations/providers/fireworks)

View File

@@ -6,24 +6,6 @@ keywords: [azure]
All functionality related to `Microsoft Azure` and other `Microsoft` products.
## LLMs
### Azure ML
See a [usage example](/docs/integrations/llms/azure_ml).
```python
from langchain_community.llms.azureml_endpoint import AzureMLOnlineEndpoint
```
### Azure OpenAI
See a [usage example](/docs/integrations/llms/azure_openai).
```python
from langchain_openai import AzureOpenAI
```
## Chat Models
### Azure OpenAI
@@ -51,6 +33,24 @@ See a [usage example](/docs/integrations/chat/azure_chat_openai)
from langchain_openai import AzureChatOpenAI
```
## LLMs
### Azure ML
See a [usage example](/docs/integrations/llms/azure_ml).
```python
from langchain_community.llms.azureml_endpoint import AzureMLOnlineEndpoint
```
### Azure OpenAI
See a [usage example](/docs/integrations/llms/azure_openai).
```python
from langchain_openai import AzureOpenAI
```
## Embedding Models
### Azure OpenAI
@@ -225,7 +225,7 @@ from langchain_community.document_loaders.onenote import OneNoteLoader
## Vector stores
### Azure Cosmos DB
### Azure Cosmos DB MongoDB vCore
>[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/) makes it easy to create a database with full native MongoDB support.
> You can apply your MongoDB experience and continue to use your favorite MongoDB drivers, SDKs, and tools by pointing your application to the API for MongoDB vCore account's connection string.
@@ -255,6 +255,38 @@ See a [usage example](/docs/integrations/vectorstores/azure_cosmos_db).
from langchain_community.vectorstores import AzureCosmosDBVectorSearch
```
### Azure Cosmos DB NoSQL
>[Azure Cosmos DB for NoSQL](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/vector-search) now offers vector indexing and search in preview.
This feature is designed to handle high-dimensional vectors, enabling efficient and accurate vector search at any scale. You can now store vectors
directly in the documents alongside your data. This means that each document in your database can contain not only traditional schema-free data,
but also high-dimensional vectors as other properties of the documents. This colocation of data and vectors allows for efficient indexing and searching,
as the vectors are stored in the same logical unit as the data they represent. This simplifies data management, AI application architectures, and the
efficiency of vector-based operations.
#### Installation and Setup
See [detail configuration instructions](/docs/integrations/vectorstores/azure_cosmos_db_no_sql).
We need to install `azure-cosmos` python package.
```bash
pip install azure-cosmos
```
#### Deploy Azure Cosmos DB on Microsoft Azure
Azure Cosmos DB offers a solution for modern apps and intelligent workloads by being very responsive with dynamic and elastic autoscale. It is available
in every Azure region and can automatically replicate data closer to users. It has SLA guaranteed low-latency and high availability.
[Sign Up](https://learn.microsoft.com/en-us/azure/cosmos-db/nosql/quickstart-python?pivots=devcontainer-codespace) for free to get started today.
See a [usage example](/docs/integrations/vectorstores/azure_cosmos_db_no_sql).
```python
from langchain_community.vectorstores import AzureCosmosDBNoSQLVectorSearch
```
## Retrievers
### Azure AI Search

View File

@@ -25,6 +25,19 @@ pip install langchain-openai
Get an OpenAI api key and set it as an environment variable (`OPENAI_API_KEY`)
## Chat model
See a [usage example](/docs/integrations/chat/openai).
```python
from langchain_openai import ChatOpenAI
```
If you are using a model hosted on `Azure`, you should use different wrapper for that:
```python
from langchain_openai import AzureChatOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/chat/azure_chat_openai).
## LLM
@@ -38,21 +51,7 @@ If you are using a model hosted on `Azure`, you should use different wrapper for
```python
from langchain_openai import AzureOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/llms/azure_openai)
## Chat model
See a [usage example](/docs/integrations/chat/openai).
```python
from langchain_openai import ChatOpenAI
```
If you are using a model hosted on `Azure`, you should use different wrapper for that:
```python
from langchain_openai import AzureChatOpenAI
```
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/chat/azure_chat_openai)
For a more detailed walkthrough of the `Azure` wrapper, see [here](/docs/integrations/llms/azure_openai).
## Embedding Model

View File

@@ -6,12 +6,19 @@
## Installation and Setup
We have to install the `couchbase`package.
We have to install the `langchain-couchbase` package.
```bash
pip install couchbase
pip install langchain-couchbase
```
## Vector Store
See a [usage example](/docs/integrations/vectorstores/couchbase).
```python
from langchain_couchbase import CouchbaseVectorStore
```
## Document loader

View File

@@ -8,7 +8,7 @@ Vearch Python SDK enables vearch to use locally. Vearch python sdk can be instal
# Vectorstore
Vearch also can used as vectorstore. Most detalis in [this notebook](/docs/integrations/vectorstores/vearch)
Vearch can also be used as a vectorstore. Most details are in [this notebook](/docs/integrations/vectorstores/vearch)
```python
from langchain_community.vectorstores import Vearch

View File

@@ -22,7 +22,7 @@
"outputs": [],
"source": [
"# Please ensure that this connector is installed in your working environment.\n",
"%pip install gpudb==7.2.0.1"
"%pip install gpudb==7.2.0.9"
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -33,7 +33,9 @@
"- `voyage-code-2`\n",
"- `voyage-2`\n",
"- `voyage-law-2`\n",
"- `voyage-large-2-instruct`"
"- `voyage-large-2-instruct`\n",
"- `voyage-finance-2`\n",
"- `voyage-multilingual-2`"
]
},
{

View File

@@ -42,14 +42,16 @@
"source": [
"from langchain.indexes import VectorstoreIndexCreator\n",
"from langchain_community.utilities import ApifyWrapper\n",
"from langchain_core.documents import Document"
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAI\n",
"from langchain_openai.embeddings import OpenAIEmbeddings"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Initialize it using your [Apify API token](https://console.apify.com/account/integrations) and for the purpose of this example, also with your OpenAI API key:"
"Initialize it using your [Apify API token](https://docs.apify.com/platform/integrations/api#api-token) and for the purpose of this example, also with your OpenAI API key:"
]
},
{
@@ -103,7 +105,7 @@
"metadata": {},
"outputs": [],
"source": [
"index = VectorstoreIndexCreator().from_loaders([loader])"
"index = VectorstoreIndexCreator(embedding=OpenAIEmbeddings()).from_loaders([loader])"
]
},
{
@@ -120,7 +122,7 @@
"outputs": [],
"source": [
"query = \"What is LangChain?\"\n",
"result = index.query_with_sources(query)"
"result = index.query_with_sources(query, llm=OpenAI())"
]
},
{
@@ -160,7 +162,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
"version": "3.11.3"
}
},
"nbformat": 4,

View File

@@ -527,9 +527,54 @@
"## 6. Create Additional Chain Components\n",
"As usual, declare the other parts of the chain. In this case, it's just a prompt template and an LLM.\n",
"\n",
"You can use any [LangChain compatible LLM](https://python.langchain.com/v0.1/docs/integrations/llms/) in the chain. In this example, we use a [Mixtral8x7b NIM from NVIDIA](https://python.langchain.com/v0.2/docs/integrations/chat/nvidia_ai_endpoints/). NVIDIA NIMs are supported in LangChain via the `langchain-nvidia-ai-endpoints` package, so you can easily build applications with best in class throughput and latency. \n",
"\n",
"LangChain compatible NVIDIA LLMs from [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) can also be used by following these [instructions](https://python.langchain.com/docs/integrations/chat/nvidia_ai_endpoints). "
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "7fb27b941602401d91542211134fc71a",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-nvidia-ai-endpoints"
]
},
{
"cell_type": "markdown",
"id": "1744eec9",
"metadata": {},
"source": [
"Follow the [instructions for LangChain](https://python.langchain.com/v0.2/docs/integrations/chat/nvidia_ai_endpoints/) to use NVIDIA NIM in your speech-enabled LangChain application. \n",
"\n",
"Set your key for NVIDIA API catalog, where NIMs are hosted for you to try."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0e37bdab",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"nvapi_key = getpass.getpass(\"NVAPI Key (starts with nvapi-): \")\n",
"assert nvapi_key.startswith(\"nvapi-\"), f\"{nvapi_key[:5]}... is not a valid key\"\n",
"os.environ[\"NVIDIA_API_KEY\"] = nvapi_key"
]
},
{
"cell_type": "markdown",
"id": "c754acb0",
"metadata": {},
"source": [
"Instantiate LLM."
]
},
{
"cell_type": "code",
"execution_count": 7,
@@ -538,10 +583,11 @@
"outputs": [],
"source": [
"from langchain_core.prompts import PromptTemplate\n",
"from langchain_openai import OpenAI\n",
"from langchain_nvidia_ai_endpoints import ChatNVIDIA\n",
"\n",
"prompt = PromptTemplate.from_template(\"{user_input}\")\n",
"llm = OpenAI(openai_api_key=\"sk-xxx\")"
"\n",
"llm = ChatNVIDIA(model=\"mistralai/mixtral-8x7b-instruct-v0.1\")"
]
},
{

View File

@@ -3,11 +3,9 @@
{
"cell_type": "markdown",
"id": "245c0aa70db77606",
"metadata": {
"collapsed": false
},
"metadata": {},
"source": [
"# Azure Cosmos DB\n",
"# Azure Cosmos DB Mongo vCore\n",
"\n",
"This notebook shows you how to leverage this integrated [vector database](https://learn.microsoft.com/en-us/azure/cosmos-db/vector-database) to store documents in collections, create indicies and perform vector search queries using approximate nearest neighbor algorithms such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product) to locate documents close to the query vectors. \n",
" \n",
@@ -22,9 +20,7 @@
{
"cell_type": "markdown",
"id": "8c493e205ce1dda5",
"metadata": {
"collapsed": false
},
"metadata": {},
"source": []
},
{
@@ -35,8 +31,7 @@
"ExecuteTime": {
"end_time": "2024-02-08T18:25:05.278480Z",
"start_time": "2024-02-08T18:24:51.560677Z"
},
"collapsed": false
}
},
"outputs": [
{
@@ -62,8 +57,7 @@
"ExecuteTime": {
"end_time": "2024-02-08T18:25:56.926147Z",
"start_time": "2024-02-08T18:25:56.900087Z"
},
"collapsed": false
}
},
"outputs": [],
"source": [
@@ -78,9 +72,7 @@
{
"cell_type": "markdown",
"id": "f2e66b097c6ce2e3",
"metadata": {
"collapsed": false
},
"metadata": {},
"source": [
"We want to use `OpenAIEmbeddings` so we need to set up our Azure OpenAI API Key alongside other environment variables. "
]
@@ -93,8 +85,7 @@
"ExecuteTime": {
"end_time": "2024-02-08T18:26:06.558294Z",
"start_time": "2024-02-08T18:26:06.550008Z"
},
"collapsed": false
}
},
"outputs": [],
"source": [
@@ -114,9 +105,7 @@
{
"cell_type": "markdown",
"id": "ebaa28c6e2b35063",
"metadata": {
"collapsed": false
},
"metadata": {},
"source": [
"Now, we need to load the documents into the collection, create the index and then run our queries against the index to retrieve matches.\n",
"\n",
@@ -131,8 +120,7 @@
"ExecuteTime": {
"end_time": "2024-02-08T18:27:00.782280Z",
"start_time": "2024-02-08T18:26:47.339151Z"
},
"collapsed": false
}
},
"outputs": [],
"source": [
@@ -172,8 +160,7 @@
"ExecuteTime": {
"end_time": "2024-02-08T18:31:13.486173Z",
"start_time": "2024-02-08T18:30:54.175890Z"
},
"collapsed": false
}
},
"outputs": [
{
@@ -236,8 +223,7 @@
"ExecuteTime": {
"end_time": "2024-02-08T18:31:47.468902Z",
"start_time": "2024-02-08T18:31:46.053602Z"
},
"collapsed": false
}
},
"outputs": [],
"source": [
@@ -254,8 +240,7 @@
"ExecuteTime": {
"end_time": "2024-02-08T18:31:50.982598Z",
"start_time": "2024-02-08T18:31:50.977605Z"
},
"collapsed": false
}
},
"outputs": [
{
@@ -279,9 +264,7 @@
{
"cell_type": "markdown",
"id": "37e4df8c7d7db851",
"metadata": {
"collapsed": false
},
"metadata": {},
"source": [
"Once the documents have been loaded and the index has been created, you can now instantiate the vector store directly and run queries against the index"
]
@@ -294,8 +277,7 @@
"ExecuteTime": {
"end_time": "2024-02-08T18:32:14.299599Z",
"start_time": "2024-02-08T18:32:12.923464Z"
},
"collapsed": false
}
},
"outputs": [
{
@@ -332,8 +314,7 @@
"ExecuteTime": {
"end_time": "2024-02-08T18:32:24.021434Z",
"start_time": "2024-02-08T18:32:22.867658Z"
},
"collapsed": false
}
},
"outputs": [
{
@@ -366,30 +347,28 @@
"cell_type": "code",
"execution_count": null,
"id": "b63c73c7e905001c",
"metadata": {
"collapsed": false
},
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.6"
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,

File diff suppressed because one or more lines are too long

View File

@@ -23,17 +23,17 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"id": "bec8d532-fec7-4dc7-9be3-020aa7bdb01f",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain langchain-openai langchain-community couchbase"
"%pip install --upgrade --quiet langchain langchain-openai langchain-couchbase"
]
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 2,
"id": "4a972cbc-bf59-46eb-9b50-e5dc3a69dcf0",
"metadata": {},
"outputs": [],
@@ -59,7 +59,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import CouchbaseVectorStore\n",
"from langchain_couchbase.vectorstores import CouchbaseVectorStore\n",
"from langchain_openai import OpenAIEmbeddings"
]
},

View File

@@ -61,7 +61,7 @@
"source": [
"# Pip install necessary package\n",
"%pip install --upgrade --quiet langchain-openai langchain-community\n",
"%pip install gpudb==7.2.0.1\n",
"%pip install gpudb==7.2.0.9\n",
"%pip install --upgrade --quiet tiktoken"
]
},

View File

@@ -390,4 +390,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}

View File

@@ -7,29 +7,33 @@
"source": [
"# MongoDB Atlas\n",
"\n",
">[MongoDB Atlas](https://www.mongodb.com/docs/atlas/) is a fully-managed cloud database available in AWS, Azure, and GCP. It now has support for native Vector Search on your MongoDB document data.\n",
"This notebook covers how to MongoDB Atlas vector search in LangChain, using the `langchain-mongodb` package.\n",
"\n",
"You'll need to install `langchain-community` with `pip install -qU langchain-community` to use this integration\n",
">[MongoDB Atlas](https://www.mongodb.com/docs/atlas/) is a fully-managed cloud database available in AWS, Azure, and GCP. It supports native Vector Search and full text search (BM25) on your MongoDB document data.\n",
"\n",
"This notebook shows how to use [MongoDB Atlas Vector Search](https://www.mongodb.com/products/platform/atlas-vector-search) to store your embeddings in MongoDB documents, create a vector search index, and perform KNN search with an approximate nearest neighbor algorithm (`Hierarchical Navigable Small Worlds`). It uses the [$vectorSearch MQL Stage](https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-overview/). \n",
"\n",
"\n",
"To use MongoDB Atlas, you must first deploy a cluster. We have a Forever-Free tier of clusters available. To get started head over to Atlas here: [quick start](https://www.mongodb.com/docs/atlas/getting-started/).\n",
"\n",
" "
    ">[MongoDB Atlas Vector Search](https://www.mongodb.com/products/platform/atlas-vector-search) allows you to store your embeddings in MongoDB documents, create a vector search index, and perform KNN search with an approximate nearest neighbor algorithm (`Hierarchical Navigable Small Worlds`). It uses the [$vectorSearch MQL Stage](https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-overview/). "
]
},
{
"cell_type": "markdown",
"id": "5abfec15",
"id": "359b8e9b",
"metadata": {},
"source": [
"> Note: \n",
"> \n",
">* More documentation can be found at [LangChain-MongoDB site](https://www.mongodb.com/docs/atlas/atlas-vector-search/ai-integrations/langchain/)\n",
">* This feature is Generally Available and ready for production deployments.\n",
">* The langchain version 0.0.305 ([release notes](https://github.com/langchain-ai/langchain/releases/tag/v0.0.305)) introduces the support for $vectorSearch MQL stage, which is available with MongoDB Atlas 6.0.11 and 7.0.2. Users utilizing earlier versions of MongoDB Atlas need to pin their LangChain version to <=0.0.304\n",
"> "
"## Prerequisites\n",
    ">* An Atlas cluster running MongoDB version 6.0.11, 7.0.2, or later (including RCs).\n",
    "\n",
    ">* An OpenAI API key. You must have a paid OpenAI account with credits available for API requests.\n",
    "\n",
    "You'll need to install `langchain-mongodb` to use this integration."
]
},
{
"cell_type": "markdown",
"id": "d899e588",
"metadata": {},
"source": [
"## Setting up MongoDB Atlas Cluster\n",
"To use MongoDB Atlas, you must first deploy a cluster. We have a Forever-Free tier of clusters available. To get started head over to Atlas here: [quick start](https://www.mongodb.com/docs/atlas/getting-started/)."
]
},
{
@@ -37,6 +41,7 @@
"id": "1b5ce18d",
"metadata": {},
"source": [
"## Usage\n",
"In the notebook we will demonstrate how to perform `Retrieval Augmented Generation` (RAG) using MongoDB Atlas, OpenAI and Langchain. We will be performing Similarity Search, Similarity Search with Metadata Pre-Filtering, and Question Answering over the PDF document for [GPT 4 technical report](https://arxiv.org/pdf/2303.08774.pdf) that came out in March 2023 and hence is not part of the OpenAI's Large Language Model(LLM)'s parametric memory, which had a knowledge cutoff of September 2021."
]
},
@@ -76,7 +81,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain pypdf pymongo langchain-openai tiktoken"
"%pip install --upgrade --quiet langchain langchain-mongodb pypdf pymongo langchain-openai tiktoken"
]
},
{
@@ -411,6 +416,18 @@
"source": [
"GPT-4 requires significantly more compute than earlier GPT models. On a dataset derived from OpenAI's internal codebase, GPT-4 requires 100p (petaflops) of compute to reach the lowest loss, while the smaller models require 1-10n (nanoflops)."
]
},
{
"cell_type": "markdown",
"id": "0ac44802",
"metadata": {},
"source": [
"# Other Notes\n",
">* More documentation can be found at [LangChain-MongoDB](https://www.mongodb.com/docs/atlas/atlas-vector-search/ai-integrations/langchain/) site\n",
">* This feature is Generally Available and ready for production deployments.\n",
">* The langchain version 0.0.305 ([release notes](https://github.com/langchain-ai/langchain/releases/tag/v0.0.305)) introduces the support for $vectorSearch MQL stage, which is available with MongoDB Atlas 6.0.11 and 7.0.2. Users utilizing earlier versions of MongoDB Atlas need to pin their LangChain version to <=0.0.304\n",
"> "
]
}
],
"metadata": {

View File

@@ -8,7 +8,7 @@ sidebar_class_name: hidden
**LangChain** is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle:
- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language) and [components](/docs/concepts). Hit the ground running using [third-party integrations](/docs/integrations/platforms/) and [Templates](/docs/templates).
- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language-lcel) and [components](/docs/concepts). Hit the ground running using [third-party integrations](/docs/integrations/platforms/) and [Templates](/docs/templates).
- **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
- **Deployment**: Turn any chain into an API with [LangServe](/docs/langserve).

View File

@@ -383,7 +383,7 @@
"source": [
"Now, we can initalize the agent with the LLM and the tools.\n",
"\n",
"Note that we are passing in the `model`, not `model_with_tools`. That is because `create_tool_calling_executor` will call `.bind_tools` for us under the hood."
"Note that we are passing in the `model`, not `model_with_tools`. That is because `create_react_agent` will call `.bind_tools` for us under the hood."
]
},
{

View File

@@ -737,7 +737,7 @@
"id": "07dcb968-ed9a-458a-85e1-528cd28c6965",
"metadata": {},
"source": [
"Tools are LangChain [Runnables](/docs/concepts#langchain-expression-language), and implement the usual interface:"
"Tools are LangChain [Runnables](/docs/concepts#langchain-expression-language-lcel), and implement the usual interface:"
]
},
{

View File

@@ -667,7 +667,7 @@
"id": "4516200c",
"metadata": {},
"source": [
"Well use the [LCEL Runnable](/docs/concepts#langchain-expression-language)\n",
"Well use the [LCEL Runnable](/docs/concepts#langchain-expression-language-lcel)\n",
"protocol to define the chain, allowing us to \n",
"\n",
"- pipe together components and functions in a transparent way \n",
@@ -718,7 +718,7 @@
"source": [
"Let's dissect the LCEL to understand what's going on.\n",
"\n",
"First: each of these components (`retriever`, `prompt`, `llm`, etc.) are instances of [Runnable](/docs/concepts#langchain-expression-language). This means that they implement the same methods-- such as sync and async `.invoke`, `.stream`, or `.batch`-- which makes them easier to connect together. They can be connected into a [RunnableSequence](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableSequence.html)-- another Runnable-- via the `|` operator.\n",
"First: each of these components (`retriever`, `prompt`, `llm`, etc.) are instances of [Runnable](/docs/concepts#langchain-expression-language-lcel). This means that they implement the same methods-- such as sync and async `.invoke`, `.stream`, or `.batch`-- which makes them easier to connect together. They can be connected into a [RunnableSequence](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableSequence.html)-- another Runnable-- via the `|` operator.\n",
"\n",
"LangChain will automatically cast certain objects to runnables when met with the `|` operator. Here, `format_docs` is cast to a [RunnableLambda](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html), and the dict with `\"context\"` and `\"question\"` is cast to a [RunnableParallel](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableParallel.html). The details are less important than the bigger point, which is that each object is a Runnable.\n",
"\n",

View File

@@ -120,7 +120,7 @@
"\n",
"## Chains {#chains}\n",
"\n",
"Chains (i.e., compositions of LangChain [Runnables](/docs/concepts#langchain-expression-language)) support applications whose steps are predictable. We can create a simple chain that takes a question and does the following:\n",
"Chains (i.e., compositions of LangChain [Runnables](/docs/concepts#langchain-expression-language-lcel)) support applications whose steps are predictable. We can create a simple chain that takes a question and does the following:\n",
"- convert the question into a SQL query;\n",
"- execute the query;\n",
"- use the result to answer the original question.\n",

View File

@@ -0,0 +1,43 @@
import re
import sys
from pathlib import Path
from typing import Union
CURR_DIR = Path(__file__).parent.absolute()
CHAT_MODEL_HEADERS = (
"## Overview",
"### Integration details",
"### Model features",
"## Setup",
"## Instantiation",
"## Invocation",
"## Chaining",
"## API reference",
)
CHAT_MODEL_REGEX = r".*".join(CHAT_MODEL_HEADERS)
def check_chat_model(path: Path) -> None:
with open(path, "r") as f:
doc = f.read()
if not re.search(CHAT_MODEL_REGEX, doc, re.DOTALL):
raise ValueError(
f"Document {path} does not match the ChatModel Integration page template. "
f"Please see https://github.com/langchain-ai/langchain/issues/22296 for "
f"instructions on how to correctly format a ChatModel Integration page."
)
def main(*new_doc_paths: Union[str, Path]) -> None:
for path in new_doc_paths:
path = Path(path).resolve().absolute()
if CURR_DIR.parent / "docs" / "integrations" / "chat" in path.parents:
print(f"Checking chat model page {path}")
check_chat_model(path)
else:
continue
if __name__ == "__main__":
main(*sys.argv[1:])

View File

@@ -18,6 +18,7 @@ CHAT_MODEL_FEAT_TABLE = {
"ChatAnthropic": {
"tool_calling": True,
"structured_output": True,
"multimodal": True,
"package": "langchain-anthropic",
"link": "/docs/integrations/chat/anthropic/",
},
@@ -39,6 +40,7 @@ CHAT_MODEL_FEAT_TABLE = {
"tool_calling": True,
"structured_output": True,
"json_mode": True,
"multimodal": True,
"package": "langchain-openai",
"link": "/docs/integrations/chat/azure_chat_openai/",
},
@@ -46,6 +48,7 @@ CHAT_MODEL_FEAT_TABLE = {
"tool_calling": True,
"structured_output": True,
"json_mode": True,
"multimodal": True,
"package": "langchain-openai",
"link": "/docs/integrations/chat/openai/",
},
@@ -59,11 +62,13 @@ CHAT_MODEL_FEAT_TABLE = {
"ChatVertexAI": {
"tool_calling": True,
"structured_output": True,
"multimodal": True,
"package": "langchain-google-vertexai",
"link": "/docs/integrations/chat/google_vertex_ai_palm/",
},
"ChatGoogleGenerativeAI": {
"tool_calling": True,
"multimodal": True,
"package": "langchain-google-genai",
"link": "/docs/integrations/chat/google_generative_ai/",
},
@@ -107,15 +112,9 @@ CHAT_MODEL_FEAT_TABLE = {
"package": "langchain-community",
"link": "/docs/integrations/chat/edenai/",
},
"ChatLlamaCpp": {
"tool_calling": True,
"structured_output": True,
"local": True,
"package": "langchain-community",
"link": "/docs/integrations/chat/llamacpp",
},
}
LLM_TEMPLATE = """\
---
sidebar_position: 1
@@ -142,8 +141,9 @@ CHAT_MODEL_TEMPLATE = """\
---
sidebar_position: 0
sidebar_class_name: hidden
keywords: [compatibility, bind_tools, tool calling, function calling, structured output, with_structured_output, json mode, local model]
keywords: [compatibility]
custom_edit_url:
hide_table_of_contents: true
---
# Chat models
@@ -219,6 +219,7 @@ def get_chat_model_table() -> str:
"structured_output",
"json_mode",
"local",
"multimodal",
"package",
]
title = [
@@ -227,6 +228,7 @@ def get_chat_model_table() -> str:
"[Structured output](/docs/how_to/structured_output/)",
"JSON mode",
"Local",
"[Multimodal](/docs/how_to/multimodal_inputs/)",
"Package",
]
rows = [title, [":-"] + [":-:"] * (len(title) - 1)]

BIN
docs/static/img/tokenization.png vendored Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 72 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 486 KiB

After

Width:  |  Height:  |  Size: 171 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 486 KiB

After

Width:  |  Height:  |  Size: 184 KiB

View File

@@ -10,7 +10,7 @@ integration_test integration_tests: TEST_FILE = tests/integration_tests/
# unit tests are run with the --disable-socket flag to prevent network calls
test tests:
poetry run pytest --disable-socket --allow-unit-socket $(TEST_FILE)
poetry run pytest --disable-socket --allow-unix-socket $(TEST_FILE)
# integration tests are run without the --disable-socket flag to allow network calls
integration_test integration_tests:

View File

@@ -0,0 +1,210 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"---\n",
"sidebar_label: __ModuleName__\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# __ModuleName__Loader\n",
"\n",
"- TODO: Make sure API reference link is correct.\n",
"\n",
"This notebook provides a quick overview for getting started with __ModuleName__ [document loader](/docs/integrations/document_loaders/). For detailed documentation of all __ModuleName__Loader features and configurations head to the [API reference](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.__module_name___loader.__ModuleName__Loader.html).\n",
"\n",
"- TODO: Add any other relevant links, like information about underlying API, etc.\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"- TODO: Fill in table features.\n",
"- TODO: Remove JS support link if not relevant, otherwise ensure link is correct.\n",
"- TODO: Make sure API reference links are correct.\n",
"\n",
"| Class | Package | Local | Serializable | [JS support](https://js.langchain.com/v0.2/docs/integrations/document_loaders/web_loaders/__module_name___loader)|\n",
"| :--- | :--- | :---: | :---: | :---: |\n",
"| [__ModuleName__Loader](https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.__module_name__loader.__ModuleName__Loader.html) | [langchain_community](https://api.python.langchain.com/en/latest/community_api_reference.html) | ✅/❌ | beta/❌ | ✅/❌ | \n",
"### Loader features\n",
"| Source | Document Lazy Loading | Async Support\n",
"| :---: | :---: | :---: | \n",
"| __ModuleName__Loader | ✅/❌ | ✅/❌ | \n",
"\n",
"## Setup\n",
"\n",
"- TODO: Update with relevant info.\n",
"\n",
"To access __ModuleName__ document loader you'll need to install the `__package_name__` integration package, and create a **ModuleName** account and get an API key.\n",
"\n",
"### Credentials\n",
"\n",
"- TODO: Update with relevant info.\n",
"\n",
"Head to (TODO: link) to sign up to __ModuleName__ and generate an API key. Once you've done this set the __MODULE_NAME___API_KEY environment variable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"__MODULE_NAME___API_KEY\"] = getpass.getpass(\"Enter your __ModuleName__ API key: \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
"# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Installation\n",
"\n",
"Install **langchain_community**.\n",
"\n",
"- TODO: Add any other required packages"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain_community"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and load documents:\n",
"\n",
"- TODO: Update model instantiation with relevant params."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.document_loaders import __ModuleName__Loader\n",
"\n",
"loader = __ModuleName__Loader(\n",
" # required params = ...\n",
" # optional params = ...\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load\n",
"\n",
"- TODO: Run cells to show loading capabilities"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"docs = loader.load()\n",
"docs[0]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(docs[0].metadata)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Lazy Load\n",
"\n",
"- TODO: Run cells to show lazy loading capabilities. Delete if lazy loading is not implemented."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"page = []\n",
"for doc in loader.lazy_load():\n",
" page.append(doc)\n",
" if len(page) >= 10:\n",
" # do some paged operation, e.g.\n",
" # index.upsert(page)\n",
"\n",
" page = []"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## TODO: Any functionality specific to this document loader\n",
"\n",
"E.g. using specific configs for different loading behavior. Delete if not relevant."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all __ModuleName__Loader features and configurations head to the API reference: https://api.python.langchain.com/en/latest/document_loaders/langchain_community.document_loaders.__module_name___loader.__ModuleName__Loader.html"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -0,0 +1,71 @@
"""__ModuleName__ document loader."""
from typing import Iterator
from langchain_core.document_loaders.base import BaseLoader
from langchain_core.documents import Document
class __ModuleName__Loader(BaseLoader):
# TODO: Replace all TODOs in docstring. See example docstring:
# https://github.com/langchain-ai/langchain/blob/869523ad728e6b76d77f170cce13925b4ebc3c1e/libs/community/langchain_community/document_loaders/recursive_url_loader.py#L54
"""
__ModuleName__ document loader integration
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from langchain_community.document_loaders import __ModuleName__Loader
loader = __ModuleName__Loader(
# required params = ...
# other params = ...
)
Lazy load:
.. code-block:: python
docs = []
docs_lazy = loader.lazy_load()
# async variant:
# docs_lazy = await loader.alazy_load()
for doc in docs_lazy:
docs.append(doc)
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
TODO: Example output
# TODO: Delete if async load is not implemented
Async load:
.. code-block:: python
docs = await loader.aload()
print(docs[0].page_content[:100])
print(docs[0].metadata)
.. code-block:: python
TODO: Example output
"""
# TODO: This method must be implemented to load documents.
# Do not implement load(), a default implementation is already available.
def lazy_load(self) -> Iterator[Document]:
raise NotImplementedError()
# TODO: Implement if you would like to change default BaseLoader implementation
# async def alazy_load(self) -> AsyncIterator[Document]:

View File

@@ -153,7 +153,7 @@ def create_doc(
component_type: Annotated[
str,
typer.Option(
help=("The type of component. Currently only 'ChatModel' supported."),
help=("The type of component. Currently only 'ChatModel', 'DocumentLoader' supported."),
),
] = "ChatModel",
destination_dir: Annotated[
@@ -196,7 +196,10 @@ def create_doc(
)
# copy over template from ../integration_template
docs_template = Path(__file__).parents[1] / "integration_template/docs/chat.ipynb"
if component_type == "ChatModel":
docs_template = Path(__file__).parents[1] / "integration_template/docs/chat.ipynb"
elif component_type == "DocumentLoader":
docs_template = Path(__file__).parents[1] / "integration_template/docs/document_loaders.ipynb"
shutil.copy(docs_template, destination_path)
# replacements in file

View File

@@ -1,6 +1,6 @@
[tool.poetry]
name = "langchain-cli"
version = "0.0.24"
version = "0.0.25"
description = "CLI for interacting with LangChain"
authors = ["Erick Friis <erick@langchain.dev>"]
readme = "README.md"

View File

@@ -44,6 +44,7 @@ lint_tests: MYPY_CACHE=.mypy_cache_test
lint lint_diff lint_package lint_tests:
./scripts/check_pydantic.sh .
./scripts/lint_imports.sh
./scripts/check_pickle.sh .
poetry run ruff .
[ "$(PYTHON_FILES)" = "" ] || poetry run ruff format $(PYTHON_FILES) --diff
[ "$(PYTHON_FILES)" = "" ] || poetry run ruff --select I $(PYTHON_FILES)

View File

@@ -80,7 +80,8 @@ timescale-vector==0.0.1
tqdm>=4.48.0
tree-sitter>=0.20.2,<0.21
tree-sitter-languages>=1.8.0,<2
upstash-redis>=0.15.0,<0.16
upstash-redis>=1.1.0,<2
upstash-ratelimit>=1.1.0,<2
vdms==0.0.20
xata>=1.0.0a7,<2
xmltodict>=0.13.0,<0.14

View File

@@ -19,7 +19,7 @@ class ConneryToolkit(BaseToolkit):
"""
return self.tools
@root_validator()
@root_validator(pre=True)
def validate_attributes(cls, values: dict) -> dict:
"""
Validate the attributes of the ConneryToolkit class.
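
The switch to pre=True matters because a pre root validator runs before individual field validation, so it can populate or normalize values up front. A minimal sketch with assumed field names (not the actual ConneryToolkit code):

# Minimal sketch of a pre-validation root validator (assumed field names).
from typing import List

from langchain_core.pydantic_v1 import BaseModel, root_validator


class ExampleToolkit(BaseModel):
    tools: List[str] = []

    @root_validator(pre=True)
    def validate_attributes(cls, values: dict) -> dict:
        # Runs before field validation, so defaults can be filled in here.
        values.setdefault("tools", ["connery_action"])
        return values


print(ExampleToolkit().tools)  # ['connery_action']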

View File

@@ -72,6 +72,10 @@ if TYPE_CHECKING:
from langchain_community.callbacks.trubrics_callback import (
TrubricsCallbackHandler,
)
from langchain_community.callbacks.upstash_ratelimit_callback import (
UpstashRatelimitError,
UpstashRatelimitHandler, # noqa: F401
)
from langchain_community.callbacks.uptrain_callback import (
UpTrainCallbackHandler,
)
@@ -104,6 +108,8 @@ _module_lookup = {
"SageMakerCallbackHandler": "langchain_community.callbacks.sagemaker_callback",
"StreamlitCallbackHandler": "langchain_community.callbacks.streamlit",
"TrubricsCallbackHandler": "langchain_community.callbacks.trubrics_callback",
"UpstashRatelimitError": "langchain_community.callbacks.upstash_ratelimit_callback",
"UpstashRatelimitHandler": "langchain_community.callbacks.upstash_ratelimit_callback", # noqa
"UpTrainCallbackHandler": "langchain_community.callbacks.uptrain_callback",
"WandbCallbackHandler": "langchain_community.callbacks.wandb_callback",
"WhyLabsCallbackHandler": "langchain_community.callbacks.whylabs_callback",
@@ -140,6 +146,8 @@ __all__ = [
"SageMakerCallbackHandler",
"StreamlitCallbackHandler",
"TrubricsCallbackHandler",
"UpstashRatelimitError",
"UpstashRatelimitHandler",
"UpTrainCallbackHandler",
"WandbCallbackHandler",
"WhyLabsCallbackHandler",

View File

@@ -5,6 +5,7 @@ import json
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
List,
Optional,
@@ -14,29 +15,45 @@ from typing import (
Union,
)
from langchain_core.output_parsers.pydantic import PydanticBaseModel
from langchain_core.tracers.base import BaseTracer
from langchain_core.tracers.schemas import Run
if TYPE_CHECKING:
from wandb import Settings as WBSettings
from wandb.sdk.data_types.trace_tree import Span
from wandb.sdk.data_types.trace_tree import Trace
from wandb.sdk.lib.paths import StrPath
from wandb.wandb_run import Run as WBRun
PRINT_WARNINGS = True
def _serialize_io(run_inputs: Optional[dict]) -> dict:
if not run_inputs:
def _serialize_io(run_io: Optional[dict]) -> dict:
"""Utility to serialize the input and output of a run to store in wandb.
Currently supports serializing pydantic models and protobuf messages.
:param run_io: The inputs and outputs of the run.
:return: The serialized inputs and outputs.
"""
if not run_io:
return {}
from google.protobuf.json_format import MessageToJson
from google.protobuf.message import Message
serialized_inputs = {}
for key, value in run_inputs.items():
for key, value in run_io.items():
if isinstance(value, Message):
serialized_inputs[key] = MessageToJson(value)
elif isinstance(value, PydanticBaseModel):
serialized_inputs[key] = (
value.model_dump_json()
if hasattr(value, "model_dump_json")
else value.json()
)
elif key == "input_documents":
serialized_inputs.update(
{f"input_document_{i}": doc.json() for i, doc in enumerate(value)}
@@ -46,344 +63,186 @@ def _serialize_io(run_inputs: Optional[dict]) -> dict:
return serialized_inputs
class RunProcessor:
"""Handles the conversion of a LangChain Runs into a WBTraceTree."""
def flatten_run(run: Dict[str, Any]) -> List[Dict[str, Any]]:
"""Utility to flatten a nest run object into a list of runs.
:param run: The base run to flatten.
:return: The flattened list of runs.
"""
def __init__(self, wandb_module: Any, trace_module: Any):
self.wandb = wandb_module
self.trace_tree = trace_module
def process_span(self, run: Run) -> Optional["Span"]:
"""Converts a LangChain Run into a W&B Trace Span.
:param run: The LangChain Run to convert.
:return: The converted W&B Trace Span.
"""
try:
span = self._convert_lc_run_to_wb_span(run)
return span
except Exception as e:
if PRINT_WARNINGS:
self.wandb.termwarn(
f"Skipping trace saving - unable to safely convert LangChain Run "
f"into W&B Trace due to: {e}"
)
return None
def _convert_run_to_wb_span(self, run: Run) -> "Span":
"""Base utility to create a span from a run.
:param run: The run to convert.
:return: The converted Span.
"""
attributes = {**run.extra} if run.extra else {}
attributes["execution_order"] = run.execution_order # type: ignore
return self.trace_tree.Span(
span_id=str(run.id) if run.id is not None else None,
name=run.name,
start_time_ms=int(run.start_time.timestamp() * 1000),
end_time_ms=int(run.end_time.timestamp() * 1000)
if run.end_time is not None
else None,
status_code=self.trace_tree.StatusCode.SUCCESS
if run.error is None
else self.trace_tree.StatusCode.ERROR,
status_message=run.error,
attributes=attributes,
)
def _convert_llm_run_to_wb_span(self, run: Run) -> "Span":
"""Converts a LangChain LLM Run into a W&B Trace Span.
:param run: The LangChain LLM Run to convert.
:return: The converted W&B Trace Span.
"""
base_span = self._convert_run_to_wb_span(run)
if base_span.attributes is None:
base_span.attributes = {}
base_span.attributes["llm_output"] = (run.outputs or {}).get("llm_output", {})
base_span.results = [
self.trace_tree.Result(
inputs={"prompt": prompt},
outputs={
f"gen_{g_i}": gen["text"]
for g_i, gen in enumerate(run.outputs["generations"][ndx])
}
if (
run.outputs is not None
and len(run.outputs["generations"]) > ndx
and len(run.outputs["generations"][ndx]) > 0
)
else None,
)
for ndx, prompt in enumerate(run.inputs["prompts"] or [])
]
base_span.span_kind = self.trace_tree.SpanKind.LLM
return base_span
def _convert_chain_run_to_wb_span(self, run: Run) -> "Span":
"""Converts a LangChain Chain Run into a W&B Trace Span.
:param run: The LangChain Chain Run to convert.
:return: The converted W&B Trace Span.
"""
base_span = self._convert_run_to_wb_span(run)
base_span.results = [
self.trace_tree.Result(
inputs=_serialize_io(run.inputs), outputs=_serialize_io(run.outputs)
)
]
base_span.child_spans = [
self._convert_lc_run_to_wb_span(child_run) for child_run in run.child_runs
]
base_span.span_kind = (
self.trace_tree.SpanKind.AGENT
if "agent" in run.name.lower()
else self.trace_tree.SpanKind.CHAIN
)
return base_span
def _convert_tool_run_to_wb_span(self, run: Run) -> "Span":
"""Converts a LangChain Tool Run into a W&B Trace Span.
:param run: The LangChain Tool Run to convert.
:return: The converted W&B Trace Span.
"""
base_span = self._convert_run_to_wb_span(run)
base_span.results = [
self.trace_tree.Result(
inputs=_serialize_io(run.inputs), outputs=_serialize_io(run.outputs)
)
]
base_span.child_spans = [
self._convert_lc_run_to_wb_span(child_run) for child_run in run.child_runs
]
base_span.span_kind = self.trace_tree.SpanKind.TOOL
return base_span
def _convert_lc_run_to_wb_span(self, run: Run) -> "Span":
"""Utility to convert any generic LangChain Run into a W&B Trace Span.
:param run: The LangChain Run to convert.
:return: The converted W&B Trace Span.
"""
if run.run_type == "llm":
return self._convert_llm_run_to_wb_span(run)
elif run.run_type == "chain":
return self._convert_chain_run_to_wb_span(run)
elif run.run_type == "tool":
return self._convert_tool_run_to_wb_span(run)
else:
return self._convert_run_to_wb_span(run)
def process_model(self, run: Run) -> Optional[Dict[str, Any]]:
"""Utility to process a run for wandb model_dict serialization.
:param run: The run to process.
:return: The converted model_dict to pass to WBTraceTree.
"""
try:
data = json.loads(run.json())
processed = self.flatten_run(data)
keep_keys = (
"id",
"name",
"serialized",
"inputs",
"outputs",
"parent_run_id",
"execution_order",
)
processed = self.truncate_run_iterative(processed, keep_keys=keep_keys)
exact_keys, partial_keys = ("lc", "type"), ("api_key",)
processed = self.modify_serialized_iterative(
processed, exact_keys=exact_keys, partial_keys=partial_keys
)
output = self.build_tree(processed)
return output
except Exception as e:
if PRINT_WARNINGS:
self.wandb.termwarn(f"WARNING: Failed to serialize model: {e}")
return None
def flatten_run(self, run: Dict[str, Any]) -> List[Dict[str, Any]]:
"""Utility to flatten a nest run object into a list of runs.
:param run: The base run to flatten.
def flatten(child_runs: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Utility to recursively flatten a list of child runs in a run.
:param child_runs: The list of child runs to flatten.
:return: The flattened list of runs.
"""
if child_runs is None:
return []
def flatten(child_runs: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
"""Utility to recursively flatten a list of child runs in a run.
:param child_runs: The list of child runs to flatten.
:return: The flattened list of runs.
"""
if child_runs is None:
return []
result = []
for item in child_runs:
child_runs = item.pop("child_runs", [])
result.append(item)
result.extend(flatten(child_runs))
result = []
for item in child_runs:
child_runs = item.pop("child_runs", [])
result.append(item)
result.extend(flatten(child_runs))
return result
return result
return flatten([run])
return flatten([run])
def truncate_run_iterative(
self, runs: List[Dict[str, Any]], keep_keys: Tuple[str, ...] = ()
) -> List[Dict[str, Any]]:
"""Utility to truncate a list of runs dictionaries to only keep the specified
keys in each run.
:param runs: The list of runs to truncate.
:param keep_keys: The keys to keep in each run.
:return: The truncated list of runs.
def truncate_run_iterative(
runs: List[Dict[str, Any]], keep_keys: Tuple[str, ...] = ()
) -> List[Dict[str, Any]]:
"""Utility to truncate a list of runs dictionaries to only keep the specified
keys in each run.
:param runs: The list of runs to truncate.
:param keep_keys: The keys to keep in each run.
:return: The truncated list of runs.
"""
def truncate_single(run: Dict[str, Any]) -> Dict[str, Any]:
"""Utility to truncate a single run dictionary to only keep the specified
keys.
:param run: The run dictionary to truncate.
:return: The truncated run dictionary
"""
new_dict = {}
for key in run:
if key in keep_keys:
new_dict[key] = run.get(key)
return new_dict
def truncate_single(run: Dict[str, Any]) -> Dict[str, Any]:
"""Utility to truncate a single run dictionary to only keep the specified
keys.
:param run: The run dictionary to truncate.
:return: The truncated run dictionary
"""
new_dict = {}
for key in run:
if key in keep_keys:
new_dict[key] = run.get(key)
return new_dict
return list(map(truncate_single, runs))
return list(map(truncate_single, runs))
def modify_serialized_iterative(
self,
runs: List[Dict[str, Any]],
exact_keys: Tuple[str, ...] = (),
partial_keys: Tuple[str, ...] = (),
) -> List[Dict[str, Any]]:
"""Utility to modify the serialized field of a list of runs dictionaries.
removes any keys that match the exact_keys and any keys that contain any of the
partial_keys.
recursively moves the dictionaries under the kwargs key to the top level.
changes the "id" field to a string "_kind" field that tells WBTraceTree how to
visualize the run. promotes the "serialized" field to the top level.
def modify_serialized_iterative(
runs: List[Dict[str, Any]],
exact_keys: Tuple[str, ...] = (),
partial_keys: Tuple[str, ...] = (),
) -> List[Dict[str, Any]]:
"""Utility to modify the serialized field of a list of runs dictionaries.
removes any keys that match the exact_keys and any keys that contain any of the
partial_keys.
recursively moves the dictionaries under the kwargs key to the top level.
changes the "id" field to a string "_kind" field that tells WBTraceTree how to
visualize the run. promotes the "serialized" field to the top level.
:param runs: The list of runs to modify.
:param exact_keys: A tuple of keys to remove from the serialized field.
:param partial_keys: A tuple of partial keys to remove from the serialized
field.
:return: The modified list of runs.
"""
:param runs: The list of runs to modify.
:param exact_keys: A tuple of keys to remove from the serialized field.
:param partial_keys: A tuple of partial keys to remove from the serialized
field.
:return: The modified list of runs.
def remove_exact_and_partial_keys(obj: Dict[str, Any]) -> Dict[str, Any]:
"""Recursively removes exact and partial keys from a dictionary.
:param obj: The dictionary to remove keys from.
:return: The modified dictionary.
"""
if isinstance(obj, dict):
obj = {
k: v
for k, v in obj.items()
if k not in exact_keys
and not any(partial in k for partial in partial_keys)
}
for k, v in obj.items():
obj[k] = remove_exact_and_partial_keys(v)
elif isinstance(obj, list):
obj = [remove_exact_and_partial_keys(x) for x in obj]
return obj
def remove_exact_and_partial_keys(obj: Dict[str, Any]) -> Dict[str, Any]:
"""Recursively removes exact and partial keys from a dictionary.
:param obj: The dictionary to remove keys from.
:return: The modified dictionary.
"""
if isinstance(obj, dict):
obj = {
k: v
for k, v in obj.items()
if k not in exact_keys
and not any(partial in k for partial in partial_keys)
}
for k, v in obj.items():
obj[k] = remove_exact_and_partial_keys(v)
elif isinstance(obj, list):
obj = [remove_exact_and_partial_keys(x) for x in obj]
return obj
def handle_id_and_kwargs(
obj: Dict[str, Any], root: bool = False
) -> Dict[str, Any]:
"""Recursively handles the id and kwargs fields of a dictionary.
changes the id field to a string "_kind" field that tells WBTraceTree how
to visualize the run. recursively moves the dictionaries under the kwargs
key to the top level.
:param obj: a run dictionary with id and kwargs fields.
:param root: whether this is the root dictionary or the serialized
dictionary.
:return: The modified dictionary.
"""
if isinstance(obj, dict):
if ("id" in obj or "name" in obj) and not root:
_kind = obj.get("id")
if not _kind:
_kind = [obj.get("name")]
def handle_id_and_kwargs(obj: Dict[str, Any], root: bool = False) -> Dict[str, Any]:
"""Recursively handles the id and kwargs fields of a dictionary.
changes the id field to a string "_kind" field that tells WBTraceTree how
to visualize the run. recursively moves the dictionaries under the kwargs
key to the top level.
:param obj: a run dictionary with id and kwargs fields.
:param root: whether this is the root dictionary or the serialized
dictionary.
:return: The modified dictionary.
"""
if isinstance(obj, dict):
if "data" in obj and isinstance(obj["data"], dict):
obj = obj["data"]
if ("id" in obj or "name" in obj) and not root:
_kind = obj.get("id")
if not _kind:
_kind = [obj.get("name")]
if isinstance(_kind, list):
obj["_kind"] = _kind[-1]
obj.pop("id", None)
obj.pop("name", None)
if "kwargs" in obj:
kwargs = obj.pop("kwargs")
for k, v in kwargs.items():
obj[k] = v
for k, v in obj.items():
obj[k] = handle_id_and_kwargs(v)
elif isinstance(obj, list):
obj = [handle_id_and_kwargs(x) for x in obj]
return obj
if "kwargs" in obj:
kwargs = obj.pop("kwargs")
for k, v in kwargs.items():
obj[k] = v
for k, v in obj.items():
obj[k] = handle_id_and_kwargs(v)
elif isinstance(obj, list):
obj = [handle_id_and_kwargs(x) for x in obj]
return obj
def transform_serialized(serialized: Dict[str, Any]) -> Dict[str, Any]:
"""Transforms the serialized field of a run dictionary to be compatible
with WBTraceTree.
:param serialized: The serialized field of a run dictionary.
:return: The transformed serialized field.
"""
serialized = handle_id_and_kwargs(serialized, root=True)
serialized = remove_exact_and_partial_keys(serialized)
return serialized
def transform_run(run: Dict[str, Any]) -> Dict[str, Any]:
"""Transforms a run dictionary to be compatible with WBTraceTree.
:param run: The run dictionary to transform.
:return: The transformed run dictionary.
"""
transformed_dict = transform_serialized(run)
serialized = transformed_dict.pop("serialized")
for k, v in serialized.items():
transformed_dict[k] = v
_kind = transformed_dict.get("_kind", None)
name = transformed_dict.pop("name", None)
exec_ord = transformed_dict.pop("execution_order", None)
if not name:
name = _kind
output_dict = {
f"{exec_ord}_{name}": transformed_dict,
}
return output_dict
return list(map(transform_run, runs))
def build_tree(self, runs: List[Dict[str, Any]]) -> Dict[str, Any]:
"""Builds a nested dictionary from a list of runs.
:param runs: The list of runs to build the tree from.
:return: The nested dictionary representing the langchain Run in a tree
structure compatible with WBTraceTree.
def transform_serialized(serialized: Dict[str, Any]) -> Dict[str, Any]:
"""Transforms the serialized field of a run dictionary to be compatible
with WBTraceTree.
:param serialized: The serialized field of a run dictionary.
:return: The transformed serialized field.
"""
id_to_data = {}
child_to_parent = {}
serialized = handle_id_and_kwargs(serialized, root=True)
serialized = remove_exact_and_partial_keys(serialized)
return serialized
for entity in runs:
for key, data in entity.items():
id_val = data.pop("id", None)
parent_run_id = data.pop("parent_run_id", None)
id_to_data[id_val] = {key: data}
if parent_run_id:
child_to_parent[id_val] = parent_run_id
def transform_run(run: Dict[str, Any]) -> Dict[str, Any]:
"""Transforms a run dictionary to be compatible with WBTraceTree.
:param run: The run dictionary to transform.
:return: The transformed run dictionary.
"""
transformed_dict = transform_serialized(run)
for child_id, parent_id in child_to_parent.items():
parent_dict = id_to_data[parent_id]
parent_dict[next(iter(parent_dict))][
next(iter(id_to_data[child_id]))
] = id_to_data[child_id][next(iter(id_to_data[child_id]))]
serialized = transformed_dict.pop("serialized")
for k, v in serialized.items():
transformed_dict[k] = v
root_dict = next(
data for id_val, data in id_to_data.items() if id_val not in child_to_parent
)
_kind = transformed_dict.get("_kind", None)
name = transformed_dict.pop("name", None)
return root_dict
if not name:
name = _kind
output_dict = {
f"{name}": transformed_dict,
}
return output_dict
return list(map(transform_run, runs))
def build_tree(runs: List[Dict[str, Any]]) -> Dict[str, Any]:
"""Builds a nested dictionary from a list of runs.
:param runs: The list of runs to build the tree from.
:return: The nested dictionary representing the langchain Run in a tree
structure compatible with WBTraceTree.
"""
id_to_data = {}
child_to_parent = {}
for entity in runs:
for key, data in entity.items():
id_val = data.pop("id", None)
parent_run_id = data.pop("parent_run_id", None)
id_to_data[id_val] = {key: data}
if parent_run_id:
child_to_parent[id_val] = parent_run_id
for child_id, parent_id in child_to_parent.items():
parent_dict = id_to_data[parent_id]
parent_dict[next(iter(parent_dict))][
next(iter(id_to_data[child_id]))
] = id_to_data[child_id][next(iter(id_to_data[child_id]))]
root_dict = next(
data for id_val, data in id_to_data.items() if id_val not in child_to_parent
)
return root_dict
class WandbRunArgs(TypedDict):
@@ -425,13 +284,20 @@ class WandbTracer(BaseTracer):
_run: Optional[WBRun] = None
_run_args: Optional[WandbRunArgs] = None
def __init__(self, run_args: Optional[WandbRunArgs] = None, **kwargs: Any) -> None:
def __init__(
self,
run_args: Optional[WandbRunArgs] = None,
io_serializer: Callable = _serialize_io,
**kwargs: Any,
) -> None:
"""Initializes the WandbTracer.
Parameters:
run_args: (dict, optional) Arguments to pass to `wandb.init()`. If not
provided, `wandb.init()` will be called with no arguments. Please
refer to the `wandb.init` for more details.
io_serializer: (callable, optional) A function that serializes the inputs
and outputs of a run to store in wandb. Defaults to `_serialize_io`.
To use W&B to monitor all LangChain activity, add this tracer like any other
LangChain callback:
@@ -457,7 +323,7 @@ class WandbTracer(BaseTracer):
self._trace_tree = trace_tree
self._run_args = run_args
self._ensure_run(should_print_url=(wandb.run is None))
self.run_processor = RunProcessor(self._wandb, self._trace_tree)
self._io_serializer = io_serializer
def finish(self) -> None:
"""Waits for all asynchronous processes to finish and data to upload.
@@ -466,23 +332,6 @@ class WandbTracer(BaseTracer):
"""
self._wandb.finish()
def _log_trace_from_run(self, run: Run) -> None:
"""Logs a LangChain Run to W*B as a W&B Trace."""
self._ensure_run()
root_span = self.run_processor.process_span(run)
model_dict = self.run_processor.process_model(run)
if root_span is None:
return
model_trace = self._trace_tree.WBTraceTree(
root_span=root_span,
model_dict=model_dict,
)
if self._wandb.run is not None:
self._wandb.run.log({"langchain_trace": model_trace})
def _ensure_run(self, should_print_url: bool = False) -> None:
"""Ensures an active W&B run exists.
@@ -508,6 +357,133 @@ class WandbTracer(BaseTracer):
self._wandb.run._label(repo="langchain")
def process_model_dict(self, run: Run) -> Optional[Dict[str, Any]]:
"""Utility to process a run for wandb model_dict serialization.
:param run: The run to process.
:return: The converted model_dict to pass to WBTraceTree.
"""
try:
data = json.loads(run.json())
processed = flatten_run(data)
keep_keys = (
"id",
"name",
"serialized",
"parent_run_id",
)
processed = truncate_run_iterative(processed, keep_keys=keep_keys)
exact_keys, partial_keys = (
("lc", "type", "graph"),
(
"api_key",
"input",
"output",
),
)
processed = modify_serialized_iterative(
processed, exact_keys=exact_keys, partial_keys=partial_keys
)
output = build_tree(processed)
return output
except Exception as e:
if PRINT_WARNINGS:
self._wandb.termerror(f"WARNING: Failed to serialize model: {e}")
return None
def _log_trace_from_run(self, run: Run) -> None:
"""Logs a LangChain Run to W*B as a W&B Trace."""
self._ensure_run()
def create_trace(
run: "Run", parent: Optional["Trace"] = None
) -> Optional["Trace"]:
"""
Create a trace for a given run and its child runs.
Args:
run (Run): The run for which to create a trace.
parent (Optional[Trace]): The parent trace.
If provided, the created trace is added as a child to the parent trace.
Returns:
Optional[Trace]: The created trace.
If an error occurs during the creation of the trace, None is returned.
Raises:
Exception: If an error occurs during the creation of the trace,
no exception is raised and a warning is printed.
"""
def get_metadata_dict(r: "Run") -> Dict[str, Any]:
"""
Extract metadata from a given run.
This function extracts metadata from a given run
and returns it as a dictionary.
Args:
r (Run): The run from which to extract metadata.
Returns:
Dict[str, Any]: A dictionary containing the extracted metadata.
"""
run_dict = json.loads(r.json())
metadata_dict = run_dict.get("metadata", {})
metadata_dict["run_id"] = run_dict.get("id")
metadata_dict["parent_run_id"] = run_dict.get("parent_run_id")
metadata_dict["tags"] = run_dict.get("tags")
metadata_dict["execution_order"] = run_dict.get(
"dotted_order", ""
).count(".")
return metadata_dict
try:
if run.run_type in ["llm", "tool"]:
run_type = run.run_type
elif run.run_type == "chain":
run_type = "agent" if "agent" in run.name.lower() else "chain"
else:
run_type = None
metadata = get_metadata_dict(run)
trace_tree = self._trace_tree.Trace(
name=run.name,
kind=run_type,
status_code="error" if run.error else "success",
start_time_ms=int(run.start_time.timestamp() * 1000)
if run.start_time is not None
else None,
end_time_ms=int(run.end_time.timestamp() * 1000)
if run.end_time is not None
else None,
metadata=metadata,
inputs=self._io_serializer(run.inputs),
outputs=self._io_serializer(run.outputs),
)
# If the run has child runs, recursively create traces for them
for child_run in run.child_runs:
create_trace(child_run, trace_tree)
if parent is None:
return trace_tree
else:
parent.add_child(trace_tree)
return parent
except Exception as e:
if PRINT_WARNINGS:
self._wandb.termwarn(
f"WARNING: Failed to serialize trace for run due to: {e}"
)
return None
run_trace = create_trace(run)
model_dict = self.process_model_dict(run)
if model_dict is not None and run_trace is not None:
run_trace._model_dict = model_dict
if self._wandb.run is not None and run_trace is not None:
run_trace.log("langchain_trace")
def _persist_run(self, run: "Run") -> None:
"""Persist a run."""
self._log_trace_from_run(run)
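
With this refactor, callers can swap in their own input/output serializer through the new io_serializer parameter. A hedged usage sketch follows; the import path is assumed to be the tracers module of langchain_community, and the redaction logic is purely illustrative.

# Hedged usage sketch of the io_serializer hook (import path assumed).
from typing import Optional

from langchain_community.callbacks.tracers.wandb import WandbTracer


def redact_io(run_io: Optional[dict]) -> dict:
    # Drop anything that looks like a credential before logging to W&B.
    return {k: v for k, v in (run_io or {}).items() if "api_key" not in k}


tracer = WandbTracer(run_args={"project": "langchain-traces"}, io_serializer=redact_io)
# chain.invoke("hello", config={"callbacks": [tracer]})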

View File

@@ -0,0 +1,206 @@
"""Ratelimiting Handler to limit requests or tokens"""
import logging
from typing import Any, Dict, List, Literal, Optional
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult
logger = logging.getLogger(__name__)
try:
from upstash_ratelimit import Ratelimit
except ImportError:
Ratelimit = None
class UpstashRatelimitError(Exception):
"""
Upstash Ratelimit Error
Raised when the rate limit is reached in `UpstashRatelimitHandler`
"""
def __init__(
self,
message: str,
type: Literal["token", "request"],
limit: Optional[int] = None,
reset: Optional[float] = None,
):
"""
Args:
message (str): error message
type (str): The kind of the limit which was reached. One of
"token" or "request"
limit (Optional[int]): The limit which was reached. Passed when type
is "request"
reset (Optional[float]): Unix timestamp in milliseconds when the limits
are reset. Passed when type is "request"
"""
# Call the base class constructor with the parameters it needs
super().__init__(message)
self.type = type
self.limit = limit
self.reset = reset
class UpstashRatelimitHandler(BaseCallbackHandler):
"""
Callback to handle rate limiting based on the number of requests
or the number of tokens in the input.
It uses Upstash Ratelimit, which stores its state in Upstash Redis,
to track the rate limit.
Do not pass the handler when initialising the chain: the handler
keeps state that must be fresh for every invocation. Instead,
initialise and pass a new handler every time you invoke.
"""
raise_error = True
_checked: bool = False
def __init__(
self,
identifier: str,
*,
token_ratelimit: Optional[Ratelimit] = None,
request_ratelimit: Optional[Ratelimit] = None,
include_output_tokens: bool = False,
):
"""
Creates UpstashRatelimitHandler. Must be passed an identifier to
ratelimit like a user id or an ip address.
Additionally, it must be passed at least one of token_ratelimit
or request_ratelimit parameters.
Args:
identifier (str): the identifier to rate limit, such as a user id
or an IP address.
token_ratelimit Optional[Ratelimit]: Ratelimit to limit the
number of tokens. Only works with OpenAI models since only
these models provide the number of tokens as information
in their output.
request_ratelimit Optional[Ratelimit]: Ratelimit to limit the
number of requests
include_output_tokens bool: Whether to count output tokens when
rate limiting based on number of tokens. Only used when
`token_ratelimit` is passed. False by default.
Example:
.. code-block:: python
from upstash_redis import Redis
from upstash_ratelimit import Ratelimit, FixedWindow
redis = Redis.from_env()
ratelimit = Ratelimit(
redis=redis,
# fixed window to allow 10 requests every 10 seconds:
limiter=FixedWindow(max_requests=10, window=10),
)
user_id = "foo"
handler = UpstashRatelimitHandler(
identifier=user_id,
request_ratelimit=ratelimit
)
# Initialize a simple runnable to test
chain = RunnableLambda(str)
# pass handler as callback:
output = chain.invoke(
"input",
config={
"callbacks": [handler]
}
)
"""
if not any([token_ratelimit, request_ratelimit]):
raise ValueError(
"You must pass at least one of input_token_ratelimit or"
" request_ratelimit parameters for handler to work."
)
self.identifier = identifier
self.token_ratelimit = token_ratelimit
self.request_ratelimit = request_ratelimit
self.include_output_tokens = include_output_tokens
def on_chain_start(
self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
) -> Any:
"""
Run when chain starts running.
on_chain_start runs multiple times during a chain execution. To make
sure the limit is only checked once, we keep a bool state `_checked`. If
`self._checked` is False, we call limit with `request_ratelimit` and raise
`UpstashRatelimitError` if the identifier is rate limited.
"""
if self.request_ratelimit and not self._checked:
response = self.request_ratelimit.limit(self.identifier)
if not response.allowed:
raise UpstashRatelimitError(
"Request limit reached!", "request", response.limit, response.reset
)
self._checked = True
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
"""
Run when LLM starts running
"""
if self.token_ratelimit:
remaining = self.token_ratelimit.get_remaining(self.identifier)
if remaining <= 0:
raise UpstashRatelimitError("Token limit reached!", "token")
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""
Run when LLM ends running
If `include_output_tokens` is set to True, the number of tokens in the
LLM completion is also counted for rate limiting.
"""
if self.token_ratelimit:
try:
llm_output = response.llm_output or {}
token_usage = llm_output["token_usage"]
token_count = (
token_usage["total_tokens"]
if self.include_output_tokens
else token_usage["prompt_tokens"]
)
except KeyError:
raise ValueError(
"LLM response doesn't include"
" `token_usage: {total_tokens: int, prompt_tokens: int}`"
" field. To use UpstashRatelimitHandler with token_ratelimit,"
" either use a model which returns token_usage (like "
" OpenAI models) or rate limit only with request_ratelimit."
)
# call limit to add the completion tokens to rate limit
# but don't raise exception since we already generated
# the tokens and would rather continue execution.
self.token_ratelimit.limit(self.identifier, rate=token_count)
def reset(self, identifier: Optional[str] = None) -> "UpstashRatelimitHandler":
"""
Creates a new UpstashRatelimitHandler object with the same
ratelimit configurations but with a new identifier if it's
provided.
Also resets the state of the handler.
"""
return UpstashRatelimitHandler(
identifier=identifier or self.identifier,
token_ratelimit=self.token_ratelimit,
request_ratelimit=self.request_ratelimit,
include_output_tokens=self.include_output_tokens,
)
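
Building on the docstring example above, a token-based limit can be combined with a request limit, creating a fresh handler per invocation via reset(). A hedged sketch (the identifier and window sizes are placeholders; token limiting only works with models that report token_usage, such as OpenAI models):

# Hedged sketch combining request and token rate limits.
from upstash_ratelimit import FixedWindow, Ratelimit
from upstash_redis import Redis

from langchain_community.callbacks import UpstashRatelimitHandler

redis = Redis.from_env()
request_limit = Ratelimit(redis=redis, limiter=FixedWindow(max_requests=10, window=10))
token_limit = Ratelimit(redis=redis, limiter=FixedWindow(max_requests=1000, window=60))

handler = UpstashRatelimitHandler(
    identifier="user-42",
    request_ratelimit=request_limit,
    token_ratelimit=token_limit,
)

# The handler keeps per-invocation state, so create a fresh one each call:
# chain.invoke("input", config={"callbacks": [handler.reset("user-42")]})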

Some files were not shown because too many files have changed in this diff.