Compare commits


941 Commits

Author SHA1 Message Date
Bagatur
e89ffb7c92 fmt 2024-03-14 18:22:42 -07:00
Bagatur
c7f9d8b812 fix 2024-03-14 16:51:30 -07:00
Bagatur
b18685ff7b fix 2024-03-14 16:33:50 -07:00
Bagatur
163bb5ee7e fix 2024-03-14 16:26:33 -07:00
Bagatur
94256d01d3 fix 2024-03-14 16:22:04 -07:00
Bagatur
2523e27330 wip 2024-03-14 16:09:53 -07:00
Tomaz Bratanic
321db89e87 templates: Switch neo4j generation template to LLMGraphTransformer (#19024) 2024-03-14 16:00:42 -07:00
Erick Friis
d5cf360329 ibm[patch]: release 0.1.3 (#19094) 2024-03-14 15:59:42 -07:00
Mateusz Szewczyk
b15d150d22 ibm[patch]: add async tests, add tokenize support (#18898)
- **Description:** add async tests, add tokenize support
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-14 22:57:05 +00:00
billytrend-cohere
7253b816cc community: Add support for cohere SDK v5 (keeps v4 backwards compatibility) (#19084)
- **Description:** Add support for cohere SDK v5 (keeps v4 backwards
compatibility)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-14 15:53:24 -07:00
Eugene Yurtsev
06165efb5b core[patch]: RunnablePassthrough transform to autoupgrade to AddableDict (#19051)
Follow up on https://github.com/langchain-ai/langchain/pull/18743 which
missed RunnablePassthrough

Issues:

https://github.com/langchain-ai/langchain/issues/18741
https://github.com/langchain-ai/langgraph/issues/136
https://github.com/langchain-ai/langserve/issues/504
2024-03-14 16:59:46 -04:00
Eugene Yurtsev
41e2f60cd2 Updated security policy (#19089)
Updated security policy
2024-03-14 20:58:47 +00:00
Eugene Yurtsev
6cdca4355d community[minor]: Revamp PGVector Filtering (#18992)
This PR makes the following updates in the pgvector database:

1. Use a JSONB field for metadata instead of JSON
2. Update operator syntax to include the required `$` prefix before the
operators (otherwise there will be name collisions with fields); see the
sketch below
3. The change is non-breaking; old functionality is still the default,
but it will emit a deprecation warning
4. Previous functionality has bugs associated with comparisons due to
casting to text (so lexical ordering is used incorrectly for numeric
fields)
5. Adds a GIN index on the JSONB field for more efficient querying
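A minimal sketch of the new filter syntax (the `use_jsonb` flag and the `$` operator names are assumptions based on the description above, not confirmed API):

```python
from langchain_community.vectorstores import PGVector
from langchain_openai import OpenAIEmbeddings

store = PGVector(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",
    collection_name="my_docs",
    embedding_function=OpenAIEmbeddings(),
    use_jsonb=True,  # assumed opt-in flag for the new JSONB metadata column
)
# `$`-prefixed operators avoid collisions with metadata field names, and JSONB
# keeps numeric comparisons numeric instead of lexical.
docs = store.similarity_search(
    "vector search",
    filter={"year": {"$gte": 2021}, "source": {"$in": ["web", "pdf"]}},
)
```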
2024-03-14 16:56:00 -04:00
Bagatur
e276817e1d docs: fix vercel build script (#19090)
amazon linux 2023 doesn't have `amazon-linux-extras` but should have python3.9 by default
2024-03-14 20:53:43 +00:00
Guangdong Liu
d4b025c812 code[patch]: Add in code documentation to core Runnable assign method (docs only) (#18951)
**PR message**: ***Delete this entire checklist*** and replace with
- **Description:** [a description of the change](docs: Add in code
documentation to core Runnable assign method)
    - **Issue:** the issue  #18804
2024-03-14 15:41:19 -04:00
Anthony Yang
688a5bd106 docs:fixed typo in streaming document (#19045)
Fixed typo in line 661 - from 'mimimize' to 'minimize'.

2024-03-14 19:38:53 +00:00
Bagatur
573f48e34d core[patch]: Release 0.1.32 (#19088) 2024-03-14 12:01:58 -07:00
YHW
69a8ef2693 core: Runnable pass kwargs to _astream_log_implementation in astream_log (#19055)
- **Description:** When calling the `_stream_log_implementation` from
the `astream_log` method in the `Runnable` class, the `kwargs` argument
is not handed over. Therefore, even if I want to customize the APIHandler
and implement additional features with additional arguments, it is not
possible. Conversely, the `astream_events` method does hand over
the `kwargs` argument.
- **Issue:** https://github.com/langchain-ai/langchain/issues/19054
- **Dependencies:**
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!

Co-authored-by: hyungwookyang <hyungwookyang@worksmobile.com>
2024-03-14 14:39:46 -04:00
Nuno Campos
751fb7de20 Add new beta StructuredPrompt (#19080)
2024-03-14 10:40:34 -07:00
Bagatur
0ae39ab30e docs: make links internal (#19063)
So they can be properly link checked
2024-03-14 16:22:56 +00:00
Anton Parkhomenko
ae73b9d839 community[patch]: Fix NotionDBLoader 400 Error by conditionally adding filter parameter (#19075)
- **Description:** This change fixes a bug where attempts to load data
from Notion using the NotionDBLoader resulted in a 400 Bad Request
error. The issue was traced to the unconditional addition of an empty
'filter' object in the request payload, which Notion's API does not
accept. The modification ensures that the 'filter' object is only
included in the payload when it is explicitly provided and not empty,
thus preventing the 400 error from occurring.
- **Issue:** Fixes
[#18009](https://github.com/langchain-ai/langchain/issues/18009)
- **Dependencies:** None
- **Twitter handle:** @gunnzolder
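A minimal sketch of the conditional-filter fix (function and variable names here are hypothetical, not the loader's actual internals):

```python
from typing import Any, Dict, Optional

def build_query_payload(filter_object: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    """Hypothetical helper: build the Notion DB query payload."""
    payload: Dict[str, Any] = {"page_size": 100}
    # Only attach 'filter' when explicitly provided and non-empty; an empty
    # {} filter is what made Notion's API return 400 Bad Request.
    if filter_object:
        payload["filter"] = filter_object
    return payload
```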

Co-authored-by: Anton Parkhomenko <anton@merge.rocks>
2024-03-14 13:56:57 +00:00
Erick Friis
2999d06938 docs: deprecate old airbyte loader docs (#19048) 2024-03-13 23:18:30 +00:00
Prakul
4c53e31377 docs: Updated index definition and reference to LangChain-MongoDB (#19047)
**Description:** 
Updates to LangChain-MongoDB documentation: updates to the Atlas vector
search index definition

**Issue:** 
NA

**Dependencies:** 
NA

**Twitter handle:** 
iprakul
2024-03-13 15:44:13 -07:00
Erick Friis
5e0c58f9c2 infra: update upload-artifact and download-artifact to v4 (#19044) 2024-03-13 20:08:29 +00:00
Tomaz Bratanic
e5e15c8d59 docs: Add graph construction docs (#18904) 2024-03-13 12:27:58 -07:00
Nuno Campos
2b7c3c548d core[minor]: Add Runnable.batch_as_completed (#17603)
This PR adds `batch as completed` method to the standard Runnable
interface. It takes in a list of inputs and yields the corresponding
outputs as the inputs are completed.
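A hedged usage sketch (this assumes the method yields `(index, output)` pairs so callers can re-associate results with inputs; check the final signature):

```python
from langchain_core.runnables import RunnableLambda

doubler = RunnableLambda(lambda x: x * 2)

# Outputs are yielded as soon as each input finishes, not in input order.
for idx, output in doubler.batch_as_completed([1, 2, 3]):
    print(idx, output)
```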
2024-03-13 11:18:02 -07:00
Erick Friis
71d0981f18 templates: fix rag-lancedb dep (#19010) 2024-03-13 04:36:24 +00:00
Erick Friis
74b2c0aa01 templates, cli: more security deps (#19006) 2024-03-12 20:48:56 -07:00
Erick Friis
9052d05442 template: bump more lockfiles (#19003)
- templates: bump lockfile deps
2024-03-13 01:43:33 +00:00
Erick Friis
49f3cc0f6b templates: bump lockfile deps (#19001) 2024-03-13 01:25:45 +00:00
Erick Friis
2ffb2144a6 experimental[patch]: release 0.0.54 (#19000) 2024-03-13 00:38:46 +00:00
Erick Friis
873d06c009 langchain[patch]: release 0.1.12 (#18999) 2024-03-13 00:22:21 +00:00
Leonid Ganeline
9c8523b529 community[patch]: flattening imports 3 (#18939)
@eyurtsev
2024-03-12 15:18:54 -07:00
Erick Friis
af50f21765 community[patch]: release 0.0.28 (#18993) 2024-03-12 21:55:29 +00:00
Erick Friis
4881bb669c core[patch]: release 0.1.31 (#18989) 2024-03-12 19:45:21 +00:00
Erick Friis
a29e8d8594 elasticsearch[patch]: fix integration tests for release (#18980) 2024-03-12 10:22:07 -07:00
Erick Friis
0d1f6c417c elasticsearch[patch]: release 0.1.1 (#18978) 2024-03-12 16:46:22 +00:00
Max Jakob
911ccf9aa6 docs: elasticsearch retriever (#18965)
Add documentation notebook for `ElasticsearchRetriever`.

## Dependencies
- [ ] Release new `langchain-elasticsearch` version 0.2.0 that includes
`ElasticsearchRetriever`
2024-03-12 09:42:36 -07:00
Dobiichi-Origami
471f2ed40a community[patch]: re-arrange the additional_kwargs of returned qianfan structure to avoid _merge_dict issue (#18889)
fix issue: https://github.com/langchain-ai/langchain/issues/18441
PTAL, thanks
@baskaryan, @efriis, @eyurtsev, @hwchase17.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-12 05:43:56 +00:00
Naman Jain
75122646b5 core[patch]: fixed circular dependency with json schema (#18657)
**Description:** Circular dependencies when parsing references lead to a
`RecursionError: maximum recursion depth exceeded` issue. This PR
addresses the issue by handling previously seen refs, as in any typical DFS,
to avoid infinite depths.

**Issue:** https://github.com/langchain-ai/langchain/issues/12163

 **Twitter handle:** https://twitter.com/theBhulawat 



---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-12 05:42:45 +00:00
Tymofii
0bec1f6877 community[patch]: refactor code for faiss vectorstore, update faiss vectorstore documentation (#18092)
**Description:** Refactor code of the FAISS vectorstore and update the
related documentation.
Details:
 - replace `.format()` with f-strings for string formatting;
- refactor definition of a filtering function to make code more readable
and more flexible;
- slightly improve efficiency of
`max_marginal_relevance_search_with_score_by_vector` method by removing
unnecessary looping over the same elements;
- slightly improve efficiency of `delete` method by using set data
structure for checking if the element was already deleted;

**Issue:** fix a small inconsistency in the documentation (the old example
was incorrect and inapplicable to the faiss vectorstore)

**Dependencies:** basic langchain-community dependencies and `faiss`
(for CPU or for GPU)

**Twitter handle:** antonenkodev
2024-03-11 22:33:03 -07:00
Roshan Santhosh
acf1ecc081 langchain[patch]: update llm_router.py (#18865)
Issue: the `_call` method of LLMRouterChain uses `predict_and_parse`, which is
slated for deprecation.

Description: instead of using `predict_and_parse`, this replaces it with
individual `predict` and `parse` calls.
2024-03-11 22:30:07 -07:00
Bagatur
18de77cc8c core[minor]: add streaming support to OAI tool parsers (#18940)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-11 21:53:56 -07:00
Bagatur
e0e688a277 core[minor]: generation info on msg (#18592)
related to #16403 #17188
2024-03-12 04:43:17 +00:00
Tomaz Bratanic
cda43c5a11 experimental[patch]: Fix LLM graph transformer default prompt (#18856)
Some LLMs do not allow multiple user messages in sequence.
2024-03-11 20:11:52 -07:00
Bagatur
19721246f5 core[patch]: support labeled json schema as tools (#18935) 2024-03-11 19:51:35 -07:00
Jacob Lee
950ab056eb templates[patch]: Update pirate-speak deps, add messages placeholder (#18949)
CC @efriis
2024-03-11 19:20:30 -07:00
Leonid Ganeline
fad308a764 docs: providers update 2 (#18407)
Formatted pages into a consistent form. Added descriptions and links
when needed.
2024-03-11 18:35:37 -07:00
Erick Friis
239f0a615e templates: redis multi-modal multi-vector rag (#18946)
---------

Co-authored-by: Tyler Hutcherson <tyler.hutcherson@redis.com>
2024-03-12 00:32:25 +00:00
Bagatur
915c1f8673 infra: rm api build CI (#18944) 2024-03-11 16:12:34 -07:00
Brace Sproul
578e67c017 docs[patch]: properly load/use env vars (#18942) 2024-03-11 15:38:05 -07:00
Erick Friis
0d888a65cb core[patch]: move some attr/methods to BaseLanguageModel (#18936)
Cleans up some shared code between `BaseLLM` and `BaseChatModel`. One
functional difference to make it more consistent (see comment)
2024-03-11 14:59:45 -07:00
Brace Sproul
4ff6aa5c78 docs[minor]: Swap gtag for supabase (#18937)
Added deps:
- `@supabase/supabase-js` - for sending inserts
- `supabase` - dev dep, for generating types via cli
- `dotenv` for loading env vars

Added script:
- `yarn gen` - will auto generate the database schema types using the
supabase CLI. Not necessary for development, but is useful. Requires
authing with the supabase CLI (will error out w/ instructions if you're
not authed).

Added functionality:
- pulls the user's IP address (using a free endpoint, `https://api.ipify.org`,
so we can filter out abuse down the line)

TODO:
- [x] add env vars to vercel
2024-03-11 14:23:12 -07:00
aditya thomas
5c2f7e6b2b partners[openai]: update the docstring of OpenAI, OpenAIEmbeddings and ChatOpenAI classes (#18908)
**Description:** Update the docstring of OpenAI, OpenAIEmbeddings and
ChatOpenAI classes
**Issue:** Update import module paths to the current LangChain API
**Dependencies:** None
**Lint and test**: `make format` and `make lint` were run

This incorporates the review comments from langchain-ai/langchain#18637
which I closed due to an issue I had in updating that pr branch

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-11 20:48:54 +00:00
Leonid Ganeline
11195cfa42 community[patch]: speed up import times in the community package (#18928)
This PR speeds up import times in the community package
2024-03-11 16:37:36 -04:00
fjk
a7fc731720 docs: change sparkllm spark_app_url to spark_api_url (#18000)
community: fix - change sparkllm spark_app_url to spark_api_url

- **Description:** Change the variable name from `spark_app_url` to
`spark_api_url` in the community package.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-11 20:01:30 +00:00
Sevin F. Varoglu
8639624d40 docs: update OctoAI doc (#18913)
This PR updates the OctoAI LLM doc.
2024-03-11 13:01:10 -07:00
Alexander Kozlov
a7500ab0fb docs: Update huggingface pipelines notebook (#18801) 2024-03-11 20:00:31 +00:00
Conroy Whitney
96d7fe0f85 docs: Change saved/configured chain variable name (#18863)
**Description:**
Variable name was `openai_poem` but it didn't pass in the `"prompt":
"poem"` config, so the examples were showing a joke being returned from
a variable called `*_poem`.

We could have gone one of two ways:

1. Updating the config line and the output line, or
2. Updating the variable name

The latter seemed simpler, so that's what I went with. But I'd be glad
to re-do this PR if you prefer the former.

Thanks for everything, y'all. You rock 🤘

**Issue:** N/A

**Dependencies:** N/A

**Twitter handle:** `conroywhitney`
2024-03-11 12:59:24 -07:00
aditya thomas
8544f748f2 community[patch]: update AnthropicLLM deprecation message (#18869)
**Description:** Update AnthropicLLM deprecation message import path for
ChatAnthropic
**Issue:** Incorrect import path in deprecation message
**Dependencies:** None
**Lint and test**: `make format`, `make lint` and `make test` were run
2024-03-11 12:59:10 -07:00
Virat Singh
cafffe8a21 community: Add PolygonAggregates tool (#18882)
**Description:**
In this PR, I am adding a `PolygonAggregates` tool, which can be used to
get historical stock price data (called aggregates by Polygon) for a
given ticker.

Polygon
[docs](https://polygon.io/docs/stocks/get_v2_aggs_ticker__stocksticker__range__multiplier___timespan___from___to)
for this endpoint.

**Twitter**: 
[@virattt](https://twitter.com/virattt)
2024-03-11 11:58:10 -07:00
Bagatur
2d172181e0 Revert "update api build script (#18930)" (#18931) 2024-03-11 11:47:18 -07:00
Bagatur
def329b5f2 update api build script (#18930) 2024-03-11 11:44:37 -07:00
Bagatur
c24c871d88 docs: update readme diagram (#18929) 2024-03-11 11:17:45 -07:00
Bagatur
34284c25d4 docs: turn on link check (#18924) 2024-03-11 10:50:39 -07:00
Erick Friis
93ef8ead0b mongodb[patch]: fix core dep (#18926) 2024-03-11 10:27:29 -07:00
Mohammad Mohtashim
43db4cd20e core[major]: On Tool End Observation Casting Fix (#18798)
This PR updates the on_tool_end handlers to return the raw output from the tool instead of casting it to a string.

This is technically a breaking change, though its impact is expected to be somewhat minimal. It will fix behavior in `astream_events` as well.

Fixes the following issue #18760 raised by @eyurtsev

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-11 10:59:04 -04:00
Prashanth Rao
a96a6e0f2c docs: Fix typo and add KùzuDB to graphs docs (#18915)
- **Description:** Adding Kùzu (an embedded graph DB that uses Cypher)
to the graph docs, and fixing a typo
 - **Issue:** docs update
2024-03-11 14:42:46 +00:00
aditya thomas
3d15498612 docs: Update callbacks documentation (#18899)
**Description:** Update callbacks documentation
**Issue:** Change some module imports and a method invocation to reflect
the current LangChainAPI
**Dependencies:** None
2024-03-11 10:40:11 -04:00
Massimiliano Pronesti
8113d612bb community[patch]: support modin document loader (#18866)
Langchain community document loaders support `pyspark`, `polars`, and
`pandas` dataframes but not `modin`'s. This PR addresses this point.
2024-03-10 18:40:04 -07:00
Leonid Ganeline
dee256ef5a docs: platforms/google fixed broken links (#18878)
Several links are broken. Fixed them.
2024-03-10 18:19:43 -07:00
Pol Ruiz Farre
a7f63d8cb4 community[patch]: Fix BasePDFLoader suffix for s3 presigned urls (#18844)
BasePDFLoader doesn't parse the suffix of the file correctly when
parsing S3 presigned urls. This fix enables the proper detection and
parsing of S3 presigned URLs to prevent errors such as `OSError: [Errno
36] File name too long`.
No additional dependencies required.
2024-03-11 00:58:51 +00:00
Joshua Carroll
ddaf9de169 community: Fix bug with StreamlitChatMessageHistory (#18834)
- **Description:** Fix Streamlit bug which was introduced by
https://github.com/langchain-ai/langchain/pull/18250, update integration
test
- **Issue:** https://github.com/langchain-ai/langchain/issues/18684
- **Dependencies:** None
2024-03-09 13:42:22 -08:00
Kushagra
5fcbe9dd2a community[patch]: documented the feature to filter documents in MongoDBloader (#18842)
"community[docs]: documented the feature to filter documents in
MongoDBloader"
- Description: documented the feature to filter documents in
MongoDBloader
- Feature: the feature
https://github.com/langchain-ai/langchain/discussions/18251
- Dependencies: No
- Twitter handle: https://twitter.com/im_Kushagra
2024-03-09 13:41:34 -08:00
Ikko Eltociear Ashimine
c3580d3c64 docs: fix typo in google_cloud_sql_mysql.ipynb (#18847)
arbitary -> arbitrary
2024-03-09 13:39:36 -08:00
Luan Fernandes
5a006f7264 docs: update typo in docs about agent tools (#18850)
fixes #18849
2024-03-09 13:39:18 -08:00
Leonid Ganeline
3dabd3f214 docs: platform pages update (#17836)
`Integrations` platform page ToCs: sections there are placed without
order. For example, on the
[google](https://python.langchain.com/docs/integrations/platforms/google)
page, the `LLM` section is not the first section, as it is in the
[Components](https://python.langchain.com/docs/integrations/components)
menu.
Updates:
* reorganized the page sections so they follow the Components menu order.
* fixed section names: "Text Embedding Models" -> "Embedding Models"
2024-03-09 13:34:33 -08:00
Leonid Ganeline
07c518ad3e docs: providers update 4 (#18540)
Created the `facebook` page from the `facebook_faiss` and `facebook_chat`
pages. Added other Facebook integrations to this page.
Updated the `discord` page.
2024-03-09 13:30:48 -08:00
Leonid Ganeline
9c0f84ae95 docs: providers update 6 (#18610)
Cleaned up the `Integrations/Components/Memory` navbar by shortening the
page titles. Updated page titles and file names to consistent formats.
2024-03-09 13:29:44 -08:00
Tomaz Bratanic
a28be31a96 Switch to md5 for deduplication in neo4j integrations (#18846)
Deduplicate documents using an MD5 hash of the page_content. Also allows for
custom deduplication with the graph ingestion method by providing a metadata
id attribute

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-03-09 13:28:55 -08:00
Tomaz Bratanic
246724faab LLM graph transformer prompt engineering (#18843)
A bit of prompt engineering to improve results
2024-03-09 11:27:16 -08:00
Tomaz Bratanic
e778d60aec Fix broken link in graph docs (#18837) 2024-03-09 10:40:33 -08:00
Erick Friis
b48865bf94 langchain[patch]: attach hub metadata (#18830) 2024-03-08 18:40:49 -08:00
Ammar
34b31a8cc7 core: add in-code docs for RunnableAssign class (#18826)
**Description:** Improves the docstring for `RunnableAssign` by
providing a concise description and a self-contained code example.
  **Issue:**  #18803
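For illustration, a self-contained example of the kind added there (a sketch assuming `RunnableAssign` is constructed via `RunnablePassthrough.assign`, as is conventional):

```python
from langchain_core.runnables import RunnablePassthrough

# RunnableAssign merges computed keys into the input dict it passes through.
chain = RunnablePassthrough.assign(total=lambda d: d["a"] + d["b"])
print(chain.invoke({"a": 1, "b": 2}))  # {'a': 1, 'b': 2, 'total': 3}
```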
2024-03-09 02:04:52 +00:00
Leonid Ganeline
5d65b47e41 docs: chat menu item as icon (#18806)
Update chat icon in docs
2024-03-08 21:00:21 -05:00
Leonid Ganeline
476d6dc596 community[patch]: Use getattr for toolkits imports (#18825)
This will preserve the namespace, without actually loading the underlying packages on init.
2024-03-08 20:54:28 -05:00
Erick Friis
bbb609ac9d core[patch]: fix arbitrary config keys (#18827) 2024-03-08 17:35:13 -08:00
Luis Antonio Vieira Junior
67c880af74 community[patch]: adding linearization config to AmazonTextractPDFLoader (#17489)
- **Description:** Adding an optional parameter `linearization_config`
to the `AmazonTextractPDFLoader` so the caller can define how the output
will be linearized, instead of forcing a predefined set of linearization
configs. It will still have a default configuration as this will be an
optional parameter.
- **Issue:** #17457
- **Dependencies:** The same ones that already exist for
`AmazonTextractPDFLoader`
- **Twitter handle:** [@lvieirajr19](https://twitter.com/lvieirajr19)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-08 17:25:22 -08:00
Anis ZAKARI
37e89ba5b1 community[patch]: Bedrock add support for mistral models (#18756)
- **Description:** My previous
[PR](https://github.com/langchain-ai/langchain/pull/18521) was
mistakenly closed, so I am reopening this one. Context: AWS released two
Mistral models on Bedrock last Friday (March 1, 2024). This PR includes
some code adjustments to ensure their compatibility with the Bedrock
class.

---------

Co-authored-by: Anis ZAKARI <anis.zakari@hymaia.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-09 01:20:38 +00:00
Alexander Dicke
66576948e0 experimental[minor]: adds mixtral wrapper (#17423)
**Description:** Adds a chat wrapper for Mixtral models using the
[prompt
template](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1#instruction-format).

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-08 17:14:23 -08:00
Erick Friis
4f4300723b docs: pinecone client version note (#17491) 2024-03-08 17:09:17 -08:00
Keith Chan
914af69b44 community[patch]: Update azuresearch vectorstore from_texts() method to include fields argument (#17661)
- **Description:** Update azuresearch vectorstore from_texts() method to
include fields argument, necessary for creating an Azure AI Search index
with custom fields.
- **Issue:** Currently index fields are fixed to default fields if Azure
Search index is created using from_texts() method
- **Dependencies:** None
- **Twitter handle:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-08 17:05:35 -08:00
al1p
46f0cea2b9 community[patch]: improved the suffix prompt to avoid loop (#17791)
Small improvement to the openapi prompt.
The agent was not finding the server base URL (looping through all
nodes). This small change narrows the search and enables finding the url
faster.

No dependency 

Twitter : @al1pra
2024-03-08 16:53:09 -08:00
Dmitry Kankalovich
f5117e907d openai[patch]: Proper example for AzureOpenAI usage in error message (#17798)
# Proper example for AzureOpenAI usage in error message

The original error message gives a wrong usage example. Corrected it to the
right one.

Co-authored-by: Dzmitry Kankalovich <dzmitry_kankalovich@epam.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-08 16:52:55 -08:00
Pranav Agarwal
bd9b5dc2f3 docs: Updating cookbook README for amazon personalize (#17854)
This PR is a successor to this PR -
https://github.com/langchain-ai/langchain/pull/17436
This PR updates the cookbook README with the notebook so that it is
available on langchain docs for discoverability.

cc: @baskaryan, @3coins

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-08 16:52:36 -08:00
AtomicVar
23e62f8f8d docs: fix lists display issue (#17911)
**Description:** Fix lists display issues in **Docs > Use Cases > Q&A
with RAG > Quickstart**.

In essence, this PR changes:

```markdown
Some paragraph.
- Item a.
- Item b.
```

to:

```markdown
Some paragraph.

- Item a.
- Item b.
```

An extra empty line is needed for the list to render properly.

FYI, the old version was displayed improperly as:

<img width="856" alt="image"
src="https://github.com/langchain-ai/langchain/assets/22856433/65202577-8ea2-47c6-b310-39bf42796fac">


Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-08 16:52:16 -08:00
Théo LEBRUN
cf94091cd0 community[patch]: Skip nested directories when using S3DirectoryLoader (#17829)
- **Description:** `S3DirectoryLoader` fails if the prefix is a folder
(e.g. `my_folder/`) because `S3FileLoader` will try to load that folder
itself and fail. This PR skips nested directories so the prefix can be set
to a folder instead of `my_folder/files_prefix`.
- **Issue:**
  - #11917
  - #6535
  - #4326
- **Dependencies:** none
- **Twitter handle:** @Falydoor


2024-03-08 16:50:58 -08:00
Venkatesan
7a18b63dbf community[patch]: Mongo index creation (#17748)
- **Title:** MongoDB connection performance improvement.
- **Description:** made collection index creation optional, since index
creation is a one-time process.
- **Issue:** the MongoDBChatMessageHistory class object attempts to
create an index during connection, causing each request to take longer
than usual. This should be optional with a parameter.
- **Dependencies:** N/A
- **Branch to be checked:** origin/mongo_index_creation

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-08 16:43:17 -08:00
wt3639
5b5b37a999 community[patch]: Add embedding instruction to HuggingFaceBgeEmbeddings (#18017)
- **Description:** Add an embedding instruction to
HuggingFaceBgeEmbeddings, so that it is compatible with nomic and
other models that need an embedding instruction.
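A hedged usage sketch (the `embed_instruction`/`query_instruction` parameter names and the nomic-style prefixes are assumptions based on this description):

```python
from langchain_community.embeddings import HuggingFaceBgeEmbeddings

embeddings = HuggingFaceBgeEmbeddings(
    model_name="nomic-ai/nomic-embed-text-v1",  # example instruction-style model
    model_kwargs={"device": "cpu", "trust_remote_code": True},
    # Instruction prefixes that nomic-style models expect:
    query_instruction="search_query: ",
    embed_instruction="search_document: ",
)
vector = embeddings.embed_query("What is LangChain?")
```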

---------

Co-authored-by: Tao Wu <tao.wu@rwth-aachen.de>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-08 16:39:29 -08:00
Brace Sproul
9c218d0154 docs[patch]: Update how GA4 is collected (#18821)
There's some issue/setting with the current python GA4 app. I created a
new one just for feedback.
2024-03-08 14:32:40 -08:00
Erick Friis
a8de6d1533 anthropic[patch]: integration test update (#18823) 2024-03-08 13:47:31 -08:00
wewebber-merlin
d1f5bc4906 anthropic[patch]: add kwargs to format_output base (#18715)
_generate() and _agenerate() both accept **kwargs, then pass them on to
_format_output; but _format_output doesn't accept **kwargs. Attempting
to pass, e.g.,

     timeout=50

to _generate (or invoke()) results in a TypeError.


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-08 21:47:21 +00:00
Erick Friis
aa7bce6b13 anthropic[patch]: release 0.1.4 (#18822) 2024-03-08 21:34:47 +00:00
Erick Friis
a5bcddc738 anthropic[patch]: streaming param (#18819) 2024-03-08 13:32:57 -08:00
Erick Friis
8c0b215c02 anthropic[patch]: fix format output args (#18816) 2024-03-08 12:34:11 -08:00
Ishani Vyas
2b0cbd65ba community[patch]: Add Passio Nutrition AI Food Search Tool to Community Package (#18278)
## Add Passio Nutrition AI Food Search Tool to Community Package

### Description
We propose adding a new tool to the `community` package, enabling
integration with Passio Nutrition AI for food search functionality. This
tool will provide a simple interface for retrieving nutrition facts
through the Passio Nutrition AI API, simplifying user access to
nutrition data based on food search queries.

### Implementation Details
- **Class Structure:** Implement `NutritionAI`, extending `BaseTool`. It
includes an `_run` method that accepts a query string and, optionally, a
`CallbackManagerForToolRun`.
- **API Integration:** Use `NutritionAIAPI` for the API wrapper,
encapsulating all interactions with the Passio Nutrition AI and
providing a clean API interface.
- **Error Handling:** Implement comprehensive error handling for API
request failures.
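A minimal sketch of that class structure (a simplified illustration, not the shipped implementation; `api_wrapper` stands in for the `NutritionAIAPI` wrapper):

```python
from typing import Any, Optional

from langchain_core.callbacks import CallbackManagerForToolRun
from langchain_core.tools import BaseTool

class NutritionAI(BaseTool):
    """Sketch: look up nutrition facts via the Passio Nutrition AI API."""

    name: str = "nutritionai_search"
    description: str = "Look up nutrition facts for a food search query."
    api_wrapper: Any  # stands in for the NutritionAIAPI wrapper

    def _run(
        self, query: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        # Delegate the search to the API wrapper; error handling omitted here.
        return self.api_wrapper.run(query)
```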

### Expected Outcome
- **User Benefits:** Enable easy querying of nutrition facts from Passio
Nutrition AI, enhancing the utility of the `langchain_community` package
for nutrition-related projects.
- **Functionality:** Provide a straightforward method for integrating
nutrition information retrieval into users' applications.

### Dependencies
- `langchain_core` for base tooling support
- `pydantic` for data validation and settings management
- Consider `requests` or another HTTP client library if not covered by
`NutritionAIAPI`.

### Tests and Documentation
- **Unit Tests:** Include tests that mock network interactions to ensure
tool reliability without external API dependency.
- **Documentation:** Create an example notebook in
`docs/docs/integrations/tools/passio_nutrition_ai.ipynb` showing usage,
setup, and example queries.

### Contribution Guidelines Compliance
- Adhere to the project's linting and formatting standards (`make
format`, `make lint`, `make test`).
- Ensure compliance with LangChain's contribution guidelines,
particularly around dependency management and package modifications.

### Additional Notes
- Aim for the tool to be a lightweight, focused addition, not
introducing significant new dependencies or complexity.
- Potential future enhancements could include caching for common queries
to improve performance.

### Twitter Handle
- Here is our Passio AI [twitter handle](https://twitter.com/@passio_ai)
where we announce our products.


2024-03-08 20:33:22 +00:00
Aaron Jimenez
bd9f98a20b docs: Fix typo in modules/chains.ipynb (#18808)
**Description:**  

Fix a minor typo in `modules/chains.ipynb`.
 
- **Issue:** 
    fixes #17851
2024-03-08 12:09:20 -08:00
Kushagra
b1f22bf76c community[minor]: added a feature to filter documents in Mongoloader (#18253)
"community: added a feature to filter documents in Mongoloader"
- **Description:** added a feature to filter documents in Mongoloader
    - **Feature:** the feature #18251
    - **Dependencies:** No
    - **Twitter handle:** https://twitter.com/im_Kushagra
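A hedged usage sketch (assuming the filter is passed as a `filter_criteria` dict; see the linked discussion for the actual parameter):

```python
from langchain_community.document_loaders.mongodb import MongodbLoader

loader = MongodbLoader(
    connection_string="mongodb://localhost:27017/",
    db_name="sample_restaurants",
    collection_name="restaurants",
    filter_criteria={"borough": "Bronx"},  # only load matching documents
)
docs = loader.load()
```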
2024-03-08 12:06:35 -08:00
Tomaz Bratanic
c0bdd4d45b docs: Add main graph documentation (#18021)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-08 20:03:03 +00:00
Leonid Ganeline
7c8c4e5743 docs: providers update 7 (#18620)
Added missed providers. Added missed integrations. Formatted to a
consistent form. Fixed outdated imports.
2024-03-08 12:00:27 -08:00
Eugene Yurtsev
1f50274df7 community[patch]: Add pgvector to docker compose and update settings used in integration test (#18815) 2024-03-08 14:39:28 -05:00
Erick Friis
ad29806255 nvidia-trt, nvidia-ai-endpoints: move to repo (#18814)
NVIDIA maintained in https://github.com/langchain-ai/langchain-nvidia
2024-03-08 19:30:50 +00:00
Christophe Bornet
e54a49b697 community[minor]: Add lazy_table_reflection param to SqlDatabase (#18742)
For some DBs with lots of tables, reflection of all the tables can take
very long. So this change will make the tables be reflected lazily when
get_table_info() is called and `lazy_table_reflection` is True.
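A hedged usage sketch (assuming the flag is forwarded through `SQLDatabase.from_uri`):

```python
from langchain_community.utilities import SQLDatabase

# With lazy reflection, a table is only reflected when get_table_info()
# first needs it, instead of reflecting everything at connection time.
db = SQLDatabase.from_uri(
    "postgresql+psycopg2://user:pass@localhost:5432/bigdb",  # placeholder URI
    lazy_table_reflection=True,
)
print(db.get_table_info(["users"]))  # reflects just this table on demand
```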
2024-03-08 14:10:23 -05:00
Christophe Bornet
ead2a74806 community: Implement lazy_load() for JSONLoader (#18643)
Covered by `tests/unit_tests/document_loaders/test_json_loader.py`
2024-03-08 13:58:17 -05:00
Erick Friis
a88f62ec3c langchain[patch]: getattr import from langchain.chains (#18160) 2024-03-08 10:36:14 -08:00
kAIto47802
ff70cc4e80 docs: fix typo (#18810)
Fixed typo in docs
2024-03-08 13:28:17 -05:00
Eugene Yurtsev
cdfb5b4ca1 core[minor]: Chat Models to fallback astream to fallback on sync stream if available (#18748)
Allows all chat models that implement `_stream`, but not `_astream`, to still support async streaming.

Amongst other things this should resolve issues with streaming community model implementations through langserve, since langserve is exclusively async.
2024-03-08 13:27:29 -05:00
Leonid Ganeline
3624f56ccb docs: update imports of retrievers to use langchain_community (#18707)
Updated `langchain` imports to `langchain_community`.
2024-03-08 13:04:38 -05:00
Leonid Ganeline
48eed86931 docs: update imports of memory to use langchain_community (#18689)
Refactored imports from `langchain` to `langchain_community` whenever it
is applicable
2024-03-08 13:02:31 -05:00
aditya thomas
e00c1ff2b0 infra: ChatOpenAI unit tests for invoke() and ainvoke() (#18792)
**Description:** Replacing the deprecated predict() and apredict()
methods in the unit tests
**Issue:** Not applicable
**Dependencies:** None
**Lint and test**: `make format`, `make lint` and `make test` have been
run
2024-03-08 09:48:38 -08:00
aditya thomas
a35203b164 docs: (minor) update to anthropic doc (#18794)
**Description:** Minor update to Anthropic documentation
**Issue:** Not applicable
**Dependencies:** None
**Lint and test**: `make format` and `make lint` was done
2024-03-08 09:48:04 -08:00
Bagatur
3e29c04213 core[minor]: add BaseMessage.response_metadata (#18699) 2024-03-08 09:35:56 -08:00
standby24x7
67d48ea600 docs:Update function "run" to "invoke" in llm_bash.ipynb (#18663)
This patch updates the function "run" to "invoke" in llm_bash.ipynb.
Without this patch, you see the following warning.

LangChainDeprecationWarning: The function `run` was deprecated in
LangChain 0.1.0
and will be removed in 0.2.0. Use invoke instead.

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2024-03-08 09:35:36 -08:00
Bagatur
bc6249c889 langchain[patch]: runnable agent streaming param (#18761)
Usage:

```python
agent = RunnableAgent(runnable=runnable, .., stream_runnable=False)
```
or for convenience
```python
agent_executor = AgentExecutor(agent=agent, ..., stream_runnable=False)
```
2024-03-07 20:53:53 -08:00
Tomaz Bratanic
c8c592d3f1 experimental[minor]: Add LLM graph transformer (#18733)
Add a class that constructs knowledge graphs based on text using an LLM.
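A hedged usage sketch (class and entry point taken from the PR title; exact signatures may differ):

```python
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0, model="gpt-4")
transformer = LLMGraphTransformer(llm=llm)

docs = [Document(page_content="Marie Curie won the Nobel Prize in Physics.")]
# Extract nodes and relationships from free text into graph documents.
graph_documents = transformer.convert_to_graph_documents(docs)
print(graph_documents[0].nodes, graph_documents[0].relationships)
```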
2024-03-07 20:52:53 -08:00
Phat Vo
3ecb903d49 community[patch] : Tidy up and update Clarifai SDK functions (#18314)
Description:
* Tidy up, add missing docstring and fix unused params
* Enable using session token
2024-03-07 19:47:44 -08:00
Paul Sanders
93b87f2bfb docs: Fix typo (#18545)
Fixing a minor typo in the package name.

2024-03-07 19:40:42 -08:00
Aaron Jimenez
fcf6213c22 docs: Fix link to HF TEI in text_embeddings_inference.ipynb (#18682)
- **Description:** Fix the link to [Hugging Face Text Embeddings
Inference
(TEI)](https://huggingface.co/docs/text-embeddings-inference/index) in
text_embeddings_inference.ipynb
- **Issue:** Fix #18576
2024-03-07 19:38:39 -08:00
Max Jakob
61a2eba081 elasticsearch[patch]: add top-level import, remove obsolete dependency (#18644)
Make `ElasticsearchRetriever` available as top-level import.

The `langchain` package depends on `langchain-community` so we do not
need to depend on it explicitly.
2024-03-07 19:38:31 -08:00
Averi Kitsch
8accee57a9 docs: update Google Cloud database integration docs (#18711)
**Description:** update Google Cloud database integration docs
 **Issue:** NA
**Dependencies:** NA
2024-03-07 19:36:00 -08:00
Tomaz Bratanic
010a234f1e docs: Fix diffbot graph transformer description (#18736)
The previous docstring was invalid
2024-03-07 19:25:41 -08:00
Jan Nissen
b8922480ed core[patch]: improve PydanticOutputParser typing (#18740)
This PR adds generic typing to `PydanticOutputParser` so we get a typed
output from `.parse` instead of `Any`. It should provide a better DX by
way of Intellisense and for anyone strictly typing.

Pre-change:

![Screenshot 2024-03-07 at 10 22
31 AM](https://github.com/langchain-ai/langchain/assets/22690160/fd22dde0-9fdc-4283-b283-4c98f0bc46e5)

Post-change:

![Screenshot 2024-03-07 at 10 26
31 AM](https://github.com/langchain-ai/langchain/assets/22690160/7e23d2b7-8f8c-494f-80b3-187530a173ee)

I haven't dug too deep, but I think a similar change could probably be
added to `JsonOutputParser` so we don't have to pull up `.parse`.
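A small illustration of the improved typing (assuming the generic parameter flows through `parse` as the screenshots show):

```python
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.pydantic_v1 import BaseModel

class Joke(BaseModel):
    setup: str
    punchline: str

parser = PydanticOutputParser(pydantic_object=Joke)
# With the generic typing, `joke` is inferred as `Joke`, not `Any`.
joke = parser.parse('{"setup": "Why?", "punchline": "Because."}')
print(joke.punchline)
```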

Co-authored-by: Jan Nissen <jan23@gmail.com>
2024-03-07 19:25:24 -08:00
Massimiliano Pronesti
3b975c6ebe experimental[minor]: add support for modin in pandas agent (#18749)
Added support for Intel's
[modin](https://github.com/modin-project/modin) in
`create_pandas_dataframe_agent`.
2024-03-07 19:23:07 -08:00
Tomaz Bratanic
4bfe888717 community[patch]: Fix neo4j sanitizing values (#18750)
Fixing sanitization for when deeply nested lists appear
2024-03-07 19:21:52 -08:00
Ian
7f504c1f81 docs: Improve the tidb vector store notebook (#18773)
Remove redundant content and fix some minor oversights
2024-03-07 19:15:55 -08:00
Eugene Yurtsev
6caceb5473 core[patch]: Automatic upgrade to AddableDict in transform and atransform (#18743)
Automatic upgrade to transform and atransform

Closes: 

https://github.com/langchain-ai/langchain/issues/18741
https://github.com/langchain-ai/langgraph/issues/136
https://github.com/langchain-ai/langserve/issues/504
2024-03-07 21:23:12 -05:00
Yunmo Koo
fee6f983ef community[minor]: Integration for Friendli LLM and ChatFriendli ChatModel. (#17913)
## Description
- Add [Friendli](https://friendli.ai/) integration for `Friendli` LLM
and `ChatFriendli` chat model.
- Unit tests and integration tests corresponding to this change are
added.
- Documentation corresponding to this change is added.

## Dependencies
- The optional dependency
[`friendli-client`](https://pypi.org/project/friendli-client/) package
is added, only for those who use the `Friendli` or `ChatFriendli` model.

## Twitter handle
- https://twitter.com/friendliai
2024-03-08 02:20:47 +00:00
Smit Parmar
aed46cd6f2 community[patch]: Added support for filter out AWS Kendra search by score confidence (#12920)
**Description:** Adds support for filtering Amazon Kendra search results by
score confidence, which makes results more accurate.
For example:
```
retriever = AmazonKendraRetriever(
    index_id=kendra_index_id, top_k=5, region_name=region,
    score_confidence="HIGH"
)
```
Results will not include records whose score confidence is "LOW" or "MEDIUM".
Relevant docs 
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kendra/client/query.html
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/kendra/client/retrieve.html

**Issue:** resolves #11801
**twitter:** [@SmitCode](https://twitter.com/SmitCode)
2024-03-07 17:28:09 -08:00
Ian
390ef6abe3 community[minor]: Add Initial Support for TiDB Vector Store (#15796)
This pull request introduces initial support for the TiDB vector store.
The current version is basic, laying the foundation for the vector store
integration. While this implementation provides the essential features,
we plan to expand and improve the TiDB vector store support with
additional enhancements in future updates.

Upcoming Enhancements:
* Support for Vector Index Creation: To enhance the efficiency and
performance of the vector store.
* Support for max marginal relevance search. 
* Customized Table Structure Support: Recognizing the need for
flexibility, we plan for more tailored and efficient data store
solutions.

Simple use case example:

```python
from typing import List, Tuple
from langchain.docstore.document import Document
from langchain_community.vectorstores import TiDBVectorStore
from langchain_openai import OpenAIEmbeddings

# Define the embedding function and connection URL used below.
embeddings = OpenAIEmbeddings()
tidb_connection_url = "mysql+pymysql://user:pass@host:4000/test"  # placeholder

db = TiDBVectorStore.from_texts(
    embedding=embeddings,
    texts=['Andrew like eating oranges', 'Alexandra is from England', 'Ketanji Brown Jackson is a judge'],
    table_name="tidb_vector_langchain",
    connection_string=tidb_connection_url,
    distance_strategy="cosine",
)

query = "Can you tell me about Alexandra?"
docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)
for doc, score in docs_with_score:
    print("-" * 80)
    print("Score: ", score)
    print(doc.page_content)
    print("-" * 80)
```
2024-03-07 17:18:20 -08:00
Bagatur
3b1eb1f828 community[patch]: chat hf typing fix (#18693) 2024-03-07 17:06:38 -08:00
Eugene Yurtsev
1e1cac50d8 Docs: remove sales from security (#18762)
Remove sales from security
2024-03-07 17:35:46 -05:00
Jib
d60e93b6ae langchain-mongodb: Standardize mongodb collection/index names in tests (#18755)
## **Description:**
MongoDB integration tests link to a provided Atlas Cluster. We have very
stringent permissions set against the provided cluster. In order to make
it easier to track and isolate the collections each test gets run
against, we've updated the collection names to map to the test file name,
i.e. `langchain_{filename}` => `langchain_test_vectorstores`

Fixes integration test results

![image](https://github.com/langchain-ai/langchain/assets/2887713/41f911b9-55f7-4fe4-9134-5514b82009f9)

## **Dependencies:** 
Provided MONGODB_ATLAS_URI


cc: @shaneharvey, @blink1073 , @NoahStapp , @caseyclements
2024-03-07 17:16:04 -05:00
Eugene Yurtsev
ca299a8e08 Docs: Add custom parsing documentation and extending langchain (#18331)
* Added extending langchain.mdx -- we'll need to add links as we add
more custom documentation
* Added partial documentation about parsers
2024-03-07 16:30:57 -05:00
Eugene Yurtsev
8c71f92cb2 core: upgrade mypy to recent mypy (#18753)
Testing this works per package on CI
2024-03-07 15:25:19 -05:00
Eugene Yurtsev
e188d4ecb0 Add dangerous parameter to requests tool (#18697)
The tools are already documented as dangerous. Not clear whether adding
an opt-in parameter is necessary or not
2024-03-07 15:10:56 -05:00
Leonid Ganeline
dad949eb99 docs: update imports of adapters to use langchain_community (#18751)
Updated imports from `langchain` to `langchain_community`
2024-03-07 15:04:25 -05:00
Erick Friis
fcaa9cf2f1 community[patch]: deprecate community anthropic (#18745) 2024-03-07 13:51:55 -05:00
Erick Friis
1beb84b061 community[patch]: move pdf text tests to integration (#18746) 2024-03-07 10:34:22 -08:00
Christophe Bornet
4a7d73b39d community: If load() has been overridden, use it in default lazy_load() (#18690) 2024-03-07 11:52:19 -05:00
Christophe Bornet
6cd7607816 community[patch]: Implement lazy_load() for MHTMLLoader (#18648)
Covered by `tests/unit_tests/document_loaders/test_mhtml.py`
2024-03-07 11:50:18 -05:00
axiangcoding
9745b5894d community[patch]: Chroma use uuid4 instead of uuid1 to generate random ids (#18723)
- **Description:** Chroma uses uuid4 instead of uuid1 for random ids. Using
uuid1 may leak the MAC address; changing to uuid4 will not cause other
effects.
  - **Issue:** None
  - **Dependencies:** None
  - **Twitter handle:** None
2024-03-07 11:48:25 -05:00
Leonid Ganeline
1af2130ff7 docs: update imports of tools to use langchain_community (#18705)
Updated imports from `langchain` to `langchain_community`.
2024-03-07 11:46:09 -05:00
Guangdong Liu
ced5e7bae7 community[patch]: Fix sparkllm authentication problem. (#18651)
- **Description:** fix sparkllm authentication problem. The current
timestamp is in RFC1123 format and the time deviation must be controlled
within 300s, so the URL is now re-obtained every time a question is asked.
https://www.xfyun.cn/doc/spark/general_url_authentication.html#_1-2-%E9%89%B4%E6%9D%83%E5%8F%82%E6%95%B0
2024-03-06 18:43:16 -08:00
Erick Friis
89d32ffbbd community[patch]: release 0.0.27 (#18708) 2024-03-07 01:08:43 +00:00
Erick Friis
c09b520ce4 core[patch]: release 0.1.30 (#18706) 2024-03-06 16:12:18 -08:00
Piyush Jain
2b234a4d96 Support for claude v3 models. (#18630)
Fixes #18513.

## Description
This PR attempts to fix the support for Anthropic Claude v3 models in the
BedrockChat LLM. The changes here update the payload to use the
`messages` format instead of the formatted text prompt for all models;
the `messages` API is backwards compatible with all models in Anthropic, so
this should not break the experience for any models.


## Notes
The PR in its current form does not support the v3 models for the
non-chat Bedrock LLM. This means that, with these changes, users won't
be able to use the v3 models with the Bedrock LLM. I can open a
separate PR to tackle this use-case; the intent here was to get this out
quickly, so users can start using and testing the chat LLM. The Bedrock LLM
classes have also grown complex with a lot of conditions to support
various providers and models, and is ripe for a refactor to make future
changes more palatable. This refactor is likely to take longer, and
requires more thorough testing from the community. Credit to PRs
[18579](https://github.com/langchain-ai/langchain/pull/18579) and
[18548](https://github.com/langchain-ai/langchain/pull/18548) for some
of the code here.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-06 15:46:18 -08:00
Sam Khano
1b4dcf22f3 community[minor]: Add DocumentDBVectorSearch VectorStore (#17757)
**Description:**
- Added Amazon DocumentDB Vector Search integration (HNSW index)
- Added integration tests
- Updated AWS documentation with DocumentDB Vector Search instructions
- Added notebook for DocumentDB integration with example usage

---------

Co-authored-by: EC2 Default User <ec2-user@ip-172-31-95-226.ec2.internal>
2024-03-06 15:11:34 -08:00
Vittorio Rigamonti
51f3902bc4 community[minor]: Adding support for Infinispan as VectorStore (#17861)
**Description:**
This integrates Infinispan as a vectorstore.
Infinispan is an open-source key-value data grid; it can work as a single
node as well as distributed.

Vector search has been supported since release 15.x.

For more: [Infinispan Home](https://infinispan.org)

Integration tests are provided as well as a demo notebook
2024-03-06 15:11:02 -08:00
Max Jakob
cca0167917 elasticsearch[patch], community[patch]: update references, deprecate community classes (#18506)
Follow up on https://github.com/langchain-ai/langchain/pull/17467.

- Update all references to the Elasticsearch classes to use the partners
package.
- Deprecate community classes.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-06 15:09:12 -08:00
José Luis Di Biase
6041ec3dd1 templates: rag-multi-modal typo, replace serch with search (#18519)
Two little typos in the multi-modal templates (replaced "serch" with
"search").

Signed-off-by: José Luis Di Biase <josx@interorganic.com.ar>
2024-03-06 15:08:55 -08:00
Djordje
12b4a4d860 community[patch]: Opensearch delete method added - indexing supported (#18522)
- **Description:** Added a delete method for OpenSearchVectorSearch;
therefore indexing is supported
    - **Issue:** No
    - **Dependencies:** No
    - **Twitter handle:** stkbmf
2024-03-06 15:08:47 -08:00
Erick Friis
687d27567d openai[patch]: unit test azure init (#18703) 2024-03-06 14:17:09 -08:00
Christophe Bornet
db8db6faae community: Implement lazy_load() for PlaywrightURLLoader (#18676)
Integration tests:
`tests/integration_tests/document_loaders/test_url_playwright.py`
2024-03-06 16:52:13 -05:00
Aaron Yi
c092db862e community[patch]: make metadata and text optional as expected in DocArray (#18678)
```
ValidationError: 2 validation errors for DocArrayDoc
text
  Field required [type=missing, input_value={'embedding': [-0.0191128...9, 0.01005221541175212]}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.5/v/missing
metadata
  Field required [type=missing, input_value={'embedding': [-0.0191128...9, 0.01005221541175212]}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.5/v/missing
```
In the `_get_doc_cls` method, the `DocArrayDoc` class is defined as
follows:

```python
class DocArrayDoc(BaseDoc):
    text: Optional[str]
    embedding: Optional[NdArray] = Field(**embeddings_params)
    metadata: Optional[dict]
```
2024-03-06 16:51:41 -05:00
Eugene Yurtsev
4c25b49229 community[major]: breaking change in some APIs to force users to opt-in for pickling (#18696)
This PR adds a dangerous-load parameter to force users to opt in to using
pickle, and is meant to raise user awareness that the pickling module is
involved.
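A hedged sketch of the opt-in (assuming the parameter is named `allow_dangerous_deserialization`, as in the affected vector-store loaders):

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Loading uses pickle under the hood, so deserialization must be explicitly
# opted into; only do this for files you created yourself and trust.
db = FAISS.load_local(
    "faiss_index",
    OpenAIEmbeddings(),
    allow_dangerous_deserialization=True,
)
```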
2024-03-06 16:43:01 -05:00
Eugene Yurtsev
0e52961562 community[patch]: Patch tdidf retriever (CVE-2024-2057) (#18695)
This is a patch for `CVE-2024-2057`:
https://www.cve.org/CVERecord?id=CVE-2024-2057

This affects users that: 

* Use the  `TFIDFRetriever`
* Attempt to de-serialize it from an untrusted source that contains a
malicious payload
2024-03-06 15:49:04 -05:00
Leonid Ganeline
81cbf0f2fd docs: update import paths for callbacks to use langchain_community callbacks where applicable (#18691)
Refactored imports from `langchain` to `langchain_community` whenever it
is applicable
2024-03-06 14:49:06 -05:00
Erick Friis
2619420df1 mongodb[patch]: release 0.1.1 (#18692) 2024-03-06 19:44:14 +00:00
Leonid Ganeline
fb686333ac docs: fix streamlit provider (#18606)
Fixed a wrong Python package import.
2024-03-06 11:42:26 -08:00
Christophe Bornet
ea141511d8 core: Move document loader interfaces to core (#17723)
This is needed to be able to move document loaders to partner packages.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-06 13:59:00 -05:00
aditya thomas
97de498d39 docs: update to the streaming tutorial notebook in the lcel documentation (#18378)
**Description:** Update to the streaming tutorial notebook in the LCEL
documentation
**Issue:** Fixed an import and (minor) changes in documentation language
**Dependencies:** None
2024-03-06 10:47:22 -08:00
Guangdong Liu
32db9e74e4 docs: Fix some issues with sparkllm use cases (#17674) 2024-03-06 10:46:51 -08:00
Christophe Bornet
5985454269 Merge pull request #18539
* Implement lazy_load() for GitLoader
2024-03-06 13:25:14 -05:00
Christophe Bornet
9a6f7e213b Merge pull request #18423
* Implement lazy_load() for BSHTMLLoader
2024-03-06 13:25:01 -05:00
Christophe Bornet
b3a0c44838 Merge pull request #18673
* Implement lazy_load() for PDFMinerPDFasHTMLLoader and PyMuPDFLoader
2024-03-06 13:24:36 -05:00
Christophe Bornet
68fc0cf909 Merge pull request #18674
* Implement lazy_load() for TextLoader
2024-03-06 13:23:42 -05:00
Christophe Bornet
5b92f962f1 Merge pull request #18671
* Implement lazy_load() for MastodonTootsLoader
2024-03-06 13:23:14 -05:00
Christophe Bornet
15b1770326 Merge pull request #18421
* Implement lazy_load() for AssemblyAIAudioTranscriptLoader
2024-03-06 13:16:05 -05:00
Christophe Bornet
bb284eebe4 Merge pull request #18436
* Implement lazy_load() for ConfluenceLoader
2024-03-06 13:15:24 -05:00
Christophe Bornet
691480f491 Merge pull request #18647
* Implement lazy_load() for UnstructuredBaseLoader
2024-03-06 13:13:10 -05:00
Christophe Bornet
52ac67c5d8 Merge pull request #18654
* Implement lazy_load() for ObsidianLoader
2024-03-06 13:06:55 -05:00
Christophe Bornet
b9c0cf9025 Merge pull request #18656
* Implement lazy_load() for PsychicLoader
2024-03-06 13:05:04 -05:00
Christophe Bornet
aa7ac57b67 community: Implement lazy_load() for TrelloLoader (#18658)
Covered by `tests/unit_tests/document_loaders/test_trello.py`
2024-03-06 13:04:36 -05:00
Christophe Bornet
302985fea1 community: Implement lazy_load() for SlackDirectoryLoader (#18675)
Integration tests:
`tests/integration_tests/document_loaders/test_slack.py`
2024-03-06 13:04:13 -05:00
Christophe Bornet
ed36f9f604 community: Implement lazy_load() for WhatsAppChatLoader (#18677)
Integration test:
`tests/integration_tests/document_loaders/test_whatsapp_chat.py`
2024-03-06 13:03:46 -05:00
Christophe Bornet
f414f5cdb9 community[minor]: Implement lazy_load() for WikipediaLoader (#18680)
Integration test:
`tests/integration_tests/document_loaders/test_wikipedia.py`
2024-03-06 13:03:21 -05:00
Bagatur
4cbfeeb1c2 community[patch]: Release 0.0.26 (#18683) 2024-03-06 09:41:18 -08:00
Eugene Yurtsev
b9f3c7a0c9 Use Case: Extraction set temperature to 0, qualify a statement (#18672)
Minor changes:
1) Set temperature to 0 (important)
2) Better qualify one of the statements with confidence
2024-03-06 12:35:45 -05:00
Eugene Yurtsev
a4a6978224 Docs: Revamp Extraction Use Case (#18588)
Revamp the extraction use case documentation

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-03-06 09:18:25 -05:00
Christophe Bornet
1100f8de7a community[minor]: Implement lazy_load() for ArxivLoader (#18664)
Integration tests: `tests/integration_tests/utilities/test_arxiv.py` and
`tests/integration_tests/document_loaders/test_arxiv.py`
2024-03-06 09:16:49 -05:00
Christophe Bornet
2d96803ddd community[minor]: Implement lazy_load() for OutlookMessageLoader (#18668)
Integration test:
`tests/integration_tests/document_loaders/test_email.py`
2024-03-06 09:15:57 -05:00
Christophe Bornet
ae167fb5b2 community[minor]: Implement lazy_load() for SitemapLoader (#18667)
Integration tests: `test_sitemap.py` and `test_docusaurus.py`
2024-03-06 09:15:35 -05:00
Christophe Bornet
623dfcc55c community[minor]: Implement lazy_load() for FacebookChatLoader (#18669)
Integration test:
`tests/integration_tests/document_loaders/test_facebook_chat.py`
2024-03-06 09:15:00 -05:00
Christophe Bornet
20794bb889 community[minor]: Implement lazy_load() for GitbookLoader (#18670)
Integration test:
`tests/integration_tests/document_loaders/test_gitbook.py`
2024-03-06 09:14:36 -05:00
Liang Zhang
81985b31e6 community[patch]: Databricks SerDe uses cloudpickle instead of pickle (#18607)
- **Description:** Databricks SerDe uses cloudpickle instead of pickle
when serializing a user-defined function transform_input_fn since pickle
does not support functions defined in `__main__`, and cloudpickle
supports this.
- **Dependencies:** cloudpickle>=2.0.0

Added a unit test.
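For illustration, a small sketch of why cloudpickle is needed here (the function name is illustrative):

```python
import cloudpickle  # pip install "cloudpickle>=2.0.0"


def transform_input_fn(prompt: str) -> dict:  # imagine this is defined in __main__
    return {"prompt": prompt}


payload = cloudpickle.dumps(transform_input_fn)  # serializes the function body itself
restored = cloudpickle.loads(payload)
assert restored("hi") == {"prompt": "hi"}
# pickle.dumps(transform_input_fn) would only record a reference to
# "__main__.transform_input_fn", which another process cannot resolve.
```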
2024-03-05 18:04:45 -08:00
Erick Friis
f3e28289f6 infra: reorder api docs build steps (#18618) 2024-03-05 17:33:36 -08:00
Leonid Ganeline
114d64d4a7 docs: providers update (#18527)
Added missing pages. Added links and descriptions. Formatted into a
consistent form.
2024-03-05 17:32:59 -08:00
Christophe Bornet
7d6de96186 community[patch]: Implement lazy_load() for CubeSemanticLoader (#18535)
Covered by `test_cube_semantic.py`
2024-03-05 17:32:31 -08:00
Christophe Bornet
a6b5d45e31 community[patch]: Implement lazy_load() for EverNoteLoader (#18538)
Covered by `test_evernote_loader.py`
2024-03-05 17:29:52 -08:00
PSV
d7dd3cd248 docs: structured_output (#18608)
- **Description:** Fixed some typos and copy errors in the Beta
Structured Output docs
    - **Issue:** N/A
    - **Dependencies:** Docs only
    - **Twitter handle:** @psvann

Co-authored-by: P.S. Vann <psvann@yahoo.com>
2024-03-05 17:20:06 -08:00
Bagatur
29f1619d61 docs: why lcel nit (#18616) 2024-03-05 17:10:47 -08:00
Max Jakob
ee7a7954b9 elasticsearch: add ElasticsearchRetriever (#18587)
Implement
[Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/)
interface for Elasticsearch.

I opted to only expose the `body`, which gives you full flexibility, and
none of the other 68 arguments of the [search
method](https://elasticsearch-py.readthedocs.io/en/v8.12.1/api/elasticsearch.html#elasticsearch.Elasticsearch.search).

Added a user agent header for usage tracking in Elastic Cloud.
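A hedged usage sketch; the factory and parameter names below are assumptions, only the `body` flexibility is taken from this PR:

```python
from langchain_elasticsearch import ElasticsearchRetriever


def body_func(query: str) -> dict:
    # the full Query DSL is available; a simple match query as an example
    return {"query": {"match": {"text": {"query": query}}}}


retriever = ElasticsearchRetriever.from_es_params(
    url="http://localhost:9200",
    index_name="my-index",
    body_func=body_func,
    content_field="text",
)
docs = retriever.invoke("what is a sandbox?")
```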

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-06 00:42:50 +00:00
Jib
8bc347c5fc mongodb[patch]: include LLM caches in toplevel library import (#18601) 2024-03-05 16:35:13 -08:00
Bagatur
080904689c docs: text splitters install (#18589) 2024-03-05 16:19:37 -08:00
Sunchao Wang
dc81dba6cf community[patch]: Improve amadeus tool and doc (#18509)
Description:

This pull request addresses two key improvements to the langchain
repository:

**Fix for Crash in Flight Search Interface**:

Previously, the code crashed when the flight ticket search interface
returned a failure. This PR handles such failures gracefully, so the
flight search no longer crashes.

**Documentation Update for Amadeus Toolkit**:

The examples in the Amadeus Toolkit documentation could not be run
because the information was outdated. This PR updates the documentation
so that all examples execute successfully.

These changes improve error handling and keep the documentation
up to date and actionable.

Issue: https://github.com/langchain-ai/langchain/issues/17375

Twitter Handle: SingletonYxx
2024-03-05 16:17:22 -08:00
Christophe Bornet
f77f7dc3ec community[patch]: Fix VectorStoreQATool (#18529)
Fix #18460
2024-03-05 15:56:58 -08:00
Utkarsh Kapil
539a13dbda docs: minor spelling errors (#18429)
Description: Noticed spelling errors. 'Colab' misspelt as 'Collab'.
https://python.langchain.com/docs/use_cases
Dependencies: n/a
2024-03-05 15:54:15 -08:00
Dounx
ad48f55357 community[minor]: add Yuque document loader (#17924)
This pull request supports loading documents from Yuque with LangChain.

Yuque is a professional cloud-based knowledge base for team
collaboration in documentation.

Website: https://www.yuque.com
OpenAPI: https://www.yuque.com/yuque/developer/openapi
2024-03-05 15:54:07 -08:00
Kazuki Maeda
60c5d964a8 community[minor]: use jq schema for content_key in json_loader (#18003)
### Description
Changed the value specified for `content_key` in JSONLoader from a
single key to a value based on a jq schema. I created a [similar
PR](https://github.com/langchain-ai/langchain/pull/11255) before, but it
had several conflicts because of the architectural changes associated
with the stable-version release, so I re-created this PR to fit the new
architecture.

### Why
For JSON data like the following, where `.data[].attributes.message`
should become page_content and `.data[].attributes.id` or
`.data[].attributes.tags`, etc., should become metadata, the
`content_key` must itself be parsed with the jq schema.

<details>
<summary>sample json data</summary>

```json
{
  "data": [
    {
      "attributes": {
        "message": "message1",
        "tags": [
          "tag1"
        ]
      },
      "id": "1"
    },
    {
      "attributes": {
        "message": "message2",
        "tags": [
          "tag2"
        ]
      },
      "id": "2"
    }
  ]
}
```

</details>

<details>
<summary>sample code</summary>

```python
def metadata_func(record: dict, metadata: dict) -> dict:

    metadata["source"] = None
    metadata["id"] = record.get("id")
    metadata["tags"] = record["attributes"].get("tags")

    return metadata

sample_file = "sample1.json"
loader = JSONLoader(
    file_path=sample_file,
    jq_schema=".data[]",
    content_key=".attributes.message", ## content_key is parsable into jq schema
    is_content_key_jq_parsable=True, ## this is added parameter
    metadata_func=metadata_func
)

data = loader.load()
data
```

</details>

### Dependencies
none

### Twitter handle
[kzk_maeda](https://twitter.com/kzk_maeda)
2024-03-05 15:51:24 -08:00
Rodrigo Nogueira
f4bb33bbf3 docs: fix link and missing package (#18405)
**Issue:** Fixes broken links and a missing package in the Colab example
2024-03-05 15:50:06 -08:00
Max Jakob
81e9ab6e3a docs: Update elasticsearch README (#18497)
Update Elasticsearch README with information on how to start a
deployment.

Also make some cosmetic changes to the [Elasticsearch
docs](https://python.langchain.com/docs/integrations/vectorstores/elasticsearch).

Follow up on https://github.com/langchain-ai/langchain/pull/17467
2024-03-05 15:49:16 -08:00
Hech
6a08134661 community[patch], langchain[minor]: Add retriever self_query and score_threshold in DingoDB (#18106) 2024-03-05 15:47:29 -08:00
Mikhail Khludnev
d039dcb6ba nvidia-trt[patch]: add TritonTensorRTLLM(verbose_client=False) (#16848)
- **Description:** adding verbose flag to TritonTensorRTLLM, 
  - **Issue:** nope,
  - **Dependencies:** not any,
  - **Twitter handle:**
2024-03-05 15:44:13 -08:00
Bagatur
1569b19191 docs: query analysis links (#18614) 2024-03-05 15:05:44 -08:00
Asaf Joseph Gardin
27441555d0 ai21[patch]: AI21 Labs Contextual Answers support (#18270)
Description: Added support for AI21 Labs model - Contextual Answers
Dependencies: ai21, ai21-tokenizer
Twitter handle: https://github.com/AI21Labs

---------

Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-05 22:42:04 +00:00
Erick Friis
e169ee8863 anthropic[patch]: handle lists in function calling (#18609) 2024-03-05 14:19:40 -08:00
Erick Friis
1831733c2e anthropic[patch]: fix argument integration test (#18605) 2024-03-05 13:05:25 -08:00
Leonid Ganeline
bd4993141d docs: providers update 5 (#18550)
Added missed sections. Added descriptions.
2024-03-05 12:55:13 -08:00
Yudhajit Sinha
4570b477b9 community[patch]: Invoke callback prior to yielding token (titan_takeoff) (#18560)
- Description: Invoke callback prior to yielding token in the _stream
method in llms/titan_takeoff.
- Issue: #16913 
- Dependencies: None
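The same fix recurs in several commits below; a minimal, self-contained sketch of the pattern (names illustrative):

```python
from typing import Callable, Iterator


def stream_with_callback(
    tokens: Iterator[str],
    on_llm_new_token: Callable[[str], None],
) -> Iterator[str]:
    for token in tokens:
        on_llm_new_token(token)  # fire the callback *before* yielding,
        yield token              # so handlers never lag behind the consumer


seen: list = []
assert list(stream_with_callback(iter("abc"), seen.append)) == ["a", "b", "c"]
assert seen == ["a", "b", "c"]
```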
2024-03-05 12:54:26 -08:00
Tomaz Bratanic
ea51cdaede Remove neo4j bloom labels from graph schema (#18564)
Neo4j tools use particular node labels and relationship types to store
metadata, but these are irrelevant for text2cypher or graph generation, so
we want to ignore them in the schema representation.
2024-03-05 12:54:05 -08:00
standby24x7
a2779738aa docs:Update function "run" to "invoke" in smart_llm.ipynb (#18568)
This patch updates the function "run" to "invoke" in smart_llm.ipynb.
Without this patch, you see the following warning:

LangChainDeprecationWarning: The function `run` was deprecated in
LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2024-03-05 12:52:48 -08:00
Erick Friis
e1924b3e93 core[patch]: deprecate hwchase17/langchain-hub, address path traversal (#18600)
Deprecates the old langchain-hub repository. Does *not* deprecate the
new https://smith.langchain.com/hub

@PinkDraconian has correctly raised that if someone loads
unsanitized user input into the `try_load_from_hub` function, they can
load files from locations on GitHub other than the
hwchase17/langchain-hub repository.

This PR adds some more path checking to that function and deprecates the
functionality in favor of the hub built into LangSmith.
2024-03-05 12:49:38 -08:00
Reuben Zotz-Wilson
96cd50938a community:update telegram notebook (#18569)
**Description:**
Renamed the `user_name` parameter to `username` to conform with the
expected inputs to TelegramChatApiLoader.

**Issue:**
The current code fails in langchain-community 0.0.24:

```python
loader = TelegramChatApiLoader(
    chat_entity="<CHAT_URL>",  # recommended to use Entity here
    api_hash="<API HASH>",
    api_id="<API_ID>",
    user_name="",  # needed only for caching the session
)
```
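With the change applied, the call becomes (values illustrative):

```python
from langchain_community.document_loaders import TelegramChatApiLoader

loader = TelegramChatApiLoader(
    chat_entity="<CHAT_URL>",  # recommended to use Entity here
    api_hash="<API HASH>",
    api_id="<API_ID>",
    username="",  # needed only for caching the session
)
```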
2024-03-05 11:47:17 -08:00
Jib
fc35262356 langchain-mongodb: add unit tests for MongoDBChatMessageHistory (#18599)
## Description
Adding a unit-test variation for the `MongoDBChatMessageHistory` package
Follow-up to #18590 

- [x] **Add tests and docs**: Unit test is what's being added
  

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-03-05 11:44:31 -08:00
Erick Friis
48e303ea10 airbyte[patch]: release 0.1.1, python 3.9 compat (#18597) 2024-03-05 19:22:08 +00:00
Jib
9da1e0cf34 mongodb[patch]: Migrate MongoDBChatMessageHistory (#18590)
## **Description** 
Migrate the `MongoDBChatMessageHistory` to the managed
`langchain-mongodb` partner-package
## **Dependencies**
None
## **Twitter handle**
@mongodb

## **tests and docs**
- [x] Migrate existing integration test
- [x] ~~Convert existing integration test to a unit test~~ Creation is
out of scope for this ticket
- [x] ~~Considering delaying work until #17470 merges to leverage the
`MockCollection` object.~~
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-05 18:53:02 +00:00
Jib
f92f7d2e03 mongodb[minor]: Add MongoDB LLM Cache (#17470)
# Description

- **Description:** Adding MongoDB LLM Caching Layer abstraction
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:** @mongodb

---------

Co-authored-by: Jib <jib@byblack.us>
2024-03-05 10:38:39 -08:00
Tomaz Bratanic
449d8781ec Update link in neo4j semantic ollama templates (#18574) 2024-03-05 09:42:34 -08:00
Tomaz Bratanic
353248838d Add precedence for input params over env variables in neo4j integration (#18581)
input parameters take precedence over env variables
2024-03-05 09:36:56 -08:00
Christophe Bornet
c8a171a154 community: Implement lazy_load() for GithubFileLoader (#18584) 2024-03-05 09:35:50 -08:00
Leonid Kuligin
04d134df17 marked MatchingEngine as deprecated (#18585)
- **Description:** Announces a deprecation, since this integration has
been moved to langchain_google_vertexai.
2024-03-05 09:34:53 -08:00
Erick Friis
07f23c2d45 docs: anthropic multimodal (#18586) 2024-03-05 16:58:06 +00:00
Erick Friis
4ac2cb4adc anthropic[minor]: add tool calling (#18554) 2024-03-05 08:30:16 -08:00
Bagatur
5fc67ca2c7 langchain[patch]: Release 0.1.11 (#18558) 2024-03-04 23:58:34 -08:00
Erick Friis
68c1878380 anthropic[patch]: model type string (#18510) 2024-03-04 19:25:19 -08:00
Akash A Desai
eb0756f3ee templates: fix rag-lancedb template (#18551) 2024-03-04 18:56:16 -08:00
Erick Friis
25c7d52140 anthropic[patch]: multimodal (#18517)
- anthropic[minor]: claude 3

---------

Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
2024-03-04 17:50:13 -08:00
Erick Friis
343438e872 community[patch]: deprecate community fireworks (#18544) 2024-03-05 01:04:26 +00:00
William FH
ca1d42785d Evals wording (#18542) 2024-03-04 16:32:33 -08:00
Brace Sproul
328a498a78 docs[minor]: Add thumbs up/down to all docs pages (#18526) 2024-03-04 15:14:28 -08:00
Erick Friis
10874d5002 docs: update stack graphic (#18532) 2024-03-04 23:07:28 +00:00
Bagatur
dd07eddf24 core[patch]: Release 0.1.29 (#18530) 2024-03-04 14:37:08 -08:00
William FH
30ccc009e6 [Evals] Support list examples by dataset version tag (#18534)
previously only supported by timestamp
2024-03-04 14:23:32 -08:00
Lance Martin
72ae744588 RAPTOR (#18467)
Cookbook for RAPTOR paper
2024-03-04 13:16:33 -08:00
aditya thomas
7803b973c7 docs: update documentation of stackexchange component (#18486)
**Description:** Update documentation of the StackExchange component
**Issue:** None
**Dependencies:** None
2024-03-04 10:45:29 -08:00
aditya thomas
5c387a173f docs: update to docstrings of ChatAnthropic class (#18493)
**Description:** Update docstrings of ChatAnthropic class
**Issue:** Change to ChatAnthropic from ChatAnthropicMessages
**Dependencies:** None
**Lint and test**:  `make format`, `make lint` and `make test` passed
2024-03-04 10:44:54 -08:00
Martin Kolb
63702a2044 docs: Improved notebook for vector store "HANA Cloud" (#18496)
- **Description:**
This PR fixes some issues in the Jupyter notebook for the VectorStore
"SAP HANA Cloud Vector Engine":
    * Slight textual adaptations
    * Fix of wrong column name VEC_META (was: VEC_METADATA)

  - **Issue:** N/A
  - **Dependencies:** no new dependencies added
  - **Twitter handle:** @sapopensource

path to notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`
2024-03-04 10:44:16 -08:00
standby24x7
8461700738 docs: Update function "run" to "invoke" (#18499)
Currently llm_checker.ipynb uses the function "run".
Update it to "invoke" to avoid the following warning:

LangChainDeprecationWarning: The function `run` was deprecated in
LangChain 0.1.0
and will be removed in 0.2.0. Use invoke instead.

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2024-03-04 10:42:53 -08:00
standby24x7
6c9177681d docs: Update function "run" to "invoke" in llm_math.ipynb (#18505)
This patch updates the function "run" to "invoke".
Without this patch you see the following warning:

LangChainDeprecationWarning: The function `run` was deprecated in
LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2024-03-04 10:42:36 -08:00
Bagatur
1c1a3a7415 docs: quickstart models (#18511) 2024-03-04 08:33:19 -08:00
aditya thomas
a727eec6ed docs: add groq to list of providers (#18503)
**Description:** Add Groq to the list of providers
**Issue:** None
**Dependencies:** None
2024-03-04 08:20:40 -08:00
Erick Friis
24f9c700f2 anthropic[minor]: claude 3 (#18508) 2024-03-04 15:03:51 +00:00
William De Vena
172499404a Docs: Updated callbacks/index.mdx adding example on invoke method (#18403)
- **Description:** Updated callbacks/index.mdx, adding an example of how
to pass callbacks to the runnable methods (invoke, batch, ...)
- **Issue:** #16379
- **Dependencies:** None
2024-03-04 09:11:48 -05:00
Jacob Lee
de2d9447c6 👥 Update LangChain people data (#18473)
👥 Update LangChain people data

Co-authored-by: github-actions <github-actions@github.com>
2024-03-03 19:58:58 -08:00
William FH
1cdb813196 Improve notebook wording (#18472) 2024-03-03 18:31:15 -08:00
William FH
1eec67e8fe Evaluate on Version (#18471) 2024-03-03 17:47:35 -08:00
William FH
55b69d5ad1 Update Notebook Image (#18470) 2024-03-03 17:22:59 -08:00
Harrison Chase
73d653324f [Evals] Session-level feedback (#18463)
Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
2024-03-03 17:18:29 -08:00
Scott Nath
b051bba1a9 community: Add you.com tool, add async to retriever, add async testing, add You tool doc (#18032)
- **Description:** finishes adding the you.com functionality including:
    - add async functions to utility and retriever
    - add the You.com Tool
    - add async testing for utility, retriever, and tool
    - add a tool integration notebook page
- **Twitter handle:** @scottnath
2024-03-03 14:30:05 -08:00
mackong
b89d9fc177 langchain[patch]: add tools renderer for various non-openai agents (#18307)
- **Description:** add tools_renderer for various non-OpenAI agents, so
tools can be rendered in different ways for your LLM.
  - **Issue:** N/A
  - **Dependencies:** N/A

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-03-03 14:25:12 -08:00
Harrison Chase
7ce2f32c64 improve query analysis docs (#18426) 2024-03-03 14:24:33 -08:00
William De Vena
a63cee04ac nvidia-trt[patch]: Invoke callback prior to yielding token (#18446)
- Description: Invoke on_llm_new_token callback prior to yielding token
in the _stream method.
- Issue: https://github.com/langchain-ai/langchain/issues/16913
- Dependencies: None
2024-03-03 14:15:11 -08:00
William De Vena
275877980e community[patch]: Invoke callback prior to yielding token (#18447)
Description: Invoke callback prior to yielding token in the _stream method
in llms/vertexai.
Issue: https://github.com/langchain-ai/langchain/issues/16913
Dependencies: None
2024-03-03 14:14:40 -08:00
William De Vena
67375e96e0 community[patch]: Invoke callback prior to yielding token (#18448)
- Description: Invoke callback prior to yielding token in the _stream method
in llms/tongyi.
- Issue: https://github.com/langchain-ai/langchain/issues/16913
- Dependencies: None
2024-03-03 14:14:22 -08:00
William De Vena
2087cbae64 community[patch]: Invoke callback prior to yielding token (#18449)
- Description: Invoke callback prior to yielding token in the _stream method
in chat_models/perplexity.
- Issue: https://github.com/langchain-ai/langchain/issues/16913
- Dependencies: None
2024-03-03 14:14:00 -08:00
William De Vena
eb04d0d3e2 community[patch]: Invoke callback prior to yielding token (#18452)
- Description: Invoke callback prior to yielding token in the _stream and
_astream methods in llms/anthropic.
- Issue: https://github.com/langchain-ai/langchain/issues/16913
- Dependencies: None
2024-03-03 14:13:41 -08:00
William De Vena
371bec79bc community[patch]: Invoke callback prior to yielding token (#18454)
- Description: Invoke callback prior to yielding token in the _stream and
_astream methods in llms/baidu_qianfan_endpoint.
- Issue: https://github.com/langchain-ai/langchain/issues/16913
- Dependencies: None
2024-03-03 14:13:22 -08:00
Aayush Kataria
7c2f3f6f95 community[minor]: Adding Azure Cosmos Mongo vCore Vector DB Cache (#16856)
Description:

This pull request introduces several enhancements for Azure Cosmos
Vector DB, primarily focused on improving caching and search
capabilities using Azure Cosmos MongoDB vCore Vector DB. Here's a
summary of the changes:

- **AzureCosmosDBSemanticCache**: Added a new cache implementation
called AzureCosmosDBSemanticCache, which utilizes Azure Cosmos MongoDB
vCore Vector DB for efficient caching of semantic data. Added
comprehensive test cases for AzureCosmosDBSemanticCache to ensure its
correctness and robustness. These tests cover various scenarios and edge
cases to validate the cache's behavior.
- **HNSW Vector Search**: Added HNSW vector search functionality in the
CosmosDB Vector Search module. This enhancement enables more efficient
and accurate vector searches by utilizing the HNSW (Hierarchical
Navigable Small World) algorithm. Added corresponding test cases to
validate the HNSW vector search functionality in both
AzureCosmosDBSemanticCache and AzureCosmosDBVectorSearch. These tests
ensure the correctness and performance of the HNSW search algorithm.
- **LLM Caching Notebook** - The notebook now includes a comprehensive
example showcasing the usage of the AzureCosmosDBSemanticCache. This
example highlights how the cache can be employed to efficiently store
and retrieve semantic data. Additionally, the example provides default
values for all parameters used within the AzureCosmosDBSemanticCache,
ensuring clarity and ease of understanding for users who are new to the
cache implementation.
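A hedged usage sketch of the new cache; the constructor arguments below are assumptions, and the LLM caching notebook added in this PR has the authoritative parameter list:

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import AzureCosmosDBSemanticCache

set_llm_cache(
    AzureCosmosDBSemanticCache(
        cosmosdb_connection_string="<CONNECTION_STRING>",
        database_name="<DATABASE>",
        collection_name="<COLLECTION>",
        embedding=embeddings,  # an Embeddings instance, defined elsewhere
    )
)
```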
 
 @hwchase17,@baskaryan, @eyurtsev,
2024-03-03 14:04:15 -08:00
Bagatur
db47b5deee docs: anthropic quickstart (#18440) 2024-03-03 13:59:28 -08:00
Bagatur
74f3908182 docs: anthropic qa quickstart (#18459) 2024-03-03 13:33:24 -08:00
Harrison Chase
bc768a12ed more query analysis docs (#18358) 2024-03-02 08:44:22 -08:00
Erick Friis
f96dd57501 langchain[patch]: release 0.1.10 (#18410) 2024-03-02 01:48:57 +00:00
Erick Friis
1fd1ac8e95 community[patch]: release 0.0.25 (#18408) 2024-03-02 00:56:04 +00:00
aditya thomas
44b33fcc76 infra: update to pathspec for 'git grep' in lint check (#18178)
**Description:** Update to the pathspec for 'git grep' in lint check in
the Makefile
**Issue:** The pathspec {docs/docs,templates,cookbook} is not handled
correctly, leading to the error during 'make lint':
"fatal: ambiguous argument '{docs/docs,templates,cookbook}': unknown
revision or path not in the working tree."
See changes made in https://github.com/langchain-ai/langchain/pull/18058

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-01 22:03:45 +00:00
standby24x7
57c733e560 docs: Fix spelling typos in apache_kafka notebook (#17998)
This patch fixes some spelling typos in
apache_kafka_message_handling.ipynb

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2024-03-01 13:58:04 -08:00
Erick Friis
9fda6ac7e6 docs: stop copying source (#18404) 2024-03-01 13:57:53 -08:00
Sourav Pradhan
50abeb7ed9 community[patch]: fix Chroma add_images (#17964)
### Description

Fixed a small bug in chroma.py add_images(): previously, whenever
metadata was not passed, the documents contained the base64 of the uris
passed, but when metadata was passed, the documents contained the plain
string uris, which should not be the case.

### Issue

In the add_images() method, when calling upsert() we have to use
"b64_texts" instead of the plain string "uris".

### Twitter handle

https://twitter.com/whitepegasus01
2024-03-01 21:55:58 +00:00
Sanjaypranav V M
d722525c70 templates: remove gemini_function_agent unused file (#18112)
- [X] The imported `agent.py` contains a Gemini agent executor that was
not utilised in the current gemini-function-agent template 🧑‍💻; the
openai_function_agent was used instead

@sbusso @jarib could someone please review it

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-01 21:55:20 +00:00
Kate Silverstein
b7c71e2e07 community[minor]: llamafile embeddings support (#17976)
* **Description:** adds `LlamafileEmbeddings` class implementation for
generating embeddings using
[llamafile](https://github.com/Mozilla-Ocho/llamafile)-based models.
Includes related unit tests and notebook showing example usage.
* **Issue:** N/A
* **Dependencies:** N/A
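A hedged usage sketch (assumes a llamafile server already running locally on its default port):

```python
from langchain_community.embeddings import LlamafileEmbeddings

embedder = LlamafileEmbeddings()  # default base URL assumed to be http://localhost:8080
vector = embedder.embed_query("hello llamafile")
print(len(vector))
```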
2024-03-01 13:49:18 -08:00
Massimiliano Pronesti
c3c987dd70 docs: update Azure OpenAI to v1 and langchain API to 0.1 (#18005)
**Description:** Updated Azure OpenAI docs to OpenAI API v1 and LLM
invocation to langchain 0.1
2024-03-01 13:47:00 -08:00
Mateusz Szewczyk
9298a0b941 langchain_ibm[patch] update docstring, dependencies, tests (#18386)
- **Description:** Update docstring, dependencies, tests, README
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-01 21:01:53 +00:00
Jib
c2b1abe91b mongodb[patch]: Set delete_many only if count_documents is not 0 (#18402)
- **Description:** Remove the assert statement on the `count_documents`
in setup_class. It should just delete if there are documents present
- **Issue:** Crashes on class setup
- **Dependencies:** None
- **Twitter handle:** @mongodb


- [x] **Add tests and docs**: If you're adding a new integration, please
include
  1. N/A


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

Co-authored-by: Jib <jib@byblack.us>
2024-03-01 13:01:28 -08:00
Kate Silverstein
c9153a3fd4 docs: add llamafile info to 'Local LLMs' guides (#18049)
- **Description:** add information about
[llamafile](https://github.com/Mozilla-Ocho/llamafile) (setup, example
usage) to ['Run LLMs
locally'](https://python.langchain.com/docs/guides/local_llms) and
['Using local models for Q&A with
RAG'](https://python.langchain.com/docs/use_cases/question_answering/local_retrieval_qa)
guides.
- **Issue:** N/A
- **Dependencies:** N/A
2024-03-01 12:44:31 -08:00
Tomaz Bratanic
f6bfb969ba community[patch]: Add an option for indexed generic label when import neo4j graph documents (#18122)
The current implementation doesn't have an indexed property that would
optimize the import. I have added a `baseEntityLabel` parameter that
allows you to add a secondary node label, which has an indexed `id`
property. By default, the behaviour is identical to the previous version.

Since multi-labeled nodes are terrible for text2cypher, I removed the
secondary label from schema representation object and string, which is
used in text2cypher.
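A hedged usage sketch (the exact signature is an assumption; `graph_documents` is assumed to be produced elsewhere, e.g. by a graph transformer):

```python
from langchain_community.graphs import Neo4jGraph

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="<PASSWORD>")
graph.add_graph_documents(
    graph_documents,       # produced elsewhere
    baseEntityLabel=True,  # adds an indexed secondary label with an `id` property
)
```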
2024-03-01 12:33:52 -08:00
aditya thomas
e6e60e2492 docs: ChatOpenAI update module import path and calling method (#18169)
**Description:**
(a) Update to the module import path to reflect the splitting up of
langchain into separate packages
(b) Update to the documentation to include the new calling method
(invoke)
2024-03-01 12:32:20 -08:00
Arun Sathiya
4adac20d7b community[patch]: Make cohere_api_key a SecretStr (#12188)
This PR makes `cohere_api_key` in `llms/cohere` a SecretStr, so that the
API Key is not leaked when `Cohere.cohere_api_key` is represented as a
string.
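A minimal illustration of the SecretStr behaviour this relies on:

```python
from pydantic import SecretStr

key = SecretStr("super-secret-key")
print(key)                     # prints: ********** (masked when rendered)
print(key.get_secret_value())  # prints: super-secret-key (explicit unwrap)
```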

---------

Signed-off-by: Arun <arun@arun.blog>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-01 20:27:53 +00:00
Ryan Meinzer
d883fd4a37 docs: Correct WebBaseLoader URL: docs: python.langchain.com/docs/get_started/quickstart (#17981)
**Description:** 
The URL of the data to index, passed to `WebBaseLoader`, is
incorrect, causing the `langsmith_search` retriever to return a `404:
NOT_FOUND`.
Incorrect URL: https://docs.smith.langchain.com/overview
Correct URL: https://docs.smith.langchain.com

**Issue:** 
This commit corrects the URL and prevents the LangServe Playground from
returning an error caused by its inability to use the retriever when
asked, "how can langsmith help with testing?".

**Dependencies:** 
None.

**Twitter Handle:** 
@ryanmeinzer
2024-03-01 12:21:53 -08:00
Petteri Johansson
6c1989d292 community[minor], langchain[minor], docs: Gremlin Graph Store and QA Chain (#17683)
- **Description:** 
New feature: Gremlin graph-store and QA chain (including docs).
Compatible with Azure CosmosDB.
  - **Dependencies:** 
  no changes
2024-03-01 12:21:14 -08:00
Ather Fawaz
a5ccf5d33c community[minor]: Add support for Perplexity chat model(#17024)
- **Description:** This PR adds support for [Perplexity AI
APIs](https://blog.perplexity.ai/blog/introducing-pplx-api).
  - **Issues:** None
  - **Dependencies:** None
  - **Twitter handle:** [@atherfawaz](https://twitter.com/AtherFawaz)
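A hedged usage sketch (the class is per this PR; the parameter and model names are assumptions):

```python
from langchain_community.chat_models import ChatPerplexity

chat = ChatPerplexity(pplx_api_key="<PPLX_API_KEY>", model="pplx-7b-online")
print(chat.invoke("Tell me a fact about LangChain.").content)
```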

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-01 12:19:23 -08:00
Rodrigo Nogueira
3438d2cbcc community[minor]: add maritalk chat (#17675)
**Description:** Adds the MariTalk chat, which is based on an LLM specially
trained for Portuguese.

**Twitter handle:** @MaritacaAI
2024-03-01 12:18:23 -08:00
sarahberenji
08fa38d56d community[patch]: the syntax error for Redis generated query (#17717)
To fix the reported error:
https://github.com/langchain-ai/langchain/discussions/17397
2024-03-01 12:18:10 -08:00
certified-dodo
43e3244573 community[patch]: Fix MongoDBAtlasVectorSearch max_marginal_relevance_search (#17971)
Description:
* `self._embedding_key` is accessed after deletion, breaking
`max_marginal_relevance_search` search
* Introduced in:
e135e5257c
* Updated but still persists in:
ce22e10c4b

Issue: https://github.com/langchain-ai/langchain/issues/17963

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-01 12:17:42 -08:00
Nikita Titov
9f2ab37162 community[patch]: don't try to parse json in case of errored response (#18317)
Related issue: #13896.

If Ollama is behind a proxy, proxy error responses cannot be
viewed. You aren't even able to check the response code.

For example, if your Ollama has basic access authentication and it's not
passed, `JSONDecodeError` will mask the true response error.

<details>
<summary><b>Log now:</b></summary>

```
{
	"name": "JSONDecodeError",
	"message": "Expecting value: line 1 column 1 (char 0)",
	"stack": "---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
    970 try:
--> 971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
    333 \"\"\"Return the Python representation of ``s`` (a ``str`` instance
    334 containing a JSON document).
    335 
    336 \"\"\"
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338 end = _w(s, end).end()

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
    354 except StopIteration as err:
--> 355     raise JSONDecodeError(\"Expecting value\", s, err.value) from None
    356 return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

JSONDecodeError                           Traceback (most recent call last)
Cell In[3], line 1
----> 1 print(translate_func().invoke('text'))

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
   2051 try:
   2052     for i, step in enumerate(self.steps):
-> 2053         input = step.invoke(
   2054             input,
   2055             # mark each step as a child run
   2056             patch_config(
   2057                 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
   2058             ),
   2059         )
   2060 # finish the root run
   2061 except BaseException as e:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:165, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    154 def invoke(
    155     self,
    156     input: LanguageModelInput,
   (...)
    160     **kwargs: Any,
    161 ) -> BaseMessage:
    162     config = ensure_config(config)
    163     return cast(
    164         ChatGeneration,
--> 165         self.generate_prompt(
    166             [self._convert_input(input)],
    167             stop=stop,
    168             callbacks=config.get(\"callbacks\"),
    169             tags=config.get(\"tags\"),
    170             metadata=config.get(\"metadata\"),
    171             run_name=config.get(\"run_name\"),
    172             **kwargs,
    173         ).generations[0][0],
    174     ).message

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:543, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    535 def generate_prompt(
    536     self,
    537     prompts: List[PromptValue],
   (...)
    540     **kwargs: Any,
    541 ) -> LLMResult:
    542     prompt_messages = [p.to_messages() for p in prompts]
--> 543     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:407, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    405         if run_managers:
    406             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 407         raise e
    408 flattened_outputs = [
    409     LLMResult(generations=[res.generations], llm_output=res.llm_output)
    410     for res in results
    411 ]
    412 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:397, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    394 for i, m in enumerate(messages):
    395     try:
    396         results.append(
--> 397             self._generate_with_cache(
    398                 m,
    399                 stop=stop,
    400                 run_manager=run_managers[i] if run_managers else None,
    401                 **kwargs,
    402             )
    403         )
    404     except BaseException as e:
    405         if run_managers:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:576, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    572     raise ValueError(
    573         \"Asked to cache, but no cache found at `langchain.cache`.\"
    574     )
    575 if new_arg_supported:
--> 576     return self._generate(
    577         messages, stop=stop, run_manager=run_manager, **kwargs
    578     )
    579 else:
    580     return self._generate(messages, stop=stop, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:250, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
    226 def _generate(
    227     self,
    228     messages: List[BaseMessage],
   (...)
    231     **kwargs: Any,
    232 ) -> ChatResult:
    233     \"\"\"Call out to Ollama's generate endpoint.
    234 
    235     Args:
   (...)
    247             ])
    248     \"\"\"
--> 250     final_chunk = self._chat_stream_with_aggregation(
    251         messages,
    252         stop=stop,
    253         run_manager=run_manager,
    254         verbose=self.verbose,
    255         **kwargs,
    256     )
    257     chat_generation = ChatGeneration(
    258         message=AIMessage(content=final_chunk.text),
    259         generation_info=final_chunk.generation_info,
    260     )
    261     return ChatResult(generations=[chat_generation])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:183, in ChatOllama._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
    174 def _chat_stream_with_aggregation(
    175     self,
    176     messages: List[BaseMessage],
   (...)
    180     **kwargs: Any,
    181 ) -> ChatGenerationChunk:
    182     final_chunk: Optional[ChatGenerationChunk] = None
--> 183     for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
    184         if stream_resp:
    185             chunk = _chat_stream_response_to_chat_generation_chunk(stream_resp)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:156, in ChatOllama._create_chat_stream(self, messages, stop, **kwargs)
    147 def _create_chat_stream(
    148     self,
    149     messages: List[BaseMessage],
    150     stop: Optional[List[str]] = None,
    151     **kwargs: Any,
    152 ) -> Iterator[str]:
    153     payload = {
    154         \"messages\": self._convert_messages_to_ollama_messages(messages),
    155     }
--> 156     yield from self._create_stream(
    157         payload=payload, stop=stop, api_url=f\"{self.base_url}/api/chat/\", **kwargs
    158     )

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/llms/ollama.py:234, in _OllamaCommon._create_stream(self, api_url, payload, stop, **kwargs)
    228         raise OllamaEndpointNotFoundError(
    229             \"Ollama call failed with status code 404. \"
    230             \"Maybe your model is not found \"
    231             f\"and you should pull the model with `ollama pull {self.model}`.\"
    232         )
    233     else:
--> 234         optional_detail = response.json().get(\"error\")
    235         raise ValueError(
    236             f\"Ollama call failed with status code {response.status_code}.\"
    237             f\" Details: {optional_detail}\"
    238         )
    239 return response.iter_lines(decode_unicode=True)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/requests/models.py:975, in Response.json(self, **kwargs)
    971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975     raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)

JSONDecodeError: Expecting value: line 1 column 1 (char 0)"
}
```

</details>


<details>

<summary><b>Log after a fix:</b></summary>

```
{
	"name": "ValueError",
	"message": "Ollama call failed with status code 401. Details: <html>\r
<head><title>401 Authorization Required</title></head>\r
<body>\r
<center><h1>401 Authorization Required</h1></center>\r
<hr><center>nginx/1.18.0 (Ubuntu)</center>\r
</body>\r
</html>\r
",
	"stack": "---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[2], line 1
----> 1 print(translate_func().invoke('text'))

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
   2051 try:
   2052     for i, step in enumerate(self.steps):
-> 2053         input = step.invoke(
   2054             input,
   2055             # mark each step as a child run
   2056             patch_config(
   2057                 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
   2058             ),
   2059         )
   2060 # finish the root run
   2061 except BaseException as e:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:165, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    154 def invoke(
    155     self,
    156     input: LanguageModelInput,
   (...)
    160     **kwargs: Any,
    161 ) -> BaseMessage:
    162     config = ensure_config(config)
    163     return cast(
    164         ChatGeneration,
--> 165         self.generate_prompt(
    166             [self._convert_input(input)],
    167             stop=stop,
    168             callbacks=config.get(\"callbacks\"),
    169             tags=config.get(\"tags\"),
    170             metadata=config.get(\"metadata\"),
    171             run_name=config.get(\"run_name\"),
    172             **kwargs,
    173         ).generations[0][0],
    174     ).message

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:543, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    535 def generate_prompt(
    536     self,
    537     prompts: List[PromptValue],
   (...)
    540     **kwargs: Any,
    541 ) -> LLMResult:
    542     prompt_messages = [p.to_messages() for p in prompts]
--> 543     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:407, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    405         if run_managers:
    406             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 407         raise e
    408 flattened_outputs = [
    409     LLMResult(generations=[res.generations], llm_output=res.llm_output)
    410     for res in results
    411 ]
    412 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:397, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    394 for i, m in enumerate(messages):
    395     try:
    396         results.append(
--> 397             self._generate_with_cache(
    398                 m,
    399                 stop=stop,
    400                 run_manager=run_managers[i] if run_managers else None,
    401                 **kwargs,
    402             )
    403         )
    404     except BaseException as e:
    405         if run_managers:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:576, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    572     raise ValueError(
    573         \"Asked to cache, but no cache found at `langchain.cache`.\"
    574     )
    575 if new_arg_supported:
--> 576     return self._generate(
    577         messages, stop=stop, run_manager=run_manager, **kwargs
    578     )
    579 else:
    580     return self._generate(messages, stop=stop, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:250, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
    226 def _generate(
    227     self,
    228     messages: List[BaseMessage],
   (...)
    231     **kwargs: Any,
    232 ) -> ChatResult:
    233     \"\"\"Call out to Ollama's generate endpoint.
    234 
    235     Args:
   (...)
    247             ])
    248     \"\"\"
--> 250     final_chunk = self._chat_stream_with_aggregation(
    251         messages,
    252         stop=stop,
    253         run_manager=run_manager,
    254         verbose=self.verbose,
    255         **kwargs,
    256     )
    257     chat_generation = ChatGeneration(
    258         message=AIMessage(content=final_chunk.text),
    259         generation_info=final_chunk.generation_info,
    260     )
    261     return ChatResult(generations=[chat_generation])

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:328, in ChatOllamaCustom._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
    319 def _chat_stream_with_aggregation(
    320     self,
    321     messages: List[BaseMessage],
   (...)
    325     **kwargs: Any,
    326 ) -> ChatGenerationChunk:
    327     final_chunk: Optional[ChatGenerationChunk] = None
--> 328     for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
    329         if stream_resp:
    330             chunk = _chat_stream_response_to_chat_generation_chunk(stream_resp)

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:301, in ChatOllamaCustom._create_chat_stream(self, messages, stop, **kwargs)
    292 def _create_chat_stream(
    293     self,
    294     messages: List[BaseMessage],
    295     stop: Optional[List[str]] = None,
    296     **kwargs: Any,
    297 ) -> Iterator[str]:
    298     payload = {
    299         \"messages\": self._convert_messages_to_ollama_messages(messages),
    300     }
--> 301     yield from self._create_stream(
    302         payload=payload, stop=stop, api_url=f\"{self.base_url}/api/chat\", **kwargs
    303     )

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:134, in _OllamaCommonCustom._create_stream(self, api_url, payload, stop, **kwargs)
    132     else:
    133         optional_detail = response.text
--> 134         raise ValueError(
    135             f\"Ollama call failed with status code {response.status_code}.\"
    136             f\" Details: {optional_detail}\"
    137         )
    138 return response.iter_lines(decode_unicode=True)

ValueError: Ollama call failed with status code 401. Details: <html>\r
<head><title>401 Authorization Required</title></head>\r
<body>\r
<center><h1>401 Authorization Required</h1></center>\r
<hr><center>nginx/1.18.0 (Ubuntu)</center>\r
</body>\r
</html>\r
"
}
```

</details>

The same is true for timeout errors, or when you simply mistype the
`base_url` arg and get a response from some other service, for instance.

Real Ollama errors are still clearly readable:

```
ValueError: Ollama call failed with status code 400. Details: {"error":"invalid options: unknown_option"}
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-01 12:17:29 -08:00
Yudhajit Sinha
e2b901c35b community[patch]: chat message history mypy fix (#18250)
Description: Fixed `type: ignore`s for mypy in
chat_message_histories (streamlit).
Addresses #17048

Planning to add more based on reviews
2024-03-01 12:17:18 -08:00
Gabriel Altay
b9416dc96a docs: update pinecone README to use PineconeVectorStore (#18170) 2024-03-01 12:12:52 -08:00
老阿張
1701f7b8e9 docs: Fix typo in baidu_qianfan_endpoint.ipynb & baidu_qianfan_endpoint.ipynb (#18176)
Description: "sucessfully should be successfully "? 🤔
Issue: Typo
Dependencies: Nope
Twitter handle: laoazhang
2024-03-01 12:10:23 -08:00
Hemslo Wang
58a2abf089 community[patch]: fix RecursiveUrlLoader metadata_extractor return type (#18193)
**Description:** Fix the `metadata_extractor` type for `RecursiveUrlLoader`;
the default `_metadata_extractor` returns `dict`, not `str`.
**Issue:** N/A
**Dependencies:** N/A
**Twitter handle:** N/A
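For illustration, a `metadata_extractor` consistent with the corrected type (argument names assumed):

```python
def metadata_extractor(raw_html: str, url: str) -> dict:
    return {"source": url, "content_length": len(raw_html)}
```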

Signed-off-by: Hemslo Wang <hemslo.wang@gmail.com>
2024-03-01 12:08:20 -08:00
Maxime Perrin
98380cff9b community[patch]: removing "response_mode" parameter in llama_index retriever (#18180)
- **Description:** Changing this line
```python
response = index.query(query, response_mode="no_text", **self.query_kwargs)
```
to 
```python
response = index.query(query, **self.query_kwargs)
```
Since the llama-index query does not support response_mode anymore:
`TypeError: BaseQueryEngine.query() got an unexpected keyword argument
'response_mode'`
  - **Twitter handle:** @maximeperrin_

---------

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
2024-03-01 12:05:09 -08:00
Leonid Kuligin
e080281623 docs: cookbook on gemma integrations (#18213)
- **PR title**: "cookbook: using Gemma on LangChain"
- **Description:** added a tutorial on how to use Gemma with LangChain
(from Vertex AI, or locally from Kaggle or HF)
- **Dependencies:** langchain-google-vertexai==0.0.7
- **Twitter handle:** lkuligin
2024-03-01 11:50:55 -08:00
Christophe Bornet
177f51c7bd community: Use default load() implementation in doc loaders (#18385)
Following https://github.com/langchain-ai/langchain/pull/18289
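A minimal sketch of the pattern the lazy_load() PRs above converge on: implement `lazy_load()` as a generator and inherit the default `load()` (import paths assume the interfaces now living in core, per the move-to-core commit above):

```python
from typing import Iterator

from langchain_core.document_loaders import BaseLoader
from langchain_core.documents import Document


class LineLoader(BaseLoader):  # illustrative loader
    def __init__(self, path: str) -> None:
        self.path = path

    def lazy_load(self) -> Iterator[Document]:
        with open(self.path) as f:
            for i, line in enumerate(f):
                yield Document(page_content=line, metadata={"line": i})

    # No load() override needed: the BaseLoader default returns
    # list(self.lazy_load()).
```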
2024-03-01 14:46:52 -05:00
William De Vena
42341bc787 infra: fake model invoke callback prior to yielding token (#18286)
Description: Invoke on_llm_new_token callback prior to yielding token in
the _stream and _astream methods.
Issue: https://github.com/langchain-ai/langchain/issues/16913
Dependencies: None
Twitter handle: None
2024-03-01 11:46:18 -08:00
Ikko Eltociear Ashimine
31b4e78174 docs: fix typo in milvus.ipynb (#18373)
retreival -> retrieval
2024-03-01 11:22:39 -08:00
Tabby
dd6f85caf1 docs: Update Google El Carro for Oracle Workload Documentation. (#18394)
In this commit we update the documentation for Google El Carro for Oracle Workloads. We amend the documentation on the Google Providers page to use the correct name, which is El Carro for Oracle Workloads. We also add changes to the document_loaders and memory pages to reflect changes made in our repo.
2024-03-01 11:21:35 -08:00
mwmajewsk
e192f6b6eb community[patch]: fix, better error message in deeplake vectoriser (#18397)
If the document loader receives a pathlib Path instead of a str, it reads
the file correctly, but the problem begins when the document is added to
Deeplake: the path is not cast to str in the metadata.

```python
deeplake = True
fname = Path('./lorem_ipsum.txt')
loader = TextLoader(fname, encoding="utf-8")
docs = loader.load_and_split()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks= text_splitter.split_documents(docs)
if deeplake:
    db = DeepLake(dataset_path=ds_path, embedding=embeddings, token=activeloop_token)
    db.add_documents(chunks)
else:
    db = Chroma.from_documents(docs, embeddings)
```

So using this snippet of code the error message for deeplake looks like
this:

```
[part of error message omitted]

Traceback (most recent call last):
  File "/home/mwm/repositories/sources/fixing_langchain/main.py", line 53, in <module>
    db.add_documents(chunks)
  File "/home/mwm/repositories/sources/langchain/libs/core/langchain_core/vectorstores.py", line 139, in add_documents
    return self.add_texts(texts, metadatas, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/deeplake.py", line 258, in add_texts
    return self.vectorstore.add(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/deeplake_vectorstore.py", line 226, in add
    return self.dataset_handler.add(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/dataset_handlers/client_side_dataset_handler.py", line 139, in add
    dataset_utils.extend_or_ingest_dataset(
  File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/vector_search/dataset/dataset.py", line 544, in extend_or_ingest_dataset
    extend(
  File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/vector_search/dataset/dataset.py", line 505, in extend
    dataset.extend(batched_processed_tensors, progressbar=False)
  File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/dataset/dataset.py", line 3247, in extend
    raise SampleExtendError(str(e)) from e.__cause__
deeplake.util.exceptions.SampleExtendError: Failed to append a sample to the tensor 'metadata'. See more details in the traceback. If you wish to skip the samples that cause errors, please specify `ignore_errors=True`.
```

This does not explain the error well enough.
The same error for Chroma looks like this:

```
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/mwm/repositories/sources/fixing_langchain/main.py", line 56, in <module>
    db = Chroma.from_documents(docs, embeddings)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/chroma.py", line 778, in from_documents
    return cls.from_texts(
           ^^^^^^^^^^^^^^^
  File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/chroma.py", line 736, in from_texts
    chroma_collection.add_texts(
  File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/chroma.py", line 309, in add_texts
    raise ValueError(e.args[0] + "\n\n" + msg)
ValueError: Expected metadata value to be a str, int, float or bool, got lorem_ipsum.txt which is a <class 'pathlib.PosixPath'>

Try filtering complex metadata from the document using langchain_community.vectorstores.utils.filter_complex_metadata.
```

That is far more user friendly, so I added information about the
possible type mismatch to the error message, the same way it is
covered in Chroma:
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/chroma.py#L224
2024-03-01 11:21:21 -08:00
Daniel Chico
7d962278f6 community[patch]: type ignore fixes (#18395)
Related to #17048
2024-03-01 11:21:02 -08:00
Christophe Bornet
69be82c86d community[patch]: Implement lazy_load() for CSVLoader (#18391)
Covered by `test_csv_loader.py`
2024-03-01 11:17:08 -08:00
Bagatur
c54d6eb5da fireworks[patch]: support "any" tool_choice (#18343)
per https://readme.fireworks.ai/docs/function-calling
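
A hedged usage sketch (model id and tool are illustrative; per the linked docs, `tool_choice="any"` forces the model to call some tool rather than reply in plain text):

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_fireworks import ChatFireworks


class GetWeather(BaseModel):
    """Get the current weather for a city."""

    city: str = Field(..., description="City name")


llm = ChatFireworks(model="accounts/fireworks/models/firefunction-v1")
# "any" = the model must call one of the bound tools.
llm_any = llm.bind_tools([GetWeather], tool_choice="any")
print(llm_any.invoke("What's the weather in Paris?"))
```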
2024-03-01 11:12:28 -08:00
Leonid Ganeline
d937fa4f9c docs: Tutorials update (#18230)
A big update of the `Tutorials` page. Cleaned it up. Added several new
resources.
2024-03-01 11:07:39 -08:00
Erick Friis
6afb135baa astradb: move to langchain-datastax repo (#18354) 2024-03-01 19:04:43 +00:00
Akash A Desai
b641be2edf templates: LanceDB RAG template (#17809)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-01 18:52:50 +00:00
Guangdong Liu
760a16ff32 community[patch]: Fix ChatModel for sparkllm Bug. (#18375)
    - **Description:** fix SparkLLM parameter error
    - **Issue:** close #18370
- **Dependencies:** change `IFLYTEK_SPARK_APP_URL` to
`IFLYTEK_SPARK_API_URL`
    - **Twitter handle:** No
2024-03-01 10:49:30 -08:00
Yujie Qian
cbb65741a7 community[patch]: Voyage AI updates default model and batch size (#17655)
- **Description:** update the default model and batch size in
VoyageEmbeddings
    - **Issue:** N/A
    - **Dependencies:** N/A
    - **Twitter handle:** N/A

---------

Co-authored-by: fodizoltan <zoltan@conway.expert>
2024-03-01 10:22:24 -08:00
Shengsheng Huang
ae471a7dcb community[minor]: add BigDL-LLM integrations (#17953)
- **Description**:
[`bigdl-llm`](https://github.com/intel-analytics/BigDL) is a library for
running LLMs on Intel XPU (from Laptop to GPU to Cloud) using
INT4/FP4/INT8/FP8 with very low latency (for any PyTorch model). This PR
adds bigdl-llm integrations to LangChain.
- **Issue**: NA
- **Dependencies**: `bigdl-llm` library
- **Contribution maintainer**: @shane-huang 
 
Examples added:
- docs/docs/integrations/llms/bigdl.ipynb
2024-03-01 10:04:53 -08:00
Ethan Yang
f61cb8d407 community[minor]: Add openvino backend support (#11591)
- **Description:** add OpenVINO backend support via Hugging Face Optimum
Intel
  - **Dependencies:** `optimum[openvino]`
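
A hedged usage sketch, assuming the `backend="openvino"` flag exposed by this PR (model id and kwargs are illustrative):

```python
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline

ov_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    backend="openvino",  # runs the model through Optimum Intel / OpenVINO
    model_kwargs={"device": "CPU", "ov_config": {}},
    pipeline_kwargs={"max_new_tokens": 64},
)
print(ov_llm.invoke("OpenVINO is"))
```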

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-01 10:04:24 -08:00
Leonid Ganeline
a89f007947 docs: runnable module description (#17966)
Added a module description. Added `batch` description.
2024-03-01 10:01:32 -08:00
Leonid Ganeline
6d0af4e805 docs: nvidia: provider page update (#18054)
Nvidia provider page is missing a Triton Inference Server package
reference.
Changes:
- added the Triton Inference Server reference
- copied the example notebook from the package into the doc files.
- added the Triton Inference Server description and links, the link to
the above example notebook
- formatted page to the consistent format

NOTE:
It seems that the [example
notebook](https://github.com/langchain-ai/langchain/blob/master/libs/partners/nvidia-trt/docs/llms.ipynb)
was originally created in the wrong place. It should be in the LangChain
docs
[here](https://github.com/langchain-ai/langchain/tree/master/docs/docs/integrations/llms).
So, I've created a copy of this example. The original example is still
in the nvidia-trt package.
2024-03-01 10:00:42 -08:00
RadhikaBansal97
8bafd2df5e community[patch]: Change github endpoint in GithubLoader (#17622)
Description- 
- Changed the GitHub endpoint, as the existing one was not working and
returned a 404 not found error
- The existing function was also failing when file_filter was not passed,
because the tree API returns all paths, including directories; when
get_file_content iterated over these paths, it failed on directories, as
the API returns the list of files inside a directory. Added a condition
to ignore paths that are directories
- Fixes this issue -
https://github.com/langchain-ai/langchain/issues/17453

Co-authored-by: Radhika Bansal <Radhika.Bansal@veritas.com>
2024-03-01 09:36:31 -08:00
Yufei (Benny) Chen
2b93206f02 fireworks[patch]: Fix fireworks async stream (#18372)
- **Description:**  Fix the async stream issue with Fireworks
- **Dependencies:** fireworks >= 0.13.0

```
tests/integration_tests/test_chat_models.py ..........                                                                   [ 45%]
tests/integration_tests/test_compile.py .                                                                                [ 50%]
tests/integration_tests/test_embeddings.py ..                                                                            [ 59%]
tests/integration_tests/test_llms.py .........                                                                           [100%]
```
```
tests/unit_tests/test_embeddings.py .                                                                                    [ 16%]
tests/unit_tests/test_imports.py .                                                                                       [ 33%]
tests/unit_tests/test_llms.py ....                                                                                       [100%]
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-01 09:20:26 -08:00
William FH
1deb8cadd5 Add dataset version info (#18299) 2024-02-29 22:00:44 -08:00
Anush
9d663f31fa community[patch]: FastEmbed to latest (#18040)
## Description

Updates the `langchain_community.embeddings.fastembed` provider as per
the recent updates to [`FastEmbed`](https://github.com/qdrant/fastembed)
library.
2024-02-29 21:15:51 -08:00
Jacob Lee
590d47bff4 docs[patch]: Add Neo4j GraphAcademy to tutorials section (#18353) 2024-02-29 20:50:24 -07:00
Erick Friis
3c8a115e21 fireworks[patch]: remove custom async and stream implementations (#18363) 2024-03-01 03:20:02 +00:00
Bagatur
4730ee2766 docs: update api ref nav (#18362) 2024-02-29 19:04:56 -08:00
Bagatur
12f19b8a6a infra: update create_api_rst (#18361) 2024-02-29 19:04:44 -08:00
Erick Friis
1317578ad1 templates: use langchain-text-splitters (#18360)
- deps
- import
- import
2024-03-01 03:00:58 +00:00
Bagatur
f220af3dce docs: text splitters readme (#18359) 2024-03-01 03:00:42 +00:00
Bagatur
0d7fb5f60a langchain[patch]: langchain-text-splitters dep (#18357) 2024-02-29 18:48:55 -08:00
Eugene Yurtsev
51b661cfe8 community[patch]: BaseLoader load method should just delegate to lazy_load (#18289)
load() should just reference lazy_load()
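
In other words, the base class can carry a default along these lines, so subclasses only need to implement `lazy_load()` (a toy sketch of the pattern, not the actual BaseLoader source):

```python
from typing import Iterator, List

from langchain_core.documents import Document


class ToyLoader:
    """Illustrates the delegation pattern from this PR."""

    def lazy_load(self) -> Iterator[Document]:
        for i in range(3):
            yield Document(page_content=f"doc {i}")

    def load(self) -> List[Document]:
        # Default implementation: materialize the lazy iterator.
        return list(self.lazy_load())


print(len(ToyLoader().load()))  # 3
```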
2024-02-29 21:45:28 -05:00
Bagatur
5efb5c099f text-splitters[minor], langchain[minor], community[patch], templates, docs: langchain-text-splitters 0.0.1 (#18346) 2024-02-29 18:33:21 -08:00
Nuno Campos
7891934173 Fix missing labels (#18356)
2024-02-29 18:11:18 -08:00
William FH
fdab931fd3 [Core] Patch: rm dumpd of outputs from runnables/base (#18295)
It obstructs evaluations when you return a pydantic object.
2024-02-29 18:04:53 -08:00
Erick Friis
c7d5ed6f5c infra: tolerate partner package move in ci (#18355) 2024-02-29 17:49:28 -08:00
William FH
f481cbb32d fireworks[patch]: Fix fireworks bind tools (#18352)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-01 01:18:15 +00:00
Erick Friis
eefb49680f multiple[patch]: fix deprecation versions (#18349) 2024-02-29 16:58:33 -08:00
Erick Friis
11cb42c2c1 core[patch]: deprecation docstring with lib (#18350) 2024-03-01 00:44:13 +00:00
Erick Friis
bce0684327 docs: airbyte deps note (#18243) 2024-02-29 16:02:13 -08:00
Erick Friis
7bbff98dc7 mongodb[patch]: core 0.1.5 dep (#18348) 2024-02-29 15:39:04 -08:00
Erick Friis
4e27e66938 infra: mongodb env vars (#18347) 2024-02-29 15:24:28 -08:00
Jib
72bfc1d3db mongodb[minor]: MongoDB Partner Package -- Porting MongoDBAtlasVectorSearch (#17652)
This PR migrates the existing MongoDBAtlasVectorSearch abstraction from
the `langchain_community` section to the partners package section of the
codebase.
- [x] Run the partner package script as advised in the partner-packages
documentation.
- [x] Add Unit Tests
- [x] Migrate Integration Tests
- [x] Refactor `MongoDBAtlasVectorStore` (autogenerated) to
`MongoDBAtlasVectorSearch`
- [x] ~Remove~ deprecate the old `langchain_community` VectorStore
references.

## Additional Callouts
- Implemented the `delete` method
- Included any missing async function implementations
  - `amax_marginal_relevance_search_by_vector`
  - `adelete` 
- Added new Unit Tests that test for functionality of
`MongoDBVectorSearch` methods
- Removed [`del
res[self._embedding_key]`](e0c81e1cb0/libs/community/langchain_community/vectorstores/mongodb_atlas.py (L218))
in `_similarity_search_with_score` function as it would make the
`maximal_marginal_relevance` function fail otherwise. The `Document`
needs to store the embedding key in metadata to work.

Checklist:

- [x] PR title: Please title your PR "package: description", where
"package" is whichever of langchain, community, core, experimental, etc.
is being modified. Use "docs: ..." for purely docs changes, "templates:
..." for template changes, "infra: ..." for CI changes.
  - Example: "community: add foobar LLM"
- [x] PR message
- [x] Pass lint and test: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified to check that you're
passing lint and testing. See contribution guidelines for more
information on how to write/run tests, lint, etc:
https://python.langchain.com/docs/contributing/
- [x] Add tests and docs: If you're adding a new integration, please
include
1. Existing tests supplied in docs/docs do not change. Updated
docstrings for new functions like `delete`
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory. (This already exists)

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Steven Silvester <steven.silvester@ieee.org>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-29 23:09:48 +00:00
William De Vena
412148773c Updated partners/fireworks README (#18267)
## PR title
partners: changed the README file for the Fireworks integration in the
libs/partners/fireworks folder

## PR message
Description: Changed the README file of partners/fireworks following the
docs on https://python.langchain.com/docs/integrations/llms/Fireworks

The README includes:

- Brief description
- Installation
- Setting-up instructions (API key, model id, ...)
- Basic usage

Issue: https://github.com/langchain-ai/langchain/issues/17545

Dependencies: None

Twitter handle: None
2024-02-29 14:55:03 -08:00
Kai Kugler
df234fb171 community[patch]: Fixing embedchain document mapping (#18255)
- **Description:** The current embedchain implementation seems to handle
document metadata differently than the current implementation of
langchain does, and a KeyError is thrown. I would love for someone else
to test this...

---------

Co-authored-by: KKUGLER <kai.kugler@mercedes-benz.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>
2024-02-29 14:54:37 -08:00
Erick Friis
040271f33a community[patch]: remove llmlingua extended tests (#18344) 2024-02-29 13:51:29 -08:00
William De Vena
87dca8e477 Updated partners/ibm README (#18268)
## PR title
partners: changed the README file for the IBM Watson AI integration in
the libs/partners/ibm folder.

## PR message
Description: Changed the README file of partners/ibm following the docs
on https://python.langchain.com/docs/integrations/llms/ibm_watsonx

The README includes:

- Brief description
- Installation
- Setting-up instructions (API key, project id, ...)
- Basic usage:
  - Loading the model
  - Direct inference
  - Chain invoking
  - Streaming the model output
  
Issue: https://github.com/langchain-ai/langchain/issues/17545

Dependencies: None

Twitter handle: None

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
2024-02-29 13:29:28 -08:00
Erick Friis
dfd9787388 infra: ci dirs in wrong order (#18340) 2024-02-29 21:13:29 +00:00
Bagatur
9e46535ebc core[patch]: Release 0.1.28 (#18341) 2024-02-29 13:03:13 -08:00
Tomaz Bratanic
5999c4a240 Add support for parameters in neo4j retrieval query (#18310)
Sometimes you want to use various parameters in the retrieval query of
Neo4j Vector to personalize/customize results. Before, when there were
only predefined chains, it didn't really make sense. Now that it's all
about custom chains and LCEL, it is worth adding, since users can inject
any params they wish at query time. This isn't prone to SQL-injection-style
attacks, since we use parameters rather than concatenating strings.
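
A hedged sketch of what this enables (index name, query, and values are made up; the `params` kwarg follows the PR description):

```python
from langchain_community.vectorstores import Neo4jVector
from langchain_openai import OpenAIEmbeddings

# Retrieval query referencing a user-supplied Cypher parameter.
retrieval_query = """
RETURN node.text AS text, score,
       {source: node.source, user: $user_id} AS metadata
"""

store = Neo4jVector.from_existing_index(
    OpenAIEmbeddings(),
    index_name="docs",
    retrieval_query=retrieval_query,
)
# Parameters are passed to the driver, not concatenated into the query.
docs = store.similarity_search("hello", k=4, params={"user_id": "u-123"})
```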
2024-02-29 13:00:54 -08:00
Hasan
15d1b73a00 Add optional output_parser param in create_react_agent (#18320)
**Description:** Add facility to pass the optional output parser to
customize the parsing logic
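
A hedged usage sketch (here the default ReAct parser is passed explicitly; a custom `AgentOutputParser` subclass would go in its place):

```python
from langchain import hub
from langchain.agents import create_react_agent
from langchain.agents.output_parsers import ReActSingleInputOutputParser
from langchain_openai import ChatOpenAI

prompt = hub.pull("hwchase17/react")
llm = ChatOpenAI(temperature=0)
tools = []  # your tools here

# The new optional kwarg: control how raw LLM text becomes
# AgentAction / AgentFinish.
agent = create_react_agent(
    llm, tools, prompt, output_parser=ReActSingleInputOutputParser()
)
```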

---------

Co-authored-by: hasan <hasan@m2sys.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-29 12:35:43 -08:00
Bagatur
a6f0506aaf docs: query analysis use case (#17766)
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-29 12:33:49 -08:00
kkdamowang
6782dac420 docs: remove duplicate quote in AzureOpenAIEmbeddings doc (#18315)
- **Description:** Remove duplicate quote in AzureOpenAIEmbeddings doc,
remove trailing spaces.
- **Issue:** No
- **Dependencies:** No
2024-02-29 11:25:50 -08:00
Filip Schouwenaars
4c62362eab Add links to relevant DataCamp code alongs (#18332)
This PR adds links to some more free resources for people to get
acquainted with LangChain without having to configure their system.

Co-authored-by: Filip Schouwenaars <filipsch@users.noreply.github.com>
2024-02-29 11:25:01 -08:00
Virat Singh
cd926ac3dd community: Add PolygonFinancials Tool (#18324)
**Description:**
In this PR, I am adding a `PolygonFinancials` tool, which can be used to
get financials data for a given ticker. The financials data is the
fundamental data that is found in income statements, balance sheets, and
cash flow statements of public US companies.

**Twitter**: 
[@virattt](https://twitter.com/virattt)
2024-02-29 10:56:05 -08:00
Leonid Ganeline
d43fa2eab1 docs providers update (#18336)
Formatted pages into a consistent form. Added descriptions and links
when needed.
2024-02-29 10:53:12 -08:00
Erick Friis
68be5a7658 infra: skip ibm api docs (#18335) 2024-02-29 10:16:57 -08:00
Erick Friis
43534a4c08 skip airbyte api docs (#18334) 2024-02-29 09:57:52 -08:00
Bagatur
6a5b084704 docs: update func calling doc (#18300) 2024-02-29 09:45:07 -08:00
Bagatur
68ad3414a2 experimental[patch]: Release 0.0.53 (#18330) 2024-02-29 09:13:21 -08:00
William FH
8af4425abd [Evaluation] Config Fix (#18231) 2024-02-29 00:06:46 -08:00
Averi Kitsch
1b63530274 docs: update Google documentation (#18297)
**Description:** update Google documentation
2024-02-29 01:42:44 +00:00
Leonid Ganeline
1d865a7e86 docs: google provider page fixes (#18290)
Several URLs were broken (in yesterday's PR). Like
[Integrations/platforms/google/Document
Loaders](https://python.langchain.com/docs/integrations/platforms/google#document-loaders)
page, Example link to "Document Loaders / Cloud SQL for PostgreSQL" and
most of the new example links in the Document Loaders, Vectorstores,
Memory sections.

- fixed URLs (manually verified all example links)
- sorted sections in page to follow the "integrations/components" menu
item order.
- fixed several page titles to fix Navbar item order

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-29 00:45:03 +00:00
William De Vena
0486404a74 langchain_openai[patch]: Invoke callback prior to yielding token (#18269)
## PR title
langchain_openai[patch]: Invoke callback prior to yielding token

## PR message
Description: Invoke callback prior to yielding token in _stream and
_astream methods for langchain_openai.
Issue: https://github.com/langchain-ai/langchain/issues/16913
Dependencies: None
Twitter handle: None
2024-02-29 00:00:08 +00:00
William De Vena
5ee76fccd5 langchain_groq[patch]: Invoke callback prior to yielding token (#18272)
## PR title
langchain_groq[patch]: Invoke callback prior to yielding

## PR message
**Description:**Invoke callback prior to yielding token in _stream and
_astream methods for groq.
Issue: https://github.com/langchain-ai/langchain/issues/16913
Dependencies: None
Twitter handle: None
2024-02-28 23:43:16 +00:00
aditya thomas
eb0c178d75 docs: update to the list of partner packages in the list of providers (#18252)
**Description:** Update to the list of partner packages in the list of
providers
**Issue:** Google & Nvidia had two entries each, both pointing to the
same page
**Dependencies:** None
2024-02-28 15:40:14 -08:00
ccurme
9bf58ec7dd update extraction use-case docs (#17979)
Update extraction use-case docs to showcase and explain all modes of
`create_structured_output_runnable`.
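
A minimal sketch of one mode, assuming the signature described in the updated docs (the schema and input are made up):

```python
from typing import Optional

from langchain.chains import create_structured_output_runnable
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI


class Person(BaseModel):
    name: str = Field(..., description="The person's name")
    age: Optional[int] = Field(None, description="Age, if stated")


llm = ChatOpenAI(temperature=0)
# mode is one of "openai-functions", "openai-tools", "openai-json"
runnable = create_structured_output_runnable(Person, llm, mode="openai-tools")
print(runnable.invoke("Anna is 29 years old."))
```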
2024-02-28 17:32:04 -05:00
Christophe Bornet
8a81fcd5d3 community: Fix deprecation version of AstraDB VectorStore (#17991) 2024-02-28 17:15:09 -05:00
Stefano Lottini
6d863bed51 partner[minor]: Astra DB clients identify themselves as coming through LangChain package (#18131)
**Description**

This PR sets the "caller identity" of the Astra DB clients used by the
integration plugins (`AstraDBChatMessageHistory`, `AstraDBStore`,
`AstraDBByteStore` and, pending #17767 , `AstraDBVectorStore`). In this
way, the requests to the Astra DB Data API coming from within LangChain
are identified as such (the purpose is anonymous usage stats to best
improve the Astra DB service).
2024-02-28 17:13:22 -05:00
kkdamowang
4899a72b56 docs: remove duplicate word in lcel/streaming (#18249)
- **Description:** Remove duplicate word in lcel/streaming.
- **Issue:** No.
- **Dependencies:**  No.
2024-02-28 21:50:26 +00:00
mackong
2c42f3a955 ollama[patch]: delete suffix slash to avoid redirect (#18260)
- **Description:** see
[ollama](https://github.com/ollama/ollama/blob/main/server/routes.go#L949)'s
route definitions
- **Issue:** N/A
- **Dependencies:** N/A
2024-02-28 16:44:48 -05:00
William De Vena
6b58943917 community[patch]: Invoke callback prior to yielding token (#18288)
## PR title
community[patch]: Invoke callback prior to yielding

## PR message
Description: Invoke on_llm_new_token callback prior to yielding token in
_stream and _astream methods.
Issue: https://github.com/langchain-ai/langchain/issues/16913
Dependencies: None
Twitter handle: None
2024-02-28 21:40:53 +00:00
Brace Sproul
ca4f5e2408 ci: Update issue template required checks (#18283) 2024-02-28 13:27:39 -08:00
William De Vena
23722e3653 langchain[patch]: Invoke callback prior to yielding token (#18282)
## PR title
langchain[patch]: Invoke callback prior to yielding

## PR message
Description: Invoke on_llm_new_token callback prior to yielding token in
_stream and _astream methods in langchain/tests/fake_chat_model.
Issue: https://github.com/langchain-ai/langchain/issues/16913
Dependencies: None
Twitter handle: None
2024-02-28 16:15:02 -05:00
Eugene Yurtsev
cd52433ba0 community[minor]: Add SQLDatabaseLoader document loader (#18281)
- **Description:** A generic document loader adapter for SQLAlchemy on
top of LangChain's `SQLDatabaseLoader`.
  - **Needed by:** https://github.com/crate-workbench/langchain/pull/1
  - **Depends on:** GH-16655
  - **Addressed to:** @baskaryan, @cbornet, @eyurtsev

Hi from CrateDB again,

in the same spirit like GH-16243 and GH-16244, this patch breaks out
another commit from https://github.com/crate-workbench/langchain/pull/1,
in order to reduce the size of this patch before submitting it, and to
separate concerns.

To accompany the SQLAlchemy adapter implementation, the patch includes
integration tests for both SQLite and PostgreSQL. Let me know if
corresponding utility resources should be added at different spots.

With kind regards,
Andreas.


### Software Tests

```console
docker compose --file libs/community/tests/integration_tests/document_loaders/docker-compose/postgresql.yml up
```

```console
cd libs/community
pip install psycopg2-binary
pytest -vvv tests/integration_tests -k sqldatabase
```

```
14 passed
```
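
A hedged usage sketch of the loader itself (constructor parameter names are assumptions based on the description; the URI and query are made up):

```python
from langchain_community.document_loaders.sql_database import SQLDatabaseLoader
from langchain_community.utilities.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")  # hypothetical database
loader = SQLDatabaseLoader(query="SELECT id, text FROM documents", db=db)
docs = loader.load()  # one Document per result row
```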



![image](https://github.com/langchain-ai/langchain/assets/453543/42be233c-eb37-4c76-a830-474276e01436)

---------

Co-authored-by: Andreas Motl <andreas.motl@crate.io>
2024-02-28 21:02:28 +00:00
William De Vena
a37dc83a9e langchain_anthropic[patch]: Invoke callback prior to yielding token (#18274)
## PR title
langchain_anthropic[patch]: Invoke callback prior to yielding

## PR message
- Description: Invoke callback prior to yielding token in _stream and
_astream methods for anthropic.
- Issue: https://github.com/langchain-ai/langchain/issues/16913
- Dependencies: None
- Twitter handle: None
2024-02-28 20:19:22 +00:00
David Ruan
af35e2525a community[minor]: add hugging_face_model document loader (#17323)
- **Description:** add hugging_face_model document loader,
  - **Issue:** NA,
  - **Dependencies:** NA,

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-28 20:05:35 +00:00
Sanjaypranav V M
b9a495e56e community[patch]: added latin-1 decoder to gmail search tool (#18116)
Some mails from Flipkart and Amazon are encoded in another plain-text
format, so to handle the UnicodeDecodeError, added exception handling and
a latin-1 decoder

@hwchase17
2024-02-28 19:28:29 +00:00
Nuno Campos
6da08d0f22 Add PNG drawer for Runnable.get_graph() (#18239)
2024-02-28 11:25:19 -08:00
Nuno Campos
d9fd1194f5 Remove check preventing passing non-declared config keys (#18276)
2024-02-28 18:28:53 +00:00
William De Vena
7ac74f291e langchain_nvidia_ai_endpoints[patch]: Invoke callback prior to yielding token (#18271)
## PR title
langchain_nvidia_ai_endpoints[patch]: Invoke callback prior to yielding

## PR message
**Description:** Invoke callback prior to yielding token in _stream and
_astream methods for nvidia_ai_endpoints.
**Issue:** https://github.com/langchain-ai/langchain/issues/16913
**Dependencies:** None
2024-02-28 18:10:57 +00:00
Erick Friis
b4f6066a57 docs: airbyte github cookbook (#18275) 2024-02-28 18:04:15 +00:00
Ashley Xu
e3211c2b3d community[patch]: BigQueryVectorSearch JSON type unsupported for metadatas (#18234) 2024-02-28 08:19:53 -08:00
Jack Wotherspoon
92c34d4803 docs: update documentation for Google Cloud database integrations (#18265)
**Description:** Fixing typos and rendering issues for Google Cloud
database integrations.
**Issue:** NA
**Dependencies:** NA
2024-02-28 15:32:43 +00:00
Erick Friis
2e31f1c2f8 infra: api docs folder move (#18223) 2024-02-28 07:10:27 -08:00
Mateusz Szewczyk
db643f6283 ibm[patch]: release 0.1.0 Add possibility to pass ModelInference or Model object to WatsonxLLM class (#18189)
- **Description:** Add possibility to pass ModelInference or Model
object to WatsonxLLM class
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),
2024-02-28 07:03:15 -08:00
Averi Kitsch
76eb553084 docs: add documentation for Google Cloud database integrations (#18225)
**Description:** add documentation for Google Cloud database
integrations
**Issue:** NA
**Dependencies:** NA
2024-02-27 21:17:30 -08:00
Erick Friis
d7a77054ed airbyte[patch]: core version 0.1.5 (#18244) 2024-02-27 19:54:43 -08:00
Erick Friis
be8d2ff5f7 airbyte[patch]: init pkg (#18236) 2024-02-27 19:37:53 -08:00
Ayo Ayibiowu
ac1d7d9de8 community[feat]: Adds LLMLingua as a document compressor (#17711)
**Description**: This PR adds support for using the [LLMLingua
project](https://github.com/microsoft/LLMLingua), especially LongLLMLingua
(Enhancing Large Language Model Inference via Prompt Compression), as a
document compressor / transformer.

LLMLingua is an interesting project that can greatly improve RAG
systems by compressing prompts and contexts while keeping their
semantic relevance.

**Issue**: https://github.com/microsoft/LLMLingua/issues/31
**Dependencies**: [llmlingua](https://pypi.org/project/llmlingua/)

@baskaryan

---------

Co-authored-by: Ayodeji Ayibiowu <ayodeji.ayibiowu@getinge.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-27 19:23:56 -08:00
Nuno Campos
a99eb3abf4 openai[patch]: Assign message id in ChatOpenAI (#17837) 2024-02-27 17:32:54 -08:00
Isaac Francisco
733367b795 docs: deprecation of OpenAI functions agent, astream_events docstring (#18164)
Co-authored-by: Hershenson, Isaac (Extern) <isaac.hershenson.extern@bayer04.de>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-27 09:14:53 -08:00
Harrison Chase
b0ccaf5917 Harrison/add structured output (#18165) 2024-02-27 08:25:09 -08:00
Bagatur
242af4b5a4 openai[patch], mistral[patch], fireworks[patch]: releases 0.0.8, 0.0.5, 0.0.2 (#18186) 2024-02-27 04:22:24 -08:00
Bagatur
7e66d964c6 core[patch]: Release 0.1.27 (#18159) 2024-02-26 17:27:38 -08:00
Harrison Chase
d7c607ca00 core[minor]: move document compressor base (#17910) 2024-02-26 17:20:50 -08:00
Bagatur
b3f4de38ae mistral[minor]: Function calling and with_structured_output (#18150)
![Screenshot 2024-02-26 at 2 07 06
PM](https://github.com/langchain-ai/langchain/assets/22008038/20cacb47-3b24-45b5-871b-dd169f1acd37)
2024-02-26 16:22:30 -08:00
Bagatur
c53aa5cd37 core[patch]: support JS message serial namespaces (#18151) 2024-02-26 16:19:46 -08:00
Harrison Chase
c673717c2b add optimization notebook (#18155) 2024-02-26 16:09:31 -08:00
Max Jakob
5ab69f907f partners: add Elasticsearch package (#17467)
### Description
This PR moves the Elasticsearch classes to a partners package.

Note that we will not move (and will later remove) `ElasticKnnSearch`. It
was previously deprecated.
`ElasticVectorSearch` is going to stay in the community package since it
is used quite a lot still.

Also note that I left the `ElasticsearchTranslator` for self query
untouched because it resides in main `langchain` package.

### Dependencies
There will be another PR that updates the notebooks (potentially pulling
them into the partners package) and templates and removes the classes
from the community package, see
https://github.com/langchain-ai/langchain/pull/17468

#### Open question
How to make the transition smooth for users? Do we move the import
aliases and require people to install `langchain-elasticsearch`? Or do
we remove the import aliases from the `langchain` package all together?
What has worked well for other partner packages?

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-26 23:19:47 +00:00
matt haigh
a4896da2a0 Experimental: Add other threshold types to SemanticChunker (#16807)
**Description**
Adding different threshold types to the semantic chunker. I've had much
better and more predictable performance when using standard deviations
instead of percentiles.


![image](https://github.com/langchain-ai/langchain/assets/44395485/066e84a8-460e-4da5-9fa1-4ff79a1941c5)

For all the documents I’ve tried, the distribution of distances look
similar to the above: positively skewed normal distribution. All skews
I’ve seen are less than 1 so that explains why standard deviations
perform well, but I’ve included IQR if anyone wants something more
robust.

Also, using the percentile method backwards, you can declare the number
of clusters and use semantic chunking to get an ‘optimal’ splitting.
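
A hedged usage sketch of the new option (parameter name per this PR; the embeddings model is illustrative):

```python
from langchain_experimental.text_splitter import SemanticChunker
from langchain_openai import OpenAIEmbeddings

splitter = SemanticChunker(
    OpenAIEmbeddings(),
    # also: "percentile" (default) or "interquartile"
    breakpoint_threshold_type="standard_deviation",
)
docs = splitter.create_documents(["Some long text to split ..."])
```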

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-26 13:50:48 -08:00
Jaskirat Singh
ce682f5a09 community: vectorstores.kdbai - Added support for when no docs are present (#18103)
- **Description:** By default it expects a list, but that's not the case
in corner scenarios when no documents have been ingested (use case:
bootstrap application).
Hence added a check: if the instance is a pandas DataFrame instead of a
list, it returns immediately.

- **Issue:** NA
- **Dependencies:** NA
- **Twitter handle:**  jaskiratsingh1

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-26 12:47:06 -08:00
am-kinetica
9b8f6455b1 Langchain vectorstore integration with Kinetica (#18102)
- **Description:** New vectorstore integration with the Kinetica
database
  - **Issue:** 
- **Dependencies:** the Kinetica Python API `pip install
gpudb==7.2.0.1`,
  - **Tag maintainer:** @baskaryan, @hwchase17 
  - **Twitter handle:**

---------

Co-authored-by: Chad Juliano <cjuliano@kinetica.com>
2024-02-26 12:46:48 -08:00
Bagatur
1e8ab83d7b langchain[patch], core[patch], openai[patch], fireworks[minor]: ChatFireworks.with_structured_output (#18078)
<img width="1192" alt="Screenshot 2024-02-24 at 3 39 39 PM"
src="https://github.com/langchain-ai/langchain/assets/22008038/1cf74774-a23f-4b06-9b9b-85dfa2f75b63">
2024-02-26 12:46:39 -08:00
GoodBai
3589a135ef community: make SET allow_experimental_[engine]_index configurabe in vectorstores.clickhouse (#18107)
## Description & Issue
While following the official doc to use ClickHouse as a vectorstore, I
found that only the default `annoy` index is properly supported. But I
want to try another engine, `usearch`, because `annoy` is not properly
supported on ARM platforms.
Here is the settings I prefer:

``` python
settings = ClickhouseSettings(
    table="wiki_Ethereum",
    index_type="usearch",  # annoy by default
    index_param=[],
)
```
The above settings do not work, because the command `set
allow_experimental_annoy_index=1` is hard-coded.
This PR makes sure the experimental-feature setting follows the
`index_type`, which is also consistent with ClickHouse's naming conventions.
2024-02-26 12:39:17 -08:00
Dan Stambler
69344a0661 community: Add Laser Embedding Integration (#18111)
- **Description:** Added Integration with Meta AI's LASER
Language-Agnostic SEntence Representations embedding library, which
supports multilingual embedding for any of the languages listed here:
https://github.com/facebookresearch/flores/blob/main/flores200/README.md#languages-in-flores-200,
including several low resource languages
- **Dependencies:** laser_encoders
2024-02-26 12:16:37 -08:00
Erick Friis
257879e98d infra: api docs setup action location (#18148) 2024-02-26 11:50:21 -08:00
Erick Friis
28cf3aab45 infra: api docs build commit dir (#18147) 2024-02-26 11:47:04 -08:00
Heidi Steen
166f3d8351 Docs: azuresearch.ipynb (in docs/docs/integrations/vectorstores) -- fixed headings and comments (#18135)
This PR updates azuresearch.ipynb with an edit to the introduction
sentence, consistent heading levels, and disambiguation in code
comments.
2024-02-26 11:46:55 -08:00
Luan Fernandes
e867557936 [docs] Update doc-string for buffer_as_messages method in ConversationBufferWindowMemory (#18136)
minor fix stated in #18080
2024-02-26 11:46:43 -08:00
Barun Amalkumar Halder
23fc7c8c90 docs [patch] : fix import to use community path for handler in fiddler notebook (#18140)
**Description:** Update the example fiddler notebook to use community
path, instead of langchain.callback
**Dependencies:** None
**Twitter handle:** @bhalder

Co-authored-by: Barun Halder <barun@fiddler.ai>
2024-02-26 11:41:07 -08:00
Bagatur
767523f364 core[patch], langchain[patch], templates: move openai functions parsers to core (#18060)
![Screenshot 2024-02-23 at 7 48 03
PM](https://github.com/langchain-ai/langchain/assets/22008038/e5540c4d-0020-4ece-869f-ae19db2a1f3f)
2024-02-26 11:12:53 -08:00
Bagatur
96bff0ed5d infra: create api rst for specific pkg (#18144)
Example: create rst for libs/core only
```bash
poetry run python docs/api_reference/create_api_rst.py core
```
2024-02-26 11:04:22 -08:00
Nuno Campos
cd3ab3703b Improve runnable generator error messages (#18142)
h/t @hinthornw 

2024-02-26 18:54:25 +00:00
Nuno Campos
62a30efb12 Fix bug with using configurable_fields after configurable_alternatives (#18139)
Closes #17915
2024-02-26 10:27:07 -08:00
Erick Friis
f5cf6975ba docs: anthropic partner package docs (#18109) 2024-02-26 17:51:44 +00:00
Nuno Campos
b1d9ce541d Add BaseMessage.id (#17835)
2024-02-26 09:27:47 -08:00
Harrison Chase
935aefa8db add run name for query constructor (#18101)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-26 08:17:05 -08:00
Mohammad Mohtashim
719a1cde75 langchain[patch]: Update doc-string for a method in ConversationBufferWindowMemory (#18090)
A minor doc fix stated in #18080
2024-02-26 10:15:02 -05:00
Simon Schmidt
2716d58603 langchain: Import from langchain_core in langchain.smith to avoid deprecation warning (#18129)
Avoids deprecation warning that triggered at import time, e.g. with
`python -c 'import langchain.smith'`


/opt/venv/lib/python3.12/site-packages/langchain/callbacks/__init__.py:37:
LangChainDeprecationWarning: Importing this callback from langchain is
deprecated. Importing it from langchain will no longer be supported as
of langchain==0.2.0. Please import from langchain-community instead:

    `from langchain_community.callbacks import base`.

To install langchain-community run `pip install -U langchain-community`.
2024-02-26 10:14:10 -05:00
rongchenlin
9147a437f1 docs: Fix the bug in MongoDBChatMessageHistory notebook (#18128)
I tried to configure MongoDBChatMessageHistory using the code from the
original documentation to store messages based on the passed session_id
in MongoDB. However, this configuration did not take effect, and the
session id in the database remained as 'test_session'. To resolve this
issue, I found that when configuring MongoDBChatMessageHistory, it is
necessary to set session_id=session_id instead of
session_id=test_session.

Issue: DOC: Ineffective Configuration of MongoDBChatMessageHistory for
Custom session_id Storage

previous code:
```python
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: MongoDBChatMessageHistory(
        session_id="test_session",
        connection_string="mongodb://root:Y181491117cLj@123.56.224.232:27017",
        database_name="my_db",
        collection_name="chat_histories",
    ),
    input_messages_key="question",
    history_messages_key="history",
)
config = {"configurable": {"session_id": "mmm"}}
chain_with_history.invoke({"question": "Hi! I'm bob"}, config)
```

![image](https://github.com/langchain-ai/langchain/assets/83388493/c372f785-1ec1-43f5-8d01-b7cc07b806b7)


Modified code:
```python
chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: MongoDBChatMessageHistory(
        session_id=session_id,   # here is my modify code
        connection_string="mongodb://root:Y181491117cLj@123.56.224.232:27017",
        database_name="my_db",
        collection_name="chat_histories",
    ),
    input_messages_key="question",
    history_messages_key="history",
)
config = {"configurable": {"session_id": "mmm"}}
chain_with_history.invoke({"question": "Hi! I'm bob"}, config)
```

Effect after modification (it works):


![image](https://github.com/langchain-ai/langchain/assets/83388493/5776268c-9098-4da3-bf41-52825be5fafb)
2024-02-26 15:02:56 +00:00
Erick Friis
e3b7779926 docs: api docs for external repos (#17904)
Stacked on google removal PR. Will make google continue to show up in
API docs even from external repo
2024-02-26 06:19:09 +00:00
Erick Friis
248c5b84ee google-genai, google-vertexai: move to langchain-google (#17899)
These packages have moved to
https://github.com/langchain-ai/langchain-google

Left tombstone readmes incase anyone ends up at the "Source Code" link
from old pypi releases. Can keep these around for a few months.
2024-02-25 21:58:05 -08:00
Erick Friis
3b5bdbfee8 anthropic[minor]: package move (#17974) 2024-02-25 21:57:26 -08:00
Christophe Bornet
a2d5fa7649 community[patch]: Fix GenericRequestsWrapper _aget_resp_content must be async (#18065)
There are existing tests in
`libs/community/tests/unit_tests/tools/requests/test_tool.py`
2024-02-25 19:07:07 -08:00
Neli Hateva
a01e8473f8 community[patch]: Fix GraphSparqlQAChain so that it works with Ontotext GraphDB (#15009)
- **Description:** Introduce a new parameter `graph_kwargs` to
`RdfGraph` - parameters used to initialize the `rdflib.Graph` if
`query_endpoint` is set. Also, do not set
`rdflib.graph.DATASET_DEFAULT_GRAPH_ID` as default value for the
`rdflib.Graph` `identifier` if `query_endpoint` is set.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** N/A
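
A hedged sketch of the new parameter (endpoint and kwargs are illustrative; per the description, `graph_kwargs` is forwarded to `rdflib.Graph`):

```python
from langchain_community.graphs import RdfGraph

graph = RdfGraph(
    query_endpoint="http://localhost:7200/repositories/langchain",
    standard="rdf",
    # Forwarded to rdflib.Graph; e.g. control namespace binding behavior.
    graph_kwargs={"bind_namespaces": "rdflib"},
)
```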
2024-02-25 19:05:21 -08:00
Christophe Bornet
4d6cd5b46a astradb[patch]: Use astrapy's upsert_one method in AstraDBStore (#18063)
As `upsert` is deprecated
2024-02-25 19:04:18 -08:00
Danny McAteer
e42110f720 docs: Additional examples for partners/exa README (#18081)
**Description:** Add additional examples for other modules to
partners/exa README
**Issue:** #17545
**Dependencies:** None
**Twitter handle:** @DannyMcAteer8

---------

Co-authored-by: Daniel McAteer <danielmcateer@Daniels-MBP.attlocal.net>
Co-authored-by: Daniel McAteer <danielmcateer@Daniels-MacBook-Pro.local>
2024-02-25 18:53:47 -08:00
dokato
5afb242161 langchain[patch]: Make BooleanOutputParser more robust to non-binary responses (#17810)
- **Description:** I encountered this error when I tried to use
LLMChainFilter. Even if the message differs only slightly, like `Not
relevant (NO)`, this results in an error. It has already been reported
here: https://github.com/langchain-ai/langchain/issues/. This change
hopefully makes it more robust.
- **Issue:**  #11408 
- **Dependencies:** No
- **Twitter handle:** dokatox
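
A short sketch of the intended behavior after this change (the second call is the case that previously raised):

```python
from langchain.output_parsers.boolean import BooleanOutputParser

parser = BooleanOutputParser()
print(parser.parse("NO"))                 # False, as before
print(parser.parse("Not relevant (NO)"))  # now also False instead of raising
```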
2024-02-25 18:48:33 -08:00
Matt
3b08617a89 docs: update azure search langchain notebook (#18053)
**Description:** Update the azure search notebook to have more
descriptive comments, and an option to choose between OpenAI and
AzureOpenAI Embeddings

---------

Co-authored-by: Matt Gotteiner <[email protected]>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-25 18:48:13 -08:00
kYLe
17ecf6e119 community[patch]: Remove model limitation on Anyscale LLM (#17662)
**Description:** Llama Guard is deprecated on the Anyscale public
endpoint.
**Issue:** Changed the default model and removed the limitation of only
using Llama Guard with Anyscale LLMs; the Anyscale LLM integration also
works with all other chat models hosted on Anyscale.
Also added `async_client` for the Anyscale LLM.
2024-02-25 18:21:19 -08:00
Barun Amalkumar Halder
cc69976860 community[minor] : adds callback handler for Fiddler AI (#17708)
**Description:**  Callback handler to integrate fiddler with langchain. 
This PR adds the following -

1. `FiddlerCallbackHandler` implementation into langchain/community
2. Example notebook `fiddler.ipynb` for usage documentation

[Internal Tracker : FDL-14305]

**Issue:** 
NA

**Dependencies:** 
- Installation of langchain-community is unaffected.
- Usage of FiddlerCallbackHandler requires installation of latest
fiddler-client (2.5+)

**Twitter handle:** @fiddlerlabs @behalder

Co-authored-by: Barun Halder <barun@fiddler.ai>
2024-02-25 18:17:03 -08:00
Christophe Bornet
b8b5ce0c8c astradb: Add AstraDBChatMessageHistory to langchain-astradb package (#17732)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-25 18:14:49 -08:00
Maxime Perrin
c06a8732aa community[patch]: fix llama index imports and fields access (#17870)
- **Description:** Fixing outdated imports after v0.10 llama index
update and updating metadata and source text access
  - **Issue:** #17860
  - **Twitter handle:** @maximeperrin_

---------

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
2024-02-25 18:14:23 -08:00
BeatrixCohere
5d2d80a9a8 docs: Add Cohere examples in documentation (#17794)
- Description: Add cohere examples to documentation 
- Issue:N/A
- Dependencies: N/A

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-25 18:10:09 -08:00
Jacob Lee
c9eac3287e docs[patch]: Remove redundant Pinecone import (#18079)
CC @efriis
2024-02-24 19:27:54 -08:00
2jimoo
7fc903464a community: Add document manager and mongo document manager (#17320)
- **Description:** 
    - Add DocumentManager class, which is a NoSQL record manager. 
- In order to use index and aindex in
libs/langchain/langchain/indexes/_api.py, DocumentManager inherits
RecordManager.
    - Also added the MongoDB implementation of DocumentManager.
  - **Dependencies:** pymongo, motor
  

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-23 21:32:52 -05:00
Leonid Ganeline
3f6bf852ea experimental: docstrings update (#18048)
Added missed docstrings. Formatted docsctrings to the consistent format.
2024-02-23 21:24:16 -05:00
kYLe
56b955fc31 community[minor]: Add async_client for Anyscale Chat model (#18050)
Add `async_client` for Anyscale Chat_model
2024-02-23 21:22:54 -05:00
Eugene Yurtsev
68527b809d core[patch]: Runnable with message history to use add_messages (#17958)
This PR updates RunnableWithMessageHistory to use add_messages, which
saves round-trips for any chat history abstraction that implements the
optimization. If the optimization isn't implemented, add_messages
automatically invokes add_message serially.
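
A toy sketch of the optimization on the store side (in-memory and purely illustrative):

```python
from typing import List, Sequence

from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.messages import BaseMessage


class BulkHistory(BaseChatMessageHistory):
    """In-memory store that implements the bulk write."""

    def __init__(self) -> None:
        self.messages: List[BaseMessage] = []

    def add_message(self, message: BaseMessage) -> None:
        self.messages.append(message)

    def add_messages(self, messages: Sequence[BaseMessage]) -> None:
        # One bulk call instead of N serial add_message round-trips.
        self.messages.extend(messages)

    def clear(self) -> None:
        self.messages = []
```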
2024-02-23 21:19:38 -05:00
Bagatur
1c1bb1152e openai[patch]: refactor with_structured_output (#18052)
- make schema Optional with default val None, since in json_mode you
don't need it if not parsing to pydantic
- change return_type -> include_raw
- expand docstring examples
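
A hedged sketch of the resulting API (schema and prompt are made up):

```python
from langchain_core.pydantic_v1 import BaseModel
from langchain_openai import ChatOpenAI


class Joke(BaseModel):
    setup: str
    punchline: str


llm = ChatOpenAI(temperature=0)
structured_llm = llm.with_structured_output(Joke, include_raw=True)
out = structured_llm.invoke("Tell me a joke about cats")
# include_raw=True returns {"raw": <AIMessage>, "parsed": <Joke>,
# "parsing_error": None}
print(out["parsed"])
```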
2024-02-23 17:02:11 -08:00
Erick Friis
e85948d46b docs: fireworks tool calling docs (#18057) 2024-02-24 00:49:11 +00:00
Erick Friis
e566a3077e infra: simplify and fix CI for docs-only changes (#18058)
Current success check will fail on docs-only changes
2024-02-23 16:39:08 -08:00
Erick Friis
1a3383fba1 docs: fireworks fixes (#18056) 2024-02-23 15:58:53 -08:00
Erick Friis
a05fb19f42 openai[patch]: remove numpy dep (#18034) 2024-02-23 21:12:05 +00:00
Danny McAteer
e8be34f8c7 exa[patch]: update readme (#18047) 2024-02-23 21:05:42 +00:00
Yufei (Benny) Chen
ee6a773456 fireworks[patch]: Add Fireworks partner packages (#17694)
---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-23 20:45:47 +00:00
Erick Friis
11cf95e810 docs: recommend lambdas over runnablebranch (#18033) 2024-02-23 11:34:27 -08:00
Erick Friis
9ebbca3695 infra: CI success for partner packages 2 (#18043) 2024-02-23 11:10:39 -08:00
Erick Friis
b948f6da67 infra: CI success for partner packages (#18037)
2024-02-23 11:00:48 -08:00
Bagatur
22b964f802 community[patch]: Release 0.0.24 (#18038) 2024-02-23 10:49:29 -08:00
Erick Friis
29e0445490 community[patch]: BaseLLM typing in init (#18029) 2024-02-23 17:51:27 +00:00
Nicolò Boschi
4c132b4cc6 community: fix openai streaming throws 'AIMessageChunk' object has no attribute 'text' (#18006)
After upgrading langchain-community to 0.0.22, it's not possible to use
openai from the community package with streaming=True
```
  File "/home/runner/work/ragstack-ai/ragstack-ai/ragstack-e2e-tests/.tox/langchain/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 434, in _generate
    return generate_from_stream(stream_iter)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/runner/work/ragstack-ai/ragstack-ai/ragstack-e2e-tests/.tox/langchain/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 65, in generate_from_stream
    for chunk in stream:
  File "/home/runner/work/ragstack-ai/ragstack-ai/ragstack-e2e-tests/.tox/langchain/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 418, in _stream
    run_manager.on_llm_new_token(chunk.text, chunk=cg_chunk)
                                 ^^^^^^^^^^
AttributeError: 'AIMessageChunk' object has no attribute 'text'
```

Fix regression of https://github.com/langchain-ai/langchain/pull/17907 
**Twitter handle:** @nicoloboschi
2024-02-23 12:12:47 -05:00
Bagatur
9b982b2aba community[patch]: Release 0.0.23 (#18027) 2024-02-23 08:54:31 -08:00
Guangdong Liu
4197efd67a community: Fix SparkLLM error (#18015)
- **Description:** fix SparkLLM error
2024-02-23 06:40:29 -08:00
Bagatur
d9e6ca2279 langchain[patch]: Release 0.1.9 (#17999) 2024-02-22 21:45:30 -08:00
Bagatur
b46d6b04e1 community[patch]: Release 0.0.22 (#17994) 2024-02-22 21:35:04 -08:00
Bagatur
cc0290fdf3 openai[patch]: Release 0.0.7 (#17993) 2024-02-22 21:33:59 -08:00
Erick Friis
a2886c4509 infra: skip codespell ambr (#17992) 2024-02-23 01:26:55 +00:00
Erick Friis
8dda7c32ba infra: ci failure job (#17989) 2024-02-23 01:22:35 +00:00
Bagatur
e045655657 core[patch]: Release 0.1.26 (#17990) 2024-02-22 17:12:51 -08:00
Reid Falconer
0534ba5a7d langchain[patch]: return formatted SPARQL query on demand (#11263)
- **Description:** Added the `return_sparql_query` feature to the
`GraphSparqlQAChain` class, allowing users to get the formatted SPARQL
query along with the chain's result.
  - **Issue:** NA
  - **Dependencies:** None

Note: I've ensured that the PR passes linting and testing by running
make format, make lint, and make test locally.

I have added a test for the integration (which relies on network access)
and I have added an example to the notebook showing its use.
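A hedged sketch of the new flag; the graph setup and the output key name are assumptions here, not confirmed by this PR:

```python
from langchain.chains import GraphSparqlQAChain
from langchain_community.graphs import RdfGraph
from langchain_openai import ChatOpenAI

graph = RdfGraph(source_file="http://www.w3.org/People/Berners-Lee/card")
chain = GraphSparqlQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, return_sparql_query=True
)
result = chain("What is Tim Berners-Lee's work homepage?")
print(result["sparql_query"])  # assumed key holding the formatted query
```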
2024-02-22 17:03:26 -08:00
Leo Diegues
b15fccbb99 community[patch]: Skip OpenAIWhisperParser extremely small audio chunks to avoid api error (#11450)
**Description**
This PR addresses a rare issue in `OpenAIWhisperParser` that causes it
to crash when processing an audio file with a duration very close to the
class's chunk size threshold of 20 minutes.

**Issue**
#11449

**Dependencies**
None

**Tag maintainer**
@agola11 @eyurtsev 

**Twitter handle**
leonardodiegues

---------

Co-authored-by: Leonardo Diegues <leonardo.diegues@grupofolha.com.br>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-22 17:02:43 -08:00
Issac
46505742eb Update quickstart.mdx (#17659)
https://github.com/langchain-ai/langchain/issues/17657
2024-02-22 17:01:40 -08:00
Erick Friis
afc1def49b infra: ci end check, consolidation (#17987)
Consolidates CI checks into check_diffs.yml so they roll up into a single success status
2024-02-22 16:53:10 -08:00
Jorge Villegas
f6a98032e4 docs: langchain-anthropic README updates (#17684)
# PR Message

- **Description:** This PR adds a README file for the Anthropic API in
the `libs/partners` folder of this repository. The README includes:
  - A brief description of the Anthropic package
  - Installation & API instructions
  - Usage examples
  
- **Issue:**
[17545](https://github.com/langchain-ai/langchain/issues/17545)
  
- **Dependencies:** None

Additional notes:
This change only affects the docs package and does not introduce any new
dependencies.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-22 16:22:30 -08:00
Erick Friis
cd806400fc infra: ci end check (#17986) 2024-02-22 16:18:50 -08:00
mackong
9678797625 community[patch]: callback before yield for _stream/_astream (#17907)
- Description: call the on_llm_new_token callback before yielding the chunk in _stream/_astream for some chat models, making all chat models behave consistently.
- Issue: N/A
- Dependencies: N/A
2024-02-22 16:15:21 -08:00
Stan Duprey
15e42f1799 docs: Added langchainhub install and fixed typo (#17985)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-22 16:03:40 -08:00
Chad Juliano
50ba3c68bb community[minor]: add Kinetica LLM wrapper (#17879)
**Description:** Initial pull request for Kinetica LLM wrapper
**Issue:** N/A
**Dependencies:** No new dependencies for unit tests. Integration tests
require gpudb, typeguard, and faker
**Twitter handle:** @chad_juliano

Note: There is another pull request for Kinetica vectorstore. Ultimately
we would like to make a partner package but we are starting with a
community contribution.
2024-02-22 16:02:00 -08:00
Matt
6ef12fdfd2 docs: Update Azure Search vector store notebook (#17901)
- **Description:** Update the Azure Search vector store notebook for the
latest version of the SDK

---------

Co-authored-by: Matt Gotteiner <[email protected]>
2024-02-22 15:59:43 -08:00
Averi Kitsch
c05cbf0533 docs: Update Google Provider documentation (#17970)
**Description:** Clean up Google product names and fix document loader
section
**Issue:** NA
**Dependencies:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-22 15:58:52 -08:00
Erick Friis
ed789be8f4 docs, templates: update schema imports to core (#17885)
- chat models, messages
- documents
- agentaction/finish
- baseretriever,document
- stroutputparser
- more messages
- basemessage
- format_document
- baseoutputparser

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-22 15:58:44 -08:00
Leonid Ganeline
971d29e718 docs: robocorpai docstrings (#17968)
Added missing docstrings

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-02-22 15:55:01 -08:00
Bagatur
b0cfb86c48 langchain[minor]: openai tools structured_output_chain (#17296)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-22 15:42:47 -08:00
Bagatur
b5f8cf9509 core[minor], openai[minor], langchain[patch]: BaseLanguageModel.with_structured_output (#17302)
```python
class Foo(BaseModel):
  bar: str

structured_llm = ChatOpenAI().with_structured_output(Foo)
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-22 15:33:34 -08:00
Leonid Ganeline
f685d2f50c docs: partner package list (#17978)
Updated partner package list
2024-02-22 18:23:07 -05:00
Erick Friis
29660f8918 docs: logo (#17972) 2024-02-22 15:20:34 -08:00
Bagatur
9b0b0032c2 community[patch]: fix lint (#17984) 2024-02-22 15:15:27 -08:00
bear
e8633e53c4 docs: Rerun the Tongyi Qwen model to fix incorrect responses. (#17693)
This PR updates the docs of the Tongyi Qwen model:
1. Fixes the previously incorrect responses of Tongyi Qwen.
2. Rewrites the example with LCEL.
2024-02-22 13:20:04 -08:00
esque
78521caf51 templates: Update README.md - Fixing a typo (#17689)
- **Description:** PR to fix typo in readme
    - **Issue:** typo in readme
    - **Dependencies:** no
    - **Twitter handle:** p_moolrajani
2024-02-22 13:19:37 -08:00
Christophe Bornet
4f88a5130e langchain[patch]: Support langchain-astradb AstraDBVectorStore in self-query retriever (#17728)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-22 13:19:27 -08:00
Muhammad Abdullah Hashmi
9775de46cc community[patch]: Remove subscript for Result type object (#17823)
Resolved `TypeError: 'type' object is not subscriptable` by removing the subscript from the `Result` type object.

- **Description:** Resolve type error for SQLAlchemy Result object in QuerySQLDataBaseTool class
2024-02-22 13:16:14 -08:00
Mateusz Szewczyk
f6e3aa9770 docs: update IBM watsonx.ai docs (#17932)
- **Description:** Update IBM watsonx.ai docs and add IBM as a provider
docs
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),
2024-02-22 10:22:18 -08:00
David Loving
d068e8ea54 community[patch]: compatibility with SQLAlchemy 1.4.x (#17954)
**Description:**
Change type hint on `QuerySQLDataBaseTool` to be compatible with
SQLAlchemy v1.4.x.

**Issue:**
Users locked to `SQLAlchemy < 2.x` are unable to import
`QuerySQLDataBaseTool`.

closes https://github.com/langchain-ai/langchain/issues/17819

**Dependencies:**
None
2024-02-22 13:17:07 -05:00
Erick Friis
e237dcec91 pinecone[patch]: integration test debug (#17960) 2024-02-22 09:11:21 -08:00
kartikTAI
9cf6661dc5 community: use NeuralDB object to initialize NeuralDBVectorStore (#17272)
**Description:** This PR adds an `__init__` method to the
NeuralDBVectorStore class, which takes in a NeuralDB object to
instantiate the state of NeuralDBVectorStore.
**Issue:** N/A
**Dependencies:** N/A
**Twitter handle:** N/A
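A minimal sketch, assuming the thirdai package is installed; the `db` keyword is inferred from the description above, not confirmed:

```python
# Sketch only; assumes `thirdai` is installed and the keyword is `db`.
from thirdai import neural_db as ndb

from langchain_community.vectorstores import NeuralDBVectorStore

db = ndb.NeuralDB()
vectorstore = NeuralDBVectorStore(db=db)
```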
2024-02-22 12:05:01 -05:00
hongbo.mo
a51a257575 langchain_openai[patch]: fix typos in langchain_openai (#17923)
Just a small typo
2024-02-22 12:03:16 -05:00
Brad Erickson
ecd72d26cf community: Bugfix - correct Ollama API path to avoid HTTP 307 (#17895)
Sets the correct /api/generate path, without a trailing /, to reduce HTTP
requests.

Reference:

https://github.com/ollama/ollama/blob/efe040f8/docs/api.md#generate-request-streaming

Before:

    DEBUG: Starting new HTTP connection (1): localhost:11434
    DEBUG: http://localhost:11434 "POST /api/generate/ HTTP/1.1" 307 0
    DEBUG: http://localhost:11434 "POST /api/generate HTTP/1.1" 200 None

After:

    DEBUG: Starting new HTTP connection (1): localhost:11434
    DEBUG: http://localhost:11434 "POST /api/generate HTTP/1.1" 200 None
2024-02-22 11:59:55 -05:00
Erick Friis
a53370a060 pinecone[patch], docs: PineconeVectorStore, release 0.0.3 (#17896) 2024-02-22 08:24:08 -08:00
Graden Rea
e5e38e89ce partner: Add groq partner integration and chat model (#17856)
Description: Add a Groq chat model
issue: TODO
Dependencies: groq
Twitter handle: N/A
2024-02-22 07:36:16 -08:00
William FH
da957a22cc Redirect the expression language guides (#17914) 2024-02-22 00:39:57 -08:00
Leonid Ganeline
919b8a387f docs: sorting Examples using ... section (#17588)
In the API Reference docs, if a class has a long list of examples that work with it, the `Examples using` list is [hard to comprehend](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.openai.OpenAI.html#langchain-community-llms-openai-openai). If this list were sorted, it would be much easier to scan.
- sorted the `Examples using <ClassName>` list
2024-02-21 17:04:23 -08:00
Hasan
7248e98b9e community[patch]: Return PK in similarity search Document (#17561)
Issue: #17390

Co-authored-by: hasan <hasan@m2sys.com>
2024-02-21 17:03:50 -08:00
Raunak
1ec8199c8e community[patch]: Added more functions in NetworkxEntityGraph class (#17624)
- **Description:**
1. Added add_node(), remove_node(), has_node(), remove_edge(), has_edge() and get_neighbors() functions in the NetworkxEntityGraph class.
2. Added the above functions to the graph_networkx_qa.ipynb documentation.
2024-02-21 17:02:56 -08:00
William FH
42f158c128 docs: typo (#17710) 2024-02-21 16:53:41 -08:00
Christophe Bornet
0e26b16930 docs: Fix AstraDBVectorStore docstring (#17706) 2024-02-21 16:53:08 -08:00
Neli Hateva
66e1005898 docs: Update Links to resources in the GraphDB QA Chain documentation (#17720)
- **Description:** Update Links to resources in the GraphDB QA Chain
documentation
    - **Issue:** N/A
    - **Dependencies:** N/A
    - **Twitter handle:** N/A
2024-02-21 16:51:32 -08:00
Christophe Bornet
3d91be94b1 community[patch]: Add missing async_astra_db_client param to AstraDBChatMessageHistory (#17742) 2024-02-21 16:46:42 -08:00
Xudong Sun
c524bf31f5 docs: add helpful comments to sparkllm.py (#17774)
Adding helpful comments to sparkllm.py, help users to use ChatSparkLLM
more effectively
2024-02-21 16:42:54 -08:00
Ian
3019a594b7 community[minor]: Add tidb loader support (#17788)
This pull request adds support for loading data from a TiDB database with
LangChain.

A simple usage:
```python
from langchain_community.document_loaders import TiDBLoader

CONNECTION_STRING = "mysql+pymysql://root@127.0.0.1:4000/test"

QUERY = "select id, name, description from items;"
loader = TiDBLoader(
    connection_string=CONNECTION_STRING,
    query=QUERY,
    page_content_columns=["name", "description"],
    metadata_columns=["id"],
)
documents = loader.load()
print(documents)
```
2024-02-21 16:42:33 -08:00
Christophe Bornet
815ec74298 docs: Add docstring to AstraDBStore (#17793) 2024-02-21 16:41:47 -08:00
Jacob Lee
375051a64e 👥 Update LangChain people data (#17900)
👥 Update LangChain people data

---------

Co-authored-by: github-actions <github-actions@github.com>
2024-02-21 16:38:28 -08:00
Bagatur
762f49162a docs: fix api build (#17898) 2024-02-21 16:34:37 -08:00
ehude
9e54c227f1 community[patch]: Bug Neo4j VectorStore when having multiple indexes the sort is not working and the store that returned is random (#17396)
Bug fix: when there are multiple indexes, the sort does not work and the store returned is random. The following small fix resolves the issue.
2024-02-21 16:33:33 -08:00
Michael Feil
242981b8f0 community[minor]: infinity embedding local option (#17671)
**drop-in-replacement for sentence-transformers
inference.**

https://github.com/langchain-ai/langchain/discussions/17670

tldr from the discussion above -> around a 4x-22x speedup over using
SentenceTransformers / huggingface embeddings. For more info:
https://github.com/michaelfeil/infinity (pure-python dependency)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-21 16:33:13 -08:00
Aymen EL Amri
581095b9b5 docs: fix a small typo (#17859)
Just a small typo
2024-02-21 16:31:31 -08:00
Leonid Ganeline
ed0b7c3b72 docs: added community modules descriptions (#17827)
API Reference: several `community` modules (like the [adapter](https://api.python.langchain.com/en/latest/community_api_reference.html#module-langchain_community.adapters) module) are missing descriptions. This happened when langchain was split into the core, langchain, and community packages.
- Copied module descriptions from other packages
- Fixed several descriptions to a consistent format.
2024-02-21 16:18:36 -08:00
Christophe Bornet
5019951a5d docs: AstraDB VectorStore docstring (#17834) 2024-02-21 16:16:31 -08:00
Leonid Ganeline
2f2b77602e docs: modules descriptions (#17844)
Several `core` modules do not have descriptions, like the [agent](https://api.python.langchain.com/en/latest/core_api_reference.html#module-langchain_core.agents) module.
- Added the missing module descriptions. The descriptions are mostly copied from the `langchain` or `community` package modules.
2024-02-21 15:58:21 -08:00
aditya thomas
d9aa11d589 docs: Change module import path for SQLDatabase in the documentation (#17874)
**Description:** This PR changes the module import path for SQLDatabase
in the documentation
**Issue:** Updates the documentation to reflect the move of integrations
to langchain-community
2024-02-21 15:57:30 -08:00
Christophe Bornet
f8a3b8e83f docs: Update langchain-astradb README with AstraDBStore (#17864) 2024-02-21 15:51:40 -08:00
Rohit Gupta
3acd0c74fc community[patch]: added SCANN index in default search params (#17889)
This enables users to add data to the same collection for the SCANN index type in Milvus.
2024-02-21 15:47:47 -08:00
Karim Assi
afc1ba0329 community[patch]: add possibility to search by vector in OpenSearchVectorSearch (#17878)
- **Description:** implements the missing `similarity_search_by_vector`
function for `OpenSearchVectorSearch`
- **Issue:** N/A
- **Dependencies:** N/A
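A minimal sketch of the new method; the local endpoint and index name are placeholders, and FakeEmbeddings stands in for a real embedding model:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import OpenSearchVectorSearch

embeddings = FakeEmbeddings(size=1536)
store = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",
    index_name="my-index",
    embedding_function=embeddings,
)
# Search directly with a precomputed vector instead of a text query.
query_vector = embeddings.embed_query("What is LangChain?")
docs = store.similarity_search_by_vector(query_vector, k=4)
```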
2024-02-21 15:44:55 -08:00
Matthew Kwiatkowski
144f59b5fe docs: Fix URL typo in tigris.ipynb (#17894)
- **Description:** The URL in the tigris tutorial was htttps instead of
https, leading to a bad link.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** Speucey
2024-02-21 15:39:38 -08:00
Nathan Voxland (Activeloop)
9ece134d45 docs: Improved deeplake.py init documentation (#17549)
**Description:** 
Updated documentation for DeepLake init method.

Especially the exec_option docs needed improvement, but did a general
cleanup while I was looking at it.

**Issue:** n/a
**Dependencies:** None

---------

Co-authored-by: Nathan Voxland <nathan@voxland.net>
2024-02-21 15:33:00 -08:00
Zachary Toliver
29ee0496b6 community[patch]: Allow override of 'fetch_schema_from_transport' in the GraphQL tool (#17649)
- **Description:** In order to override the bool value of
"fetch_schema_from_transport" in the GraphQLAPIWrapper, a
"fetch_schema_from_transport" value needed to be added to the
"_EXTRA_OPTIONAL_TOOLS" dictionary in load_tools in the "graphql" key.
The parameter "fetch_schema_from_transport" must also be passed in to
the GraphQLAPIWrapper to allow reading of the value when creating the
client. Passing as an optional parameter is probably best to avoid
breaking changes. This change is necessary to support GraphQL instances
that do not support fetching schema, such as TigerGraph. More info here:
[TigerGraph GraphQL Schema
Docs](https://docs.tigergraph.com/graphql/current/schema)
  - **Threads handle:** @zacharytoliver
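A hedged sketch of the pass-through (the endpoint URL is a placeholder):

```python
from langchain.agents import load_tools

# fetch_schema_from_transport can now be forwarded through load_tools to
# the underlying GraphQLAPIWrapper, per this change.
tools = load_tools(
    ["graphql"],
    graphql_endpoint="https://your-graphql-endpoint/graphql",
    fetch_schema_from_transport=False,
)
```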

---------

Co-authored-by: Zachary Toliver <zt10191991@hotmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-21 15:32:43 -08:00
mackong
31891092d8 community[patch]: add missing chunk parameter for _stream/_astream (#17807)
- Description: add the missing chunk parameter to _stream/_astream for some chat models, making all chat models behave consistently.
- Issue: N/A
- Dependencies: N/A
2024-02-21 15:32:28 -08:00
ccurme
1b0802babe core: fix .bind when used with RunnableLambda async methods (#17739)
**Description:** Here is a minimal example to illustrate behavior:
```python
from langchain_core.runnables import RunnableLambda

def my_function(*args, **kwargs):
    return 3 + kwargs.get("n", 0)

runnable = RunnableLambda(my_function).bind(n=1)


assert 4 == runnable.invoke({})
assert [4] == list(runnable.stream({}))

assert 4 == await runnable.ainvoke({})
assert [4] == [item async for item in runnable.astream({})]
```
Here, `runnable.invoke({})` and `runnable.stream({})` work fine, but
`runnable.ainvoke({})` raises
```
TypeError: RunnableLambda._ainvoke.<locals>.func() got an unexpected keyword argument 'n'
```
and similarly for `runnable.astream({})`:
```
TypeError: RunnableLambda._atransform.<locals>.func() got an unexpected keyword argument 'n'
```
Here we assume that this behavior is undesired and attempt to fix it.

**Issue:** https://github.com/langchain-ai/langchain/issues/17241,
https://github.com/langchain-ai/langchain/discussions/16446
2024-02-21 15:31:52 -08:00
Gianluca Giudice
f541545c96 Docs: Fix typo (#17733)
- **Description:** fix doc typo
2024-02-21 15:31:43 -08:00
qqubb
41726dfa27 docs: minor grammatical correction. (#17724)
- **Description:** a minor grammatical correction.
2024-02-21 15:31:37 -08:00
volodymyr-memsql
0a9a519a39 community[patch]: Added add_images method to SingleStoreDB vector store (#17871)
In this pull request, we introduce the add_images method to the
SingleStoreDB vector store class, expanding its capabilities to handle
multi-modal embeddings seamlessly. This method facilitates the
incorporation of image data into the vector store by associating each
image's URI with corresponding document content, metadata, and either
pre-generated embeddings or embeddings computed using the embed_image
method of the provided embedding object.

The change includes integration tests validating the behavior of add_images. Additionally, we provide a notebook showcasing the usage of this new method.
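A minimal sketch under stated assumptions: FakeEmbeddings stands in for a multi-modal embedder with an embed_image method, and the `uris` keyword is inferred from the description above:

```python
import os

from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import SingleStoreDB

# Sketch only; connection URL is a placeholder and the add_images
# signature is assumed from the description.
os.environ["SINGLESTOREDB_URL"] = "root:pass@localhost:3306/db"
vectorstore = SingleStoreDB(FakeEmbeddings(size=1024))
vectorstore.add_images(uris=["./images/cat.png", "./images/dog.png"])
```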

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
2024-02-21 15:16:32 -08:00
Guangdong Liu
7735721929 docs: update sparkllm intro doc (#17848)
**Description:** update sparkllm intro doc.
**Issue:** None
**Dependencies:** None
**Twitter handle:** None
2024-02-21 15:02:20 -08:00
Leonid Ganeline
6f5b7b55bd docs: API Reference builder bug fix (#17890)
Issue in the API Reference:
If the `Classes` or `Functions` section is empty, it is still shown in the API Reference. Here is an [example](https://api.python.langchain.com/en/latest/core_api_reference.html#module-langchain_core.agents) where the `Functions` table is empty but still presented. It happens only if the section has only "private" members (with names starting with '_'): those members are not shown, but the whole (empty) member section is.
2024-02-21 15:59:35 -05:00
Shashank
8381f859b4 community[patch]: Graceful handling of redis errors in RedisCache and AsyncRedisCache (#17171)
- **Description:**
The existing `RedisCache` implementation lacks proper handling for redis
client failures, such as `ConnectionRefusedError`, leading to subsequent
failures in pipeline components like LLM calls. This pull request aims
to improve error handling for redis client issues, ensuring a more
robust and graceful handling of such errors.

  - **Issue:**  Fixes #16866
  - **Dependencies:** No new dependency
  - **Twitter handle:** N/A
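A short usage sketch; the Redis URL is a placeholder:

```python
from redis import Redis

from langchain.globals import set_llm_cache
from langchain_community.cache import RedisCache

# With this change, a redis client failure (e.g. ConnectionRefusedError)
# during a cache lookup degrades gracefully instead of failing the LLM call.
set_llm_cache(RedisCache(redis_=Redis.from_url("redis://localhost:6379")))
```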

Co-authored-by: snsten <>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-21 12:15:19 -05:00
Christophe Bornet
e6311d953d community[patch]: Add AstraDBLoader docstring (#17873) 2024-02-21 11:41:34 -05:00
nbyrneKX
c1bb5fd498 community[patch]: typo in doc-string for kdbai vectorstore (#17811)
2024-02-21 10:35:11 -05:00
Jacob Lee
5395c254d5 👥 Update LangChain people data (#17743)
👥 Update LangChain people data

---------

Co-authored-by: github-actions <github-actions@github.com>
2024-02-20 18:30:11 -08:00
Erick Friis
a206d3cf69 docs: remove stale redirects (#17831)
Removes /platform redirects as well as any redirects whose source hasn't
been touched in over 6 months
2024-02-20 17:11:43 -08:00
Christophe Bornet
f59ddcab74 partners/astradb: Use single file instead of module for AstraDBVectorStore (#17644) 2024-02-20 16:58:56 -08:00
Savvas Mantzouranidis
691ff67096 partners/openai: fix deprecation errors of pydantic's .dict() function (reopen #16629) (#17404)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-20 16:57:34 -08:00
Christophe Bornet
bebe401b1a astradb[patch]: Add AstraDBStore to langchain-astradb package (#17789)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-20 16:54:35 -08:00
Bagatur
4e28888d45 core[patch]: Release 0.1.25 (#17833) 2024-02-20 16:43:28 -08:00
Erick Friis
f154cd64fe astradb[patch]: relaxed httpx version constraint (#17826)
relock to newest sdk
2024-02-20 15:45:25 -08:00
Nuno Campos
223e5eff14 Add JSON representation of runnable graph to serialized representation (#17745)
Sent to LangSmith
2024-02-20 14:51:09 -08:00
Erick Friis
6e854ae371 docs: fix api docs search (#17820) 2024-02-20 13:33:20 -08:00
Guangdong Liu
47b1b7092d community[minor]: Add SparkLLM to community (#17702) 2024-02-20 11:23:47 -08:00
Guangdong Liu
3ba1cb8650 community[minor]: Add SparkLLM Text Embedding Model and SparkLLM introduction (#17573) 2024-02-20 11:22:27 -08:00
Christophe Bornet
33555e5cbc docs: Add typehints in both signature and description of API docs (#17815)
This way, we can document API types in method signatures only, where they are checked by the typing system, and still get them in the param descriptions without having to duplicate them in the docstrings (where they would be unchecked).

Twitter: @cbornet_
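An illustrative example of the convention (a hypothetical function, not code from this PR): types live in the signature only, and the docstring describes parameters without repeating them.

```python
def similarity_search(query: str, k: int = 4) -> list:
    """Run a similarity search.

    Args:
        query: Text to look up. The type comes from the signature only.
        k: Number of documents to return.
    """
    return []
```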
2024-02-20 14:21:08 -05:00
Virat Singh
92e52e89ca community: Add PolygonTickerNews Tool (#17808)
Description:
In this PR, I am adding a PolygonTickerNews Tool, which can be used to
get the latest news for a given ticker / stock.

Twitter handle: [@virattt](https://twitter.com/virattt)
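A hedged usage sketch; the import paths are assumed from the sibling Polygon tools, and POLYGON_API_KEY must be set:

```python
import os

from langchain_community.tools.polygon import PolygonTickerNews
from langchain_community.utilities.polygon import PolygonAPIWrapper

os.environ.setdefault("POLYGON_API_KEY", "...")  # placeholder key
news_tool = PolygonTickerNews(api_wrapper=PolygonAPIWrapper())
print(news_tool.run("AAPL"))  # latest news for a given ticker
```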
2024-02-20 10:15:29 -08:00
Eugene Yurtsev
441160d6b3 Docs: Update contributing documentation (#17557)
This PR adds more details about how to contribute to documentation.
2024-02-20 12:28:15 -05:00
Christophe Bornet
b13e52b6ac community[patch]: Fix AstraDBCache docstrings (#17802) 2024-02-20 11:39:30 -05:00
Eugene Yurtsev
865cabff05 Docs: Add custom chat model documenation (#17595)
This PR adds documentation about how to implement a custom chat model.
2024-02-19 22:03:49 -05:00
Nuno Campos
07ee41d284 Cache calls to create_model for get_input_schema and get_output_schema (#17755) 2024-02-19 13:26:42 -08:00
Bagatur
5ed16adbde experimental[patch]: Release 0.0.52 (#17763) 2024-02-19 13:12:22 -08:00
Bagatur
da7bca2178 langchain[patch]: bump community to 0.0.21 (#17754) 2024-02-19 12:58:32 -08:00
Bagatur
441448372d langchain[patch]: Release 0.1.8 (#17751) 2024-02-19 11:27:37 -08:00
Bagatur
a9d3c100a2 infra: PR template nits (#17752) 2024-02-19 11:22:31 -08:00
Bagatur
ad285ca15c community[patch]: Release 0.0.21 (#17750) 2024-02-19 11:13:33 -08:00
Karim Lalani
ea61302f71 community[patch]: bug fix - add empty metadata when metadata not provided (#17669)
Code fix to include an empty metadata dictionary in aadd_texts when metadata is not provided.
2024-02-19 10:54:52 -08:00
CogniJT
919ebcc596 community[minor]: CogniSwitch Agent Toolkit for LangChain (#17312)
**Description**: CogniSwitch focuses on making GenAI usage more reliable. It abstracts out the complexity & decision making required for tuning processing, storage & retrieval. Using simple APIs, documents / URLs can be processed into a Knowledge Graph that can then be used to answer questions.

**Dependencies**: No dependencies. Just network calls & API key required
**Tag maintainer**: @hwchase17
**Twitter handle**: https://github.com/CogniSwitch
**Documentation**: Please check
`docs/docs/integrations/toolkits/cogniswitch.ipynb`
**Tests**: The usual tool & toolkits tests using `test_imports.py`

PR has passed linting and testing before this submission.

---------

Co-authored-by: Saicharan Sridhara <145636106+saiCogniswitch@users.noreply.github.com>
2024-02-19 10:54:13 -08:00
Christophe Bornet
6275d8b1bf docs: Fix AstraDBChatMessageHistory docstrings (#17740) 2024-02-19 10:47:38 -08:00
Pranav Agarwal
86ae48b781 experimental[minor]: Amazon Personalize support (#17436)
## Amazon Personalize support on Langchain

This PR is a successor to this PR -
https://github.com/langchain-ai/langchain/pull/13216

This PR introduces an integration with [Amazon
Personalize](https://aws.amazon.com/personalize/) to help you to
retrieve recommendations and use them in your natural language
applications. This integration provides two new components:

1. An `AmazonPersonalize` client, that provides a wrapper around the
Amazon Personalize API.
2. An `AmazonPersonalizeChain`, that provides a chain to pull in
recommendations using the client, and then generating the response in
natural language.

We have added this to langchain_experimental since there was feedback
from the previous PR about having this support in experimental rather
than the core or community extensions.

Here is some sample code to explain the usage.

```python

from langchain_experimental.recommenders import AmazonPersonalize
from langchain_experimental.recommenders import AmazonPersonalizeChain
from langchain.llms.bedrock import Bedrock

recommender_arn = "<insert_arn>"

client=AmazonPersonalize(
    credentials_profile_name="default",
    region_name="us-west-2",
    recommender_arn=recommender_arn
)
bedrock_llm = Bedrock(
    model_id="anthropic.claude-v2", 
    region_name="us-west-2"
)

chain = AmazonPersonalizeChain.from_llm(
    llm=bedrock_llm, 
    client=client
)
response = chain({'user_id': '1'})
```


Reviewer: @3coins
2024-02-19 10:36:37 -08:00
Aymeric Roucher
0d294760e7 Community: Fuse HuggingFace Endpoint-related classes into one (#17254)
## Description
Fuse HuggingFace Endpoint-related classes into one:
-
[HuggingFaceHub](5ceaf784f3/libs/community/langchain_community/llms/huggingface_hub.py)
-
[HuggingFaceTextGenInference](5ceaf784f3/libs/community/langchain_community/llms/huggingface_text_gen_inference.py)
- and
[HuggingFaceEndpoint](5ceaf784f3/libs/community/langchain_community/llms/huggingface_endpoint.py)

Are fused into
- HuggingFaceEndpoint
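A short sketch of the fused class (model id and token are placeholders):

```python
from langchain_community.llms import HuggingFaceEndpoint

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.2",
    max_new_tokens=256,
    huggingfacehub_api_token="hf_...",
)
print(llm.invoke("What is LangChain?"))
```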

## Issue
The duplication of classes was creating a lack of clarity, and the additional effort to develop classes leads to issues like [this hack](5ceaf784f3/libs/community/langchain_community/llms/huggingface_endpoint.py (L159)).

## Dependencies

None; this removes dependencies.

## Twitter handle

If you want to post about this: @AymericRoucher

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-19 10:33:15 -08:00
Bagatur
8009be862e core[patch]: Release 0.1.24 (#17744) 2024-02-19 10:27:26 -08:00
Raghav Dixit
6c18f73ca5 community[patch]: LanceDB integration improvements/fixes (#16173)
Hi, I'm from the LanceDB team.

Improves LanceDB integration by making it easier to use - now you aren't
required to create tables manually and pass them in the constructor,
although that is still backward compatible.

Bug fix - pandas was being used even though it's not a dependency for
LanceDB or langchain

PS: this issue was raised a few months ago but lost traction. It is a feature improvement for our users; kindly review it. Thanks!
2024-02-19 10:22:02 -08:00
Christophe Bornet
e92e96193f community[minor]: Add async methods to the AstraDB BaseStore (#16872)
---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-02-19 10:11:49 -08:00
Mohammad Mohtashim
43dc5d3416 community[patch]: OpenLLM Client Fixes + Added Timeout Parameter (#17478)
- OpenLLM was using an outdated method to get the final text output from the openllm client invocation, which was raising an error. Corrected that.
- OpenLLM `_identifying_params` was getting the openllm client's configuration using outdated attributes, which was raising an error.
- Updated the docstring for OpenLLM.
- Added a timeout parameter to be passed to the underlying openllm client.
2024-02-19 10:09:11 -08:00
Leonid Ganeline
1d2aa19aee docs: Fix bug that caused the word "Beta" to appear twice in doc-strings (#17704)
The current issue:
Several beta descriptions in the API Reference are duplicated. For
example:
`[Beta] Get a context value.[Beta] Get a context value.` for the
[ContextGet
class](https://api.python.langchain.com/en/latest/core_api_reference.html#module-langchain_core.beta)
description.

NOTE: I've tested it only with a new unit test! I cannot build the API Reference locally :(
This PR related to #17615
2024-02-18 21:38:37 -05:00
Guangdong Liu
73edf17b4e community[minor]: Add Apache Doris as vector store (#17527)
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-18 12:05:58 -07:00
Bagatur
a058c8812d community[patch]: add VoyageEmbeddings truncation (#17638) 2024-02-18 10:21:21 -07:00
Eugene Yurtsev
d7c26c89b2 ci: rename makefile -> Makefile in docker (#17648)
Minor file rename.
2024-02-16 16:59:18 -05:00
Mohammad Mohtashim
8d4547ae97 [Langchain_community]: Corrected the imports to make them compatible with Sqlachemy <2.0 (#17653)
- Small change in imports in the sql_database module to make it work with SQLAlchemy <2.0
- This was identified in the following issue: #17616
2024-02-16 16:59:08 -05:00
Christophe Bornet
75465a2a3c partners/astradb: Add dotenv to langchain-astradb integration tests (#17629) 2024-02-16 11:48:30 -05:00
Stefano Lottini
2a239710a0 docs: update astradb imports in docs/sample notebook to import from partner package (#17627)
This PR replaces the imports of the Astra DB vector store with the
newly-released partner package, in compliance with the deprecation
notice now attached to the community "legacy" store.
2024-02-16 11:30:13 -05:00
Christophe Bornet
19ebc7418e community: Use _AstraDBCollectionEnvironment in AstraDB VectorStore (community) (#17635)
Another PR will be done for the langchain-astradb package.

Note: for future PRs, development will be done in the partner package only. This one just aligns with the rest of the components in the community package, and it fixes a bunch of issues.
2024-02-16 11:28:16 -05:00
ccurme
0b33abc8b1 docs: update documentation for RunnableWithMessageHistory (#17602)
- **Description:** Update documentation for RunnableWithMessageHistory
- **Issue:** https://github.com/langchain-ai/langchain/issues/16642

I don't have access to an Anthropic API key so I updated things to use
OpenAI. Let me know if you'd prefer another provider.
2024-02-16 11:25:49 -05:00
Mateusz Szewczyk
e25b722ea9 watsonx[patch]: Invoke callback prior to yielding token when streaming (#17625)
**Description**: Invoke callback prior to yielding token in stream
method for watsonx.
 **Issue**: https://github.com/langchain-ai/langchain/issues/16913
2024-02-16 09:45:12 -05:00
Nejc Habjan
b4fa847a90 community[minor]: add exclude parameter to DirectoryLoader (#17316)
- **Description:** adds an `exclude` parameter to the DirectoryLoader
class, based on similar behavior in GenericLoader
- **Issue:** discussed in
https://github.com/langchain-ai/langchain/discussions/9059 and I think
in some other issues that I cannot find at the moment 🙇
  - **Dependencies:** None
  - **Twitter handle:** don't have one sorry! Just https://github/nejch
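A minimal sketch of the new parameter; the directory and glob patterns are placeholders:

```python
from langchain_community.document_loaders import DirectoryLoader

loader = DirectoryLoader(
    "docs/",
    glob="**/*.md",
    # Skip files matching these patterns, mirroring GenericLoader behavior.
    exclude=["**/CHANGELOG.md", "**/node_modules/**"],
)
docs = loader.load()
```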

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-16 09:42:42 -05:00
Bagatur
8f14234afb infra: ignore flakey lua test (#17618) 2024-02-16 05:02:58 -07:00
Krista Pratico
bf8e3c6dd1 community[patch]: add fixes for AzureSearch after update to stable azure-search-documents library (#17599)
- **Description:** Addresses the bugs described in linked issue where an
import was erroneously removed and the rename of a keyword argument was
missed when migrating from beta --> stable of the azure-search-documents
package
- **Issue:** https://github.com/langchain-ai/langchain/issues/17598
- **Dependencies:** N/A
- **Twitter handle:** N/A
2024-02-15 22:23:52 -08:00
William FH
64743dea14 core[patch], community[patch], langchain[patch], experimental[patch], robocorp[patch]: bump LangSmith 0.1.* (#17567) 2024-02-15 23:17:59 -07:00
morgana
9d7ca7df6e community[patch]: update copy of metadata in rockset vectorstore integration (#17612)
- **Description:** This fixes an issue with working with RecordManager.
RecordManager was generating new hashes on documents because `add_texts`
was modifying the metadata directly. Additionally moved some tests to
unit tests since that was a more appropriate home.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** `@_morgan_adams_`
2024-02-15 23:13:40 -07:00
Erick Friis
c8d96f30bd exa[patch]: fix lint (#17610) 2024-02-15 20:45:16 -08:00
Erick Friis
8f5c70769d astradb[patch]: fix core dep 3 (#17617) 2024-02-15 20:42:30 -08:00
Kartheek Yakkala
44db4412c0 ci[minor] : Added graphdb in docker compose for integration tests (#17510)
This PR adds graphdb to the docker compose so it can be used in integration tests.

Co-authored-by: KARTHEEK YAKKALA <kartheekyakkala.se@gmail.com>
2024-02-15 23:03:22 -05:00
Leonid Ganeline
0835ebad70 docs: Fix bug that caused the word "Deprecated" to appear twice in doc-strings (#17615)
The current issue:
Most of the deprecation descriptions are duplicated. For example:
`[Deprecated] Chat Agent.[Deprecated] Chat Agent.` for the [ChatAgent
class](https://api.python.langchain.com/en/latest/langchain_api_reference.html#classes)
description.

NOTE: I've tested it only with a new unit test! I cannot build the API Reference locally :(
2024-02-15 22:52:26 -05:00
Kevin
88af4fd514 docs: quickstart example returns 404 (#17609)
**Description:** 
It appears a legacy URL in the quickstart returns a 404. Updated it to use the LangChain homepage and ran through the tutorial to confirm results.
2024-02-15 16:50:41 -08:00
Erick Friis
aa31025dd7 astradb[patch]: fix core dep 2 (#17608) 2024-02-15 16:33:02 -08:00
Erick Friis
cc562e7c58 astradb[patch]: fix core dep (#17606) 2024-02-15 16:09:38 -08:00
Stefano Lottini
5240ecab99 astradb: bootstrapping Astra DB as Partner Package (#16875)
**Description:** This PR introduces a new "Astra DB" Partner Package.

So far only the vector store class is _duplicated_ there, all others
following once this is validated and established.

Along with the move to separate package, incidentally, the class name
will change `AstraDB` => `AstraDBVectorStore`.

The strategy has been to duplicate the module (with prospective removal from community at LangChain 0.2). Until then, the code will be kept in sync with minimal, known differences (there is a makefile target to automate drift control; out of convenience with this check, the community package has a class `AstraDBVectorStore` aliased to `AstraDB` at the end of the module).

With this PR several bugfixes and improvement come to the vector store,
as well as a reshuffling of the doc pages/notebooks (Astra and
Cassandra) to align with the move to a separate package.

**Dependencies:** A brand new pyproject.toml in the new package, no
changes otherwise.

**Twitter handle:** `@rsprrs`
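A short sketch of the new partner-package import; the endpoint, token, and embedding choice are placeholders:

```python
from langchain_astradb import AstraDBVectorStore
from langchain_openai import OpenAIEmbeddings

# Note the class rename: community `AstraDB` -> partner `AstraDBVectorStore`.
vstore = AstraDBVectorStore(
    embedding=OpenAIEmbeddings(),
    collection_name="my_collection",
    api_endpoint="https://<db-id>-<region>.apps.astra.datastax.com",
    token="AstraCS:...",
)
```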

---------

Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-15 15:50:59 -08:00
Erick Friis
f6f0ca1bae docs: ai21 sidebars (#17600) 2024-02-15 14:43:48 -08:00
Erick Friis
6cc6faa00e ai21: init package (#17592)
Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: etang <etang@ai21.com>
Co-authored-by: asafgardin <147075902+asafgardin@users.noreply.github.com>
2024-02-15 12:25:05 -08:00
Moshe Berchansky
20a56fe0a2 community[minor]: Add QuantizedEmbedders (#17391)
**Description:** 
* adding Quantized embedders using optimum-intel and
intel-extension-for-pytorch.
* added mdx documentation and example notebooks 
* added embedding import testing.

**Dependencies:** 
optimum = {extras = ["neural-compressor"], version = "^1.14.0", optional
= true}
intel_extension_for_pytorch = {version = "^2.2.0", optional = true}

Dependencies have been added to pyproject.toml for the community lib.  

**Twitter handle:** @peter_izsak

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-15 11:01:24 -08:00
Amir Karbasi
bccc9241ea community[patch]: Resolve KuzuQAChain API Changes (#16885)
- **Description:** Updates to the Kuzu API had broken this
functionality. These updates resolve those issues and add a new test to
demonstrate the updates.
- **Issue:** #11874
- **Dependencies:** No new dependencies
- **Twitter handle:** @amirk08


Test results:
```
tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_query_no_params PASSED                                   [ 33%]
tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_query_params PASSED                                      [ 66%]
tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_refresh_schema PASSED                                    [100%]

=================================================== slowest 5 durations =================================================== 
0.53s call     tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_refresh_schema
0.34s call     tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_query_no_params
0.28s call     tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_query_params
0.03s teardown tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_refresh_schema
0.02s teardown tests/integration_tests/graphs/test_kuzu.py::TestKuzu::test_query_params
==================================================== 3 passed in 1.27s ==================================================== 
```
2024-02-15 10:18:37 -08:00
Rafail Giavrimis
a84a3add25 Community[patch]: Adjusted import to be compatible with SQLAlchemy<2 (#17520)
- **Description:** Adjusts an import to directly import `Result` from
`sqlalchemy.engine`.
- **Issue:** #17519 
- **Dependencies:** N/A
- **Twitter handle:** @grafail
2024-02-15 11:12:13 -05:00
Zachary Toliver
6746adf363 community[patch]: pass bool value for fetch_schema_from_transport in GraphQLAPIWrapper (#17552)
- **Description:** Allow a bool value to be passed to
fetch_schema_from_transport since not all GraphQL instances support this
feature, such as TigerGraph.
- **Threads:** @zacharytoliver
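A hedged usage sketch (the endpoint URL and query are placeholders):

```python
from langchain_community.utilities.graphql import GraphQLAPIWrapper

# TigerGraph-style servers that can't serve their schema need this False.
wrapper = GraphQLAPIWrapper(
    graphql_endpoint="https://your-graphql-endpoint/graphql",
    fetch_schema_from_transport=False,
)
result = wrapper.run("query { hello }")
```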

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-15 09:54:04 -05:00
Christophe Bornet
789cd5198d community[patch]: Use astrapy built-in pagination prefetch in AstraDBLoader (#17569) 2024-02-15 09:52:56 -05:00
Christophe Bornet
387cacb881 community[minor]: Add async methods to AstraDBChatMessageHistory (#17572) 2024-02-15 09:48:42 -05:00
Christophe Bornet
ff1f985a2a community: Fix some mypy types in cassandra doc loader (#17570)
Thank you!
2024-02-15 09:45:22 -05:00
Mo Latif
f3e4a0e27f langchain[patch]: Update Chain prep_inputs docstring (#17575)
**Description**: @eyurtsev Following up on #16644 to fix the docstring, because `prep_inputs` is no longer doing any validation.
2024-02-15 09:44:35 -05:00
William FH
53b8c86309 fix dataset link (#17565) 2024-02-14 23:18:07 -08:00
William FH
fc1617c44f Update contact link (#17563) 2024-02-14 22:37:32 -08:00
Eugene Yurtsev
79119b4345 Docs: Add repository structure to contributors guide (#17553)
Adding another high level overview page to the contributors guide
2024-02-14 23:20:45 -05:00
Christophe Bornet
ca2d4078f3 community: Add async methods to AstraDBCache (#17415)
Adds async methods to AstraDBCache
2024-02-14 23:10:08 -05:00
Eugene Yurtsev
e438fe6be9 Docs: Contributing changes (#17551)
A few minor changes for contribution:

1) Updating link to say "Contributing" rather than "Developer's guide"
2) Minor changes after going through the contributing documentation
page.
2024-02-14 17:55:09 -05:00
Jan Cap
7ae3ce60d2 community[patch]: Fix pwd import that is not available on windows (#17532)
- **Description:** Resolves a problem in `langchain_community\document_loaders\pebblo.py` with `import pwd`: `pwd` is not available on Windows, so the import was moved into a try/except block.
  - **Issue:** #17514
2024-02-14 13:45:10 -08:00
nvpranak
91bcc9c5c9 community[minor]: Nemo embeddings(#16206)
This PR adds support for NVIDIA NeMo embeddings (issue #16095).

---------

Co-authored-by: Praveen Nakshatrala <pnakshatrala@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-14 13:25:42 -08:00
Mattt394
7c6009b76f experimental[patch]: Fixed typos in SmartLLMChain ideation and critique prompts (#11507)
Noticed and fixed a few typos in the SmartLLMChain default ideation and
critique prompts

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-14 13:20:10 -08:00
Erick Friis
86d3e42853 core[minor]: add name to basemessage (#17539)
Adds an optional name param to our base message to support passing names
into LLMs.

OpenAI supports having a name on anything except tool message now
(system, ai, user/human).
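A minimal sketch of the new optional field:

```python
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage

# name is optional and flows through to providers that support it.
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Hi there!", name="alice"),
    AIMessage(content="Hello, Alice!", name="assistant"),
]
```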
2024-02-14 12:21:59 -08:00
Mateusz Szewczyk
916332ef5b ibm: added partners package langchain_ibm, added llm (#16512)
- **Description:** Added `langchain_ibm` as an langchain partners
package of IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM
provider (`WatsonxLLM`)
- **Dependencies:**
[ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),
---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-14 12:12:19 -08:00
Shawn
f6d3a3546f community[patch]: document_loaders: modified athena key logic to handle s3 uris without a prefix (#17526)
https://github.com/langchain-ai/langchain/issues/17525

### Example Code

```python
from langchain_community.document_loaders.athena import AthenaLoader

database_name = "database"
s3_output_path = "s3://bucket-no-prefix"
query="""SELECT 
  CAST(extract(hour FROM current_timestamp) AS INTEGER) AS current_hour,
  CAST(extract(minute FROM current_timestamp) AS INTEGER) AS current_minute,
  CAST(extract(second FROM current_timestamp) AS INTEGER) AS current_second;
"""
profile_name = "AdministratorAccess"

loader = AthenaLoader(
    query=query,
    database=database_name,
    s3_output_uri=s3_output_path,
    profile_name=profile_name,
)

documents = loader.load()
print(documents)
```



### Error Message and Stack Trace (if applicable)

NoSuchKey: An error occurred (NoSuchKey) when calling the GetObject
operation: The specified key does not exist

### Description

The Athena loader errors when the result S3 bucket URI has no prefix: the loader call fails with a "NoSuchKey: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist." error.

If s3_output_path contains a prefix like:

```python
s3_output_path = "s3://bucket-with-prefix/prefix"
```

Execution works without an error.

## Suggested solution

Modify:

```python
key = "/".join(tokens[1:]) + "/" + query_execution_id + ".csv"
```

to

```python
key = "/".join(tokens[1:]) + ("/" if tokens[1:] else "") + query_execution_id + ".csv"
```


9e8a3fc4ff/libs/community/langchain_community/document_loaders/athena.py (L128)


### System Info


System Information
------------------
> OS:  Darwin
> OS Version: Darwin Kernel Version 22.6.0: Fri Sep 15 13:41:30 PDT
2023; root:xnu-8796.141.3.700.8~1/RELEASE_ARM64_T8103
> Python Version:  3.9.9 (main, Jan  9 2023, 11:42:03) 
[Clang 14.0.0 (clang-1400.0.29.102)]

Package Information
-------------------
> langchain_core: 0.1.23
> langchain: 0.1.7
> langchain_community: 0.0.20
> langsmith: 0.0.87
> langchain_openai: 0.0.6
> langchainhub: 0.1.14

Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:

> langgraph
> langserve

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-14 11:48:31 -08:00
wulixuan
c776cfc599 community[minor]: integrate with model Yuan2.0 (#15411)
1. integrate with
[`Yuan2.0`](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/README-EN.md)
2. update `langchain.llms`
3. add a new doc for [Yuan2.0
integration](docs/docs/integrations/llms/yuan2.ipynb)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-14 11:46:20 -08:00
Philippe PRADOS
d07db457fc community[patch]: Fix SQLAlchemyMd5Cache race condition (#16279)
If the SQLAlchemyMd5Cache is shared among multiple processes, it is
possible to encounter a race condition during the cache update.

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-14 11:45:28 -08:00
Alex Peplowski
70c296ae96 community[patch]: Expose Anthropic Retry Logic (#17069)
**Description:**

Expose Anthropic's retry logic, so that `max_retries` can be configured
via langchain. Anthropic's retry logic is implemented in their Python
SDK here:
https://github.com/anthropics/anthropic-sdk-python?tab=readme-ov-file#retries
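A short usage sketch (model name is a placeholder):

```python
from langchain_community.chat_models import ChatAnthropic

# max_retries is forwarded to the Anthropic SDK's built-in retry logic.
llm = ChatAnthropic(model="claude-2", max_retries=5)
```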

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-14 11:44:28 -08:00
DanisJiang
de9a6cdf16 experimental[patch]: Enhance protection against arbitrary code execution in PALChain (#17091)
- **Description:** Block some ways to trigger arbitrary code execution
bug in PALChain.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-14 11:44:07 -08:00
Lyndsey
8562a1e7d4 community[patch]: support query filters for NotionDBLoader (#17217)
- **Description:** Support filtering databases in the use case where
devs do not want to query ALL entries within a DB,
- **Issue:** N/A,
- **Dependencies:** N/A,
- **Twitter handle:** I don't have Twitter but feel free to tag my
Github!
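A hedged sketch; `filter_object` is an assumed parameter name for the new query filter, and the token/database id are placeholders:

```python
from langchain_community.document_loaders import NotionDBLoader

loader = NotionDBLoader(
    integration_token="secret_...",
    database_id="...",
    # Assumed keyword; a Notion-style filter restricting which rows load.
    filter_object={"property": "Status", "select": {"equals": "Published"}},
)
docs = loader.load()
```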

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-14 11:43:41 -08:00
volodymyr-memsql
e36bc379f2 community[patch]: Add vector index support to SingleStoreDB VectorStore (#17308)
This pull request introduces support for various Approximate Nearest
Neighbor (ANN) vector index algorithms in the VectorStore class,
starting from version 8.5 of SingleStore DB. Leveraging this enhancement
enables users to harness the power of vector indexing, significantly
boosting search speed, particularly when handling large sets of vectors.

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-14 11:43:12 -08:00
Kate Silverstein
0bc4a9b3fc community[minor]: Adds Llamafile as an LLM (#17431)
* **Description:** Adds a simple LLM implementation for interacting with
[llamafile](https://github.com/Mozilla-Ocho/llamafile)-based models.
* **Dependencies:** N/A
* **Issue:** N/A

**Detail**
[llamafile](https://github.com/Mozilla-Ocho/llamafile) lets you run LLMs
locally from a single file on most computers without installing any
dependencies.

To use the llamafile LLM implementation, the user needs to:

1. Download a llamafile e.g.
https://huggingface.co/jartine/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.llamafile?download=true
2. Make the file executable.
3. Run the llamafile in 'server mode'. (All llamafiles come packaged
with a lightweight server; by default, the server listens at
`http://localhost:8080`.)


```bash
wget https://url/of/model.llamafile
chmod +x model.llamafile
./model.llamafile --server --nobrowser
```

Now, the user can invoke the LLM via the LangChain client:

```python
from langchain_community.llms.llamafile import Llamafile

llm = Llamafile()

llm.invoke("Tell me a joke.")
```
2024-02-14 11:15:24 -08:00
Rakib Hosen
5ce1827d31 community[patch]: fix import in language parser (#17538)
- **Description:** Resolves an import error in language_parser.py during `from langchain.langchain.text_splitter import Language`
- **Issue:** #17536
- **Dependencies:** NO
- **Twitter handle:** @iRakibHosen

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-14 11:11:23 -08:00
Raunak
685d62b032 community[patch]: Added functions in NetworkxEntityGraph class (#17535)
- **Description:** 
1. Added _clear_edges()_ and _get_number_of_nodes()_ functions in the NetworkxEntityGraph class.
2. Added the above two functions to the graph_networkx_qa.ipynb documentation.
2024-02-14 11:02:24 -08:00
Erick Friis
bfaa8c3048 anthropic[patch]: de-beta anthropic messages, release 0.0.2 (#17540) 2024-02-14 10:31:45 -08:00
Erick Friis
a99c667c22 partners: version constraints (#17492)
Core should be ^0.1 by default

Careful about 0.x.y and 0.0.z packages
2024-02-14 08:57:46 -08:00
Erick Friis
d7418acbe1 nomic[patch]: release 0.0.2, dimensionality (#17534)
- nomic[patch]: release 0.0.2
2024-02-14 08:38:07 -08:00
Bagatur
9e8a3fc4ff infra: rm @ from pr template (#17507) 2024-02-13 21:29:22 -08:00
shibuiwilliam
c502736841 infra: add test for ensemble retriever to ensure multiple retrievers (#8401)
Add tests to the ensemble retriever to ensure it works with a combination
of multiple retrievers

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-13 21:22:03 -08:00
Qihui Xie
5738143d4b add mongodb_store (#13801)
# Add MongoDB storage
  - **Description:** 
  Add MongoDB Storage as an option for large doc store. 

Example usage: 
```Python
# Instantiate the MongodbStore with a MongoDB connection
from langchain.storage import MongodbStore
from langchain_core.documents import Document

mongo_conn_str = "mongodb://localhost:27017/"
mongodb_store = MongodbStore(mongo_conn_str, db_name="test-db",
                             collection_name="test-collection")

# Set values for keys
doc1 = Document(page_content='test1')
doc2 = Document(page_content='test2')
mongodb_store.mset([("key1", doc1), ("key2", doc2)])

# Get values for keys
values = mongodb_store.mget(["key1", "key2"])
# [doc1, doc2]

# Iterate over keys
for key in mongodb_store.yield_keys():
    print(key)

# Delete keys
mongodb_store.mdelete(["key1", "key2"])
```

  - **Dependencies:**
  Use `mongomock` for integration test.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-13 22:33:22 -05:00
Mo Latif
50b48a8e6a langchain[patch]: Invoke chain prep_inputs and prep_outputs inside try block to catch validation errors (#16644)
- **Description:** Callback manager can't catch chain input or output
validation errors because `prepare_input` and `prepare_output` are not
part of the try/raise logic, this PR fixes that logic.
 
  - **Issue:** #15954
2024-02-13 22:23:11 -05:00
Christophe Bornet
a8f530bc4d Add async methods to CacheBackedEmbeddings (#16873)
Adds async methods to CacheBackedEmbeddings
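
A minimal sketch of the async usage, assuming the new methods mirror the sync API as `aembed_documents`/`aembed_query`:

```python
import asyncio

from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain_community.embeddings import FakeEmbeddings  # stand-in model

store = LocalFileStore("./embedding-cache/")
cached = CacheBackedEmbeddings.from_bytes_store(
    FakeEmbeddings(size=16), store, namespace="fake"
)

# Assumed async variant added by this PR.
vectors = asyncio.run(cached.aembed_documents(["hello", "world"]))
```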
2024-02-13 22:16:27 -05:00
Bagatur
dd68a8716e infra: update rtd yaml (#17502) 2024-02-13 18:16:44 -08:00
Bagatur
1aeb52caac infra: merge in master during api docs build (#17494) 2024-02-13 18:08:07 -08:00
Bagatur
54373fb384 infra: add api docs build GHA (#17493) 2024-02-13 16:46:58 -08:00
Bagatur
50de7a31f0 langchain[patch]: structured output chain nits (#17291) 2024-02-13 16:45:29 -08:00
Nat Noordanus
8a3b74fe1f community[patch]: Fix pydantic ForwardRef error in BedrockBase (#17416)
- **Description:** Fixes a type annotation issue in the definition of
BedrockBase. This issue was that the annotation for the `config`
attribute includes a ForwardRef to `botocore.client.Config` which is
only imported when `TYPE_CHECKING`. This can cause pydantic to raise an
error like `pydantic.errors.ConfigError: field "config" not yet prepared
so type is still a ForwardRef, ...`.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** `@__nat_n__`
2024-02-13 16:15:55 -08:00
Bagatur
2c076bebc9 docs: fix self query redirect (#17490) 2024-02-13 15:44:56 -08:00
Ashley Xu
f746a73e26 Add the BQ job usage tracking from LangChain (#17123)
- **Description:**
Add the BQ job usage tracking from LangChain

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-13 14:47:57 -08:00
Bagatur
5dca107621 docs: update providers (#17488) 2024-02-13 14:00:15 -08:00
JongRok BAEK
8d6cc90fc5 langchain.core : Use shallow copy for schema manipulation in JsonOutputParser.get_format_instructions (#17162)
- **Description:**

Fix: Use shallow copy for schema manipulation in get_format_instructions

Prevents side effects on the original schema object by using a
dictionary comprehension for a safer and more controlled manipulation of
schema key-value pairs, enhancing code reliability.

  - **Issue:**  #17161 
  - **Dependencies:** None
  -  **Twitter handle:** None
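
A quick illustration of the call in question; after the fix, generating format instructions no longer mutates the schema of the supplied pydantic object (example is illustrative, not taken from the PR):

```python
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="the setup of the joke")
    punchline: str = Field(description="the punchline")

parser = JsonOutputParser(pydantic_object=Joke)
instructions = parser.get_format_instructions()
# Joke's schema is left untouched; previously it could be modified in place.
```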
2024-02-13 13:30:53 -08:00
Rave Harpaz
90f55e6bd1 Documentation/add update documentation for oci (#17473)

- **PR title**: docs: add & update docs for Oracle Cloud Infrastructure
(OCI) integrations

- **Description**: adding and updating documentation for two
integrations - OCI Generative AI & OCI Data Science
(1) adding integration page for OCI Generative AI embeddings (@baskaryan
request,
         docs/docs/integrations/text_embedding/oci_generative_ai.ipynb)
(2) updating integration page for OCI Generative AI llms
(docs/docs/integrations/llms/oci_generative_ai.ipynb)
(3) adding platform documentation for OCI (@baskaryan request,
docs/docs/integrations/platforms/oci.mdx). this combines the
          integrations of OCI Generative AI & OCI Data Science
(4) if possible, requesting to be added to 'Featured Community
Providers' so supplying a modified
docs/docs/integrations/platforms/index.mdx to reflect the addition
- **Issue:** none

 - **Dependencies:** no new dependencies 

 - **Twitter handle:**

---------

Co-authored-by: MING KANG <ming.kang@oracle.com>
2024-02-13 13:26:23 -08:00
Bagatur
b5d3416563 experimental[patch]: Release 0.0.51 (#17484) 2024-02-13 13:14:38 -08:00
Bagatur
de7c4b277c langchain[patch]: Release 0.1.7 (#17482) 2024-02-13 13:13:04 -08:00
Bagatur
39342d98d6 community[patch]: Release 0.0.20 (#17480) 2024-02-13 13:01:51 -08:00
Bagatur
89b765ec27 core[patch]: Release 0.1.23 (#17479) 2024-02-13 12:55:45 -08:00
Max Jakob
ab3d944667 community[patch]: ElasticsearchStore: preserve user headers (#16830)
Users can provide an Elasticsearch connection with custom headers. This
PR makes sure these headers are preserved when adding the langchain user
agent header.
2024-02-13 12:37:35 -08:00
Erick Friis
112e10e933 infra: azure release integration testing secrets (#17476) 2024-02-13 12:17:06 -08:00
Erick Friis
9eb1b56e73 pinecone[patch]: release 0.0.2 (#17477) 2024-02-13 12:01:45 -08:00
Erick Friis
37678471c4 openai[patch]: relax tiktoken constraint, release 0.0.6 (#17472) 2024-02-13 11:25:55 -08:00
Wendy H. Chun
2df7387c91 langchain[patch]: Fix to avoid infinite loop during collapse chain in map reduce (#16253)
- **Description:** Depending on `token_max` used in
`load_summarize_chain`, it could cause an infinite loop when documents
cannot collapse under `token_max`. This change would not affect the
existing feature, but it also gives an option to users to avoid the
situation.
  - **Issue:** https://github.com/langchain-ai/langchain/issues/16251
  - **Dependencies:** None
  - **Twitter handle:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-13 10:55:32 -08:00
wulixuan
5d06797905 community[minor]: integrate chat models with Yuan2.0 (#16575)
1. integrate chat models with
[`Yuan2.0`](https://github.com/IEIT-Yuan/Yuan-2.0/blob/main/README-EN.md)
2. add a new doc for [Yuan2.0
integration](docs/docs/integrations/llms/yuan2.ipynb)
 
Yuan2.0 is a new generation Fundamental Large Language Model developed
by IEIT System. We have published all three models, Yuan 2.0-102B, Yuan
2.0-51B, and Yuan 2.0-2B.
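
A hedged sketch of invoking the new chat model (class and parameter names are assumptions inferred from the integration's naming conventions):

```python
from langchain_community.chat_models import ChatYuan2
from langchain_core.messages import HumanMessage

# Assumes a locally served, OpenAI-compatible Yuan2.0 endpoint.
chat = ChatYuan2(yuan2_api_base="http://127.0.0.1:8001/v1")
print(chat.invoke([HumanMessage(content="Hello!")]))
```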

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-13 10:55:14 -08:00
Taha Khabouss
15baffc484 langchain[patch]: Ensure that the Elasticsearch Query Translator functions accurately w… (#17044)
Description:
Addresses a problem where the Date type within an Elasticsearch
SelfQueryRetriever would encounter difficulties in generating a valid
query.

Issue: #17042

---------

Co-authored-by: Max Jakob <max.jakob@elastic.co>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-13 10:54:24 -08:00
Erick Friis
e5c76f9dbd pinecone[patch]: poetry update (#17471) 2024-02-13 10:32:29 -08:00
Erick Friis
10bdf2422c pinecone[patch]: release 0.0.2rc0, remove simsimd dep (#17469) 2024-02-13 10:02:16 -08:00
Erick Friis
065cde69b1 google-genai[patch]: release 0.0.9, safety settings docs (#17432) 2024-02-13 10:01:25 -08:00
Sergey Kozlov
db6f266d97 core: improve None value processing in merge_dicts() (#17462)
- **Description:** fix `None` and `0` merging in `merge_dicts()`, add
tests.
```python
from langchain_core.utils._merge import merge_dicts
assert merge_dicts({"a": None}, {"a": 0}) == {"a": 0}
```

---------

Co-authored-by: Sergey Kozlov <sergey.kozlov@ludditelabs.io>
2024-02-13 08:48:02 -08:00
Ian Gregory
e5472b5eb8 Framework for supporting more languages in LanguageParser (#13318)
## Description

I am submitting this for a school project as part of a team of 5. Other
team members are @LeilaChr, @maazh10, @Megabear137, @jelalalamy. This PR
also has contributions from community members @Harrolee and @Mario928.

Initial context is in the issue we opened (#11229).

This pull request adds:

- Generic framework for expanding the languages that `LanguageParser`
can handle, using the
[tree-sitter](https://github.com/tree-sitter/py-tree-sitter#py-tree-sitter)
parsing library and existing language-specific parsers written for it
- Support for the following additional languages in `LanguageParser`:
  - C
  - C++
  - C#
  - Go
- Java (contributed by @Mario928
https://github.com/ThatsJustCheesy/langchain/pull/2)
  - Kotlin
  - Lua
  - Perl
  - Ruby
  - Rust
  - Scala
- TypeScript (contributed by @Harrolee
https://github.com/ThatsJustCheesy/langchain/pull/1)

Here is the [design
document](https://docs.google.com/document/d/17dB14cKCWAaiTeSeBtxHpoVPGKrsPye8W0o_WClz2kk)
if curious, but no need to read it.
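
As a quick illustration, loading one of the newly supported languages might look like the following (a sketch assuming the usual `GenericLoader`/`LanguageParser` pairing; requires the optional `tree_sitter` and `tree_sitter_languages` packages):

```python
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import LanguageParser

# Segment Go sources into function/class-level Documents via tree-sitter.
loader = GenericLoader.from_filesystem(
    "./src",
    glob="**/*",
    suffixes=[".go"],
    parser=LanguageParser(language="go"),
)
docs = loader.load()
```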

## Issues

- Closes #11229
- Closes #10996
- Closes #8405

## Dependencies

`tree_sitter` and `tree_sitter_languages` on PyPI. We have tried to add
these as optional dependencies.

## Documentation

We have updated the list of supported languages, and also added a
section to `source_code.ipynb` detailing how to add support for
additional languages using our framework.

## Maintainer

- @hwchase17 (previously reviewed
https://github.com/langchain-ai/langchain/pull/6486)

Thanks!!

## Git commits

We will gladly squash any/all of our commits (esp merge commits) if
necessary. Let us know if this is desirable, or if you will be
squash-merging anyway.

---------

Co-authored-by: Maaz Hashmi <mhashmi373@gmail.com>
Co-authored-by: LeilaChr <87657694+LeilaChr@users.noreply.github.com>
Co-authored-by: Jeremy La <jeremylai511@gmail.com>
Co-authored-by: Megabear137 <zubair.alnoor27@gmail.com>
Co-authored-by: Lee Harrold <lhharrold@sep.com>
Co-authored-by: Mario928 <88029051+Mario928@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-13 08:45:49 -08:00
merlin-quix
729c6d6827 docs: add use case for managing chat messages via Apache Kafka (#16771)
Adding a new notebook that demonstrates how to use LangChain's standard
chat features while passing the chat messages back and forth via Apache
Kafka.

The goal is to simulate an architecture where the chat front end and
the LLM are running as separate services that need to communicate with
one another over an internal network.

It's an alternative to the typical pattern of requesting a response from the
model via a REST API (there's more info on why you would want to do this
at the end of the notebook).

NOTE: Assuming "uses cases" is the right place for this but feel free to
propose another location.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-13 08:09:15 -08:00
Bagatur
3925071dd6 langchain[patch], templates[patch]: fix multi query retriever, web research retriever (#17434)

Fixes #17352
2024-02-12 22:52:07 -08:00
Bagatur
c0ce93236a experimental[patch]: fix zero-shot pandas agent (#17442) 2024-02-12 21:58:35 -08:00
Abhishek Jain
37e1275f9e community[patch]: Fixed the 'aembed' method of 'CohereEmbeddings'. (#16497)
**Description:**
- The existing code was trying to find a `.embeddings` property on the
`Coroutine` returned by calling `cohere.async_client.embed`.
- Instead, the `.embeddings` property is present on the value returned
by awaiting the `Coroutine`.
- Also, it seems that the original cohere client expects a value of
`max_retries` to not be `None`. Hence, setting the default value of
`max_retries` to `3`.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-12 21:57:27 -08:00
Sridhar Ramaswamy
9f1cbbc6ed community[minor]: Add pebblo safe document loader (#16862)
- **Description:** Pebblo opensource project enables developers to
safely load data to their Gen AI apps. It identifies semantic topics and
entities found in the loaded data and summarizes them in a
developer-friendly report.
  - **Dependencies:** none
  - **Twitter handle:** srics

@hwchase17
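
A hedged sketch of wrapping an existing loader (constructor argument names are assumptions based on the description above):

```python
from langchain_community.document_loaders import CSVLoader, PebbloSafeLoader

loader = PebbloSafeLoader(
    CSVLoader("data.csv"),
    name="demo-app",             # assumed: app name used in the report
    owner="data-team",           # assumed: owner metadata
    description="Support data",  # assumed: free-text description
)
docs = loader.load()
```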
2024-02-12 21:56:12 -08:00
Preetam D'Souza
0834457f28 docs: Fix broken link in summarization use-case (#16554)
- **Description:** Fix broken link to `StuffDocumentsChain`
- **Issue:** N/A
- **Dependencies:** None
- **Twitter handle:**
[@preetamdsouza](https://twitter.com/preetamdsouza)
2024-02-12 21:40:57 -08:00
Sheil Naik
d70a5bbf15 docs: Fix broken link in LLMs index.mdx (#16557)
- **Description:** The
[LLMs](https://python.langchain.com/docs/modules/model_io/llms/) page
has a broken link. This fixes the link.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @sheilnaik
2024-02-12 21:39:56 -08:00
mhavey
1bbb64d956 community[minor], langchain[minor]: Add Neptune Rdf graph and chain (#16650)
**Description**: This PR adds a chain for Amazon Neptune graph database
RDF format. It complements the existing Neptune Cypher chain. The PR
also includes a Neptune RDF graph class to connect to, introspect, and
query a Neptune RDF graph database from the chain. A sample notebook is
provided under docs that demonstrates the overall effect: invoking the
chain to make natural language queries against Neptune using an LLM.

**Issue**: This is a new feature
 
**Dependencies**: The RDF graph class depends on the AWS boto3 library
if using IAM authentication to connect to the Neptune database.
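
A heavily hedged sketch of how the pieces might fit together (import paths and constructor arguments are assumptions; the sample notebook is the authoritative reference):

```python
from langchain_community.graphs import NeptuneRdfGraph
from langchain.chains.graph_qa.neptune_sparql import NeptuneSparqlQAChain
from langchain_openai import ChatOpenAI

graph = NeptuneRdfGraph(
    host="my-cluster.cluster-abc.us-east-1.neptune.amazonaws.com",  # placeholder
    port=8182,
)
chain = NeptuneSparqlQAChain.from_llm(llm=ChatOpenAI(model="gpt-4"), graph=graph)
chain.invoke({"query": "How many classes does the ontology define?"})
```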

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-12 21:30:20 -08:00
Michael Feil
e1cfd0f3e7 community[patch]: infinity embeddings update incorrect default url (#16759)
The default url has always been incorrect (7797 instead of 7997). Here is
an update to the correct url.
2024-02-12 20:05:08 -08:00
Massimiliano Pronesti
df7cbd6fbb community[minor]: add FlashRank ranker (#16785)
**Description:** This PR adds support for
[flashrank](https://github.com/PrithivirajDamodaran/FlashRank) for
reranking as alternative to Cohere.

I'm not sure `libs/langchain` is the right place for this change. At
first, I wanted to put it under `libs/community`. All the compressors
were under `libs/langchain/retrievers/document_compressors` though. Hope
this makes sense!
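
A minimal sketch of standalone usage (the `FlashrankRerank` class name is an assumption following the existing compressor naming pattern):

```python
from langchain_core.documents import Document
from langchain.retrievers.document_compressors import FlashrankRerank

reranker = FlashrankRerank()  # downloads a small cross-encoder on first use
docs = [
    Document(page_content="LangChain composes LLM applications."),
    Document(page_content="FlashRank reranks passages without heavy dependencies."),
]
ranked = reranker.compress_documents(docs, query="what reranks passages?")
```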
2024-02-12 20:00:52 -08:00
Andreas Motl
1fdd9bd980 community/SQLDatabase: Generalize and trim software tests (#16659)
- **Description:** Improve test cases for `SQLDatabase` adapter
component, see
[suggestion](https://github.com/langchain-ai/langchain/pull/16655#pullrequestreview-1846749474).
  - **Depends on:** GH-16655
  - **Addressed to:** @baskaryan, @cbornet, @eyurtsev

_Remark: This PR is stacked upon GH-16655, so that one will need to go
in first._

Edit: Thank you for bringing in GH-17191, @eyurtsev. This is a little
aftermath, improving/streamlining the corresponding test cases.
2024-02-12 22:58:34 -05:00
Theo / Taeyoon Kang
1987f905ed core[patch]: Support .yml extension for YAML (#16783)
- **Description:**

[AS-IS] When dealing with a YAML file, only the .yaml extension is accepted.

[TO-BE] .yaml remains the primary extension, but the equally valid .yml
extension should be accepted as well. Rejecting it is akin to supporting
JPEG images while erroring on the .jpg extension.

  - **Issue:** -

  - **Dependencies:**
no dependencies required for this change,
2024-02-12 19:57:20 -08:00
Kapil Sachdeva
cd00a87db7 community[patch] - in FAISS vector store, support passing custom DocStore implementation when using from_xxx methods (#16801)
- **Description:** The from_xxx methods of the FAISS class have a hardcoded
InMemoryDocstore implementation and thereby do not let users pass a custom
DocStore implementation (see the sketch below),
  - **Issue:** no referenced issue,
  - **Dependencies:** none,
  - **Twitter handle:** ksachdeva
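
A sketch of the unblocked pattern, assuming `from_texts` now forwards a `docstore` kwarg instead of constructing the docstore itself:

```python
from langchain_community.docstore.in_memory import InMemoryDocstore
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS

class LoggingDocstore(InMemoryDocstore):
    """Toy custom docstore that logs every addition."""

    def add(self, texts):
        print(f"adding {len(texts)} documents")
        return super().add(texts)

vs = FAISS.from_texts(["hello"], FakeEmbeddings(size=8), docstore=LoggingDocstore())
```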
2024-02-12 19:51:55 -08:00
Chris
f9f5626ca4 community[patch]: Fix github search issues and PRs PaginatedList has no len() error (#16806)
**Description:** 
Bugfix: Langchain_community's GitHub Api wrapper throws a TypeError when
searching for issues and/or PRs (the `search_issues_and_prs` method).
This is because PyGithub's PaginatedList type does not support the
len() method. See https://github.com/PyGithub/PyGithub/issues/1476

![image](https://github.com/langchain-ai/langchain/assets/8849021/57390b11-ed41-4f48-ba50-f3028610789c)
  **Dependencies:** None 
  **Twitter handle**: @ChrisKeoghNZ
  
I haven't registered an issue as it would take me longer to fill the
template out than to make the fix, but I'm happy to if that's deemed
essential.

I've added a simple integration test to cover this as there were no
existing unit tests and it was going to be tricky to set them up.

Co-authored-by: Chris Keogh <chris.keogh@xero.com>
2024-02-12 19:50:59 -08:00
morgana
722aae4fd1 community: add delete method to rocksetdb vectorstore to support recordmanager (#17030)
- **Description:** This adds a delete method so that rocksetdb can be
used with `RecordManager`.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** `@_morgan_adams_`

---------

Co-authored-by: Rockset API Bot <admin@rockset.io>
2024-02-12 19:50:20 -08:00
yin1991
c454dc36fc community[proxy]: Enhancement/add proxy support playwrighturlloader 16751 (#16822)
- **Description:** Add proxy support to PlaywrightURLLoader (#16751)
- **Issue:** [Enhancement: Add Proxy Support to PlaywrightURLLoader
Class](https://github.com/langchain-ai/langchain/issues/16751)
  - **Dependencies:** 
  - **Twitter handle:** @ootR77013489
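
A hedged sketch, assuming the loader now accepts a `proxy` dict shaped like Playwright's own proxy settings:

```python
from langchain_community.document_loaders import PlaywrightURLLoader

loader = PlaywrightURLLoader(
    urls=["https://example.com"],
    proxy={
        "server": "http://myproxy.example:3128",  # assumed keys, per Playwright
        "username": "user",
        "password": "pass",
    },
)
docs = loader.load()
```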

---------

Co-authored-by: root <root@ip-172-31-46-160.ap-southeast-1.compute.internal>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-12 19:48:29 -08:00
Bhupesh Varshney
e3b775e035 infra: make .gitignore consistent with standard python gitignore (#16828)
- The new .gitignore version is inherited from the one maintained by the
github community over at
https://github.com/github/gitignore/blob/main/Python.gitignore
- This should cover all the cases of how a langchain app can be used.
2024-02-12 19:43:41 -08:00
James Braza
64938ae6f2 infra: unit testing check_package_version (#16825)
Wrote a unit test for `check_package_version` in the core package.

Note that this is a revival of
https://github.com/langchain-ai/langchain/pull/16387 after GitHub
incident (see
https://github.com/langchain-ai/langchain/discussions/16796).
2024-02-12 19:39:58 -08:00
Max Jakob
604e117411 docs: another auth method for ElasticsearchStore (#16831)
Users can also use their own Elasticsearch client object to configure
the connection.
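
Roughly, the documented alternative looks like this (a sketch; the embedding object is a stand-in):

```python
from elasticsearch import Elasticsearch
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import ElasticsearchStore

# Bring your own client, configured with whatever auth you need.
es = Elasticsearch("https://localhost:9200", api_key="...")
store = ElasticsearchStore(
    es_connection=es,
    index_name="docs",
    embedding=FakeEmbeddings(size=16),
)
```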
2024-02-12 19:29:54 -08:00
Zeeland
4986e7227e docs: rm unnecessary imports (#16876)
- **Description:** remove unnecessary imports from the memory usage documentation
  - **Issue:** the page was missing part of its install guide
2024-02-12 19:25:54 -08:00
Lingzhen Chen
30af711c34 community[patch]: update AzureSearch class to work with azure-search-documents=11.4.0 (#15659)
- **Description:** Updates
`libs/community/langchain_community/vectorstores/azuresearch.py` to
support the stable version `azure-search-documents=11.4.0`
- **Issue:** https://github.com/langchain-ai/langchain/issues/14534,
https://github.com/langchain-ai/langchain/issues/15039,
https://github.com/langchain-ai/langchain/issues/15355
  - **Dependencies:** azure-search-documents>=11.4.0

---------

Co-authored-by: Clément Tamines <Skar0@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-12 19:23:35 -08:00
Robby
e135dc70c3 community[patch]: Invoke callback prior to yielding token (#17348)
**Description:** Invoke callback prior to yielding token in stream
method for Ollama.
**Issue:** [Callback for on_llm_new_token should be invoked before the
token is yielded by the model
#16913](https://github.com/langchain-ai/langchain/issues/16913)

Co-authored-by: Robby <h0rv@users.noreply.github.com>
2024-02-12 19:22:55 -08:00
Christophe Bornet
ab025507bc community[patch]: Add async methods to VectorStoreQATool (#16949) 2024-02-12 19:19:50 -08:00
Christophe Bornet
fb7552bfcf Add async methods to InMemoryCache (#17425)
Add async methods to InMemoryCache
2024-02-12 22:02:38 -05:00
Eugene Yurtsev
93472ee9e6 core[patch]: Replace memory stream implementation used by LogStreamCallbackHandler (#17185)
This PR replaces the memory stream implementation used by the 
LogStreamCallbackHandler.

This implementation resolves an issue in which streamed logs and
streamed events originating from sync code would arrive only after the
entire sync code would finish execution (rather than arriving in real
time as they're generated).

One example is if trying to stream tokens from an llm within a tool. If
the tool was an async tool, but the llm was invoked via stream (sync
variant) rather than astream (async variant), then the tokens would fail
to stream in real time and would all arrive bunched up after the tool
invocation completed.
2024-02-12 21:57:38 -05:00
yin1991
37ef6ac113 community[patch]: Add Pagination to GitHubIssuesLoader for Efficient GitHub Issues Retrieval (#16934)
- **Description:** Add Pagination to GitHubIssuesLoader for Efficient
GitHub Issues Retrieval
- **Issue:** [#16864](https://github.com/langchain-ai/langchain/issues/16864)
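
Usage is unchanged; pagination now happens under the hood (sketch with a placeholder token):

```python
from langchain_community.document_loaders import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="langchain-ai/langchain",
    access_token="GITHUB_PERSONAL_ACCESS_TOKEN",  # placeholder
    include_prs=False,
)
# With pagination, all matching issues load rather than only the first page.
docs = loader.load()
```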

---------

Co-authored-by: root <root@ip-172-31-46-160.ap-southeast-1.compute.internal>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-12 18:30:36 -08:00
Leonid Ganeline
b87d6f9f48 docs: Redis page update (#16906)
- Reordered sections
- Applied consistent formatting
- Fixed headers (there were 2 H1 headers; this breaks the ToC)
- Added `Settings` header and moved all related sections under it
2024-02-12 18:23:35 -08:00
Bagatur
22638e5927 community[patch]: give reranker default client val (#17289) 2024-02-12 17:21:53 -08:00
Naveenkhasyap
841e5f514e docs: Updated doc for integrations/chat/anthropic_functions #15664 (#17226)
Description: Updated the doc for integrations/chat/anthropic_functions with
the new invoke function. Changed the structure of the document to match the
required one.
Issue: https://github.com/langchain-ai/langchain/issues/15664
Dependencies: None
Twitter handle: None

---------

Co-authored-by: NaveenMaltesh <naveen@onmeta.in>
2024-02-12 17:09:38 -08:00
Robby
ece4b43a81 community[patch]: doc loaders mypy fixes (#17368)
**Description:** Fixed `type: ignore`'s for mypy for some
document_loaders.
**Issue:** [Remove "type: ignore" comments #17048
](https://github.com/langchain-ai/langchain/issues/17048)

---------

Co-authored-by: Robby <h0rv@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-12 16:51:06 -08:00
Robby
0653aa469a community[patch]: Invoke callback prior to yielding token (#17346)
**Description:** Invoke callback prior to yielding token in stream
method for watsonx.
**Issue:** [Callback for on_llm_new_token should be invoked before the
token is yielded by the model
#16913](https://github.com/langchain-ai/langchain/issues/16913)

Co-authored-by: Robby <h0rv@users.noreply.github.com>
2024-02-12 16:36:33 -08:00
Min-Seong Lee
ce9a68791b docs: fix typo in question_answering quickstart.ipynb (#17393)
- **Description:** typo in docs (facillitate -> facilitate)
  - **Issue:** Typo
  - **Dependencies:** Nope
  - **Twitter handle:** None
2024-02-12 16:33:47 -08:00
Pennlaine
e1bc623f8f docs: Updated docs for sitemap loader to use correct URL (#17395)
- **Description:** 
Updated URL for sitemap loader from
"https://langchain.readthedocs.io/sitemap.xml" to
"https://api.python.langchain.com/sitemap.xml"
  - **Issue:** Fixes #17236
2024-02-12 16:20:32 -08:00
Bagatur
bd0ad6637a infra: pr template nit (#17438) 2024-02-12 16:19:14 -08:00
Bagatur
37629516cd infra: update pr template (#17437) 2024-02-12 16:17:30 -08:00
Ikko Eltociear Ashimine
b48fa8b695 docs: fix typo in vikingdb.ipynb (#17429)
retreival -> retrieval
2024-02-12 15:51:12 -08:00
Bagatur
f7e453971d community[patch]: remove print (#17435) 2024-02-12 15:21:38 -08:00
Spencer Kelly
54fa78c887 community[patch]: fixed vector similarity filtering (#16967)
**Description:** changed filtering so that a failed filter doesn't add the
document to the results. Currently filtering is entirely broken and all
documents are returned whether or not they pass the filter.

fixes issue introduced in
https://github.com/langchain-ai/langchain/pull/16190
2024-02-12 14:52:57 -08:00
Aditya
a23c719c8b google-genai[minor]: add safety settings (#16836)
- **Description:** Expose safety_settings for Gemini integrations on
google-generativeai
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @aditya_rane

@lkuligin for review
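
A sketch of the exposed knob (enum names follow the google-generativeai SDK; treat the exact import locations as assumptions):

```python
from langchain_google_genai import (
    ChatGoogleGenerativeAI,
    HarmBlockThreshold,
    HarmCategory,
)

llm = ChatGoogleGenerativeAI(
    model="gemini-pro",
    safety_settings={
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    },
)
```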

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-12 13:44:24 -08:00
Abhijeeth Padarthi
584b647b96 community[minor]: AWS Athena Document Loader (#15625)
- **Description:** Adds the document loader for [AWS
Athena](https://aws.amazon.com/athena/), a serverless and interactive
analytics service.
  - **Dependencies:** Added boto3 as a dependency
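
A hedged sketch of the loader (constructor arguments are assumptions based on typical Athena usage, not confirmed by this log):

```python
from langchain_community.document_loaders.athena import AthenaLoader

loader = AthenaLoader(
    query="SELECT id, message FROM logs LIMIT 10",
    database="analytics",                        # assumed argument names
    s3_output_uri="s3://my-bucket/athena-out/",  # Athena requires a results bucket
    profile_name="default",
)
docs = loader.load()
```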
2024-02-12 12:53:40 -08:00
david-tempelmann
93da18b667 community[minor]: Add mmr and similarity_score_threshold retrieval to DatabricksVectorSearch (#16829)
- **Description:** This PR adds support for `search_type="mmr"` and
`search_type="similarity_score_threshold"` to retrievers using
`DatabricksVectorSearch` (see the sketch below),
  - **Issue:** 
  - **Dependencies:**
  - **Twitter handle:**
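
The retriever side is the standard VectorStore API; a sketch assuming an existing store instance:

```python
# `dvs` is an existing DatabricksVectorSearch instance (setup omitted).
retriever = dvs.as_retriever(
    search_type="mmr",
    search_kwargs={"k": 5, "fetch_k": 20},
)
docs = retriever.get_relevant_documents("what changed in index support?")
```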

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-12 12:51:37 -08:00
Erick Friis
42648061ad openai[patch]: code cleaning (#17355)
h/t @tdene for finding cleanup op in #17047
2024-02-12 12:36:12 -08:00
Harrison Chase
a9d6da609a add self discover notebook (#17387) 2024-02-12 09:38:43 -08:00
ByeongUk Choi
ac970c9497 Update Docs for TFIDFRetriever Import Path (#17322)
This PR updates the `TF-IDF.ipynb` documentation to reflect the new
import path for TFIDFRetriever in the langchain-community package. The
previous path, `from langchain.retrievers import TFIDFRetriever`, has
been updated to `from langchain_community.retrievers import
TFIDFRetriever` to align with the latest changes in the langchain
library.
2024-02-11 21:26:08 -08:00
Michael Hunger
1c902ce3d1 tools:docs: update google_search.ipynb - change tool name (#17354)
according to https://youtu.be/rZus0JtRqXE?si=aFo1JTDnu5kSEiEN&t=678 by
@efriis

- **Description:** Seems the requirements for tool names have changed
and spaces are no longer allowed. Changed the tool name from Google
Search to google_search in the notebook
  - **Issue:** n/a
  - **Dependencies:** none
  - **Twitter handle:** @mesirii
2024-02-11 21:25:19 -08:00
Massimiliano Pronesti
3894b4d9a5 community: add gpt-4-turbo and gpt-4-0125 costs (#17349)
Ref: https://openai.com/pricing
2024-02-11 21:24:24 -08:00
jiangzf93
d6a1c88ca7 docs: update documentation for file system tool integration (#17377)
- **Description:** Update the docs for the tool integration module `file
system`
- **Issue:** [For New Contributors: Update Integration Documentation
#15664](https://github.com/langchain-ai/langchain/issues/15664#top)
  - **Dependencies:** N/A
2024-02-11 21:19:40 -08:00
Pennlaine
2384267900 Updated doc for tools/pubmed with new functions: invoke. (#17378)
Updated doc for integrations/chat/anthropic_functions #15664 

  - **Description:**
Adds `pip install` instructions
Update `run` with `invoke`

  - **Issue:** 
Fixes #15664
2024-02-11 21:19:31 -08:00
Tomaz Bratanic
19a1c9183d Improve graph cypher qa prompt (#17380)
Unlike vector results, the LLM has to completely trust the context of a
graph database result, even if it doesn't provide the whole context. We
tried with instructions, but it seems that adding a single example is
the way to go to solve this issue.
2024-02-11 21:15:46 -08:00
Sandeep Banerjee
183daa6e6f google-genai[patch]: on_llm_new_token fix (#16924)
### This pull request makes the following changes:
* Fixed issue #16913

Fixed the google gen ai chat_models.py code to make sure that the
callback is called before the token is yielded

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-09 18:00:24 -08:00
Bagatur
10c10f2dea cli[patch]: integration template nits (#14691)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-09 17:59:34 -08:00
Erick Friis
99540d3d75 infra: no print in newer partner packages (#17353)
2024-02-09 16:40:02 -08:00
William FH
7c03cc5ed4 Support serialization when inputs/outputs contain generators (#17338)
Pydantic's `dict()` function raises an error here if you pass in a
generator. We have a more robust serialization function in langsmith
that we will use instead.
2024-02-09 16:24:54 -08:00
Erick Friis
3a2eb6e12b infra: add print rule to ruff (#16221)
Added noqa for existing prints. Can slowly remove / will prevent more
being intro'd
2024-02-09 16:13:30 -08:00
Jael Gu
c07c0da01a community[patch]: Fix Milvus add texts when ids=None (#17021)
- **Description:** Fix Milvus add texts when ids=None (auto_id=True)

Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-09 18:48:37 -05:00
Quang Hoa
54c1fb3f25 community[patch]: Make some functions work with Milvus (#10695)
**Description**
Make some functions work with Milvus:
1. get_ids: Get primary keys by field in the metadata
2. delete: Delete one or more entities by ids
3. upsert: Update/Insert one or more entities

**Issue**
None
**Dependencies**
None
**Tag maintainer:**
@hwchase17 
**Twitter handle:**
None

---------

Co-authored-by: HoaNQ9 <hoanq.1811@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-09 15:21:31 -08:00
kYLe
c9999557bf community[patch]: Modify LLMs/Anyscale work with OpenAI API v1 (#14206)
- **Description:** 
1. Modify LLMs/Anyscale to work with OAI v1
2. Get rid of openai_ prefixed variables in Chat_model/ChatAnyscale
3. Modify `anyscale_api_base` to `anyscale_base_url` to follow OAI name
convention (reverted)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-09 15:11:18 -08:00
Charlie Marsh
24c0bab57b infra, multiple: Upgrade configuration for Ruff v0.2.0 (#16905)
## Summary

This PR upgrades LangChain's Ruff configuration in preparation for
Ruff's v0.2.0 release. (The changes are compatible with Ruff v0.1.5,
which LangChain uses today.) Specifically, we're now warning when
linter-only options are specified under `[tool.ruff]` instead of
`[tool.ruff.lint]`.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-09 14:28:02 -08:00
Bagatur
01409add5a google-vertexai[patch]: rm deps (#17077)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-09 14:12:10 -08:00
Erick Friis
d9e7675f7e templates: gemini-functions-agent readme update (#17288) 2024-02-09 14:10:23 -08:00
Erick Friis
1c2facf88d nvidia-ai-endpoints[patch]: release 0.0.3 (#17345) 2024-02-09 13:55:01 -08:00
Vadim Kudlay
5f9ac6986e nvidia-ai-endpoints[patch]: model arguments (e.g. temperature) on construction bug (#17290)
- **Issue:** Issue with model argument support (been there for a while
actually):
- Non-specially-handled arguments like temperature don't work when
passed through constructor.
- Such arguments DO work quite well with `bind`, but also do not abide
by field requirements.
- Since initial push, server-side error messages have gotten better and
v0.0.2 raises better exceptions. So maybe it's better to let server-side
handle such issues?
- **Description:**
- Removed ChatNVIDIA's argument fields in favor of
`model_kwargs`/`model_kws` arguments which aggregates constructor kwargs
(from constructor pathway) and merges them with call kwargs (bind
pathway).
- Shuffled a few functions from `_NVIDIAClient` to `ChatNVIDIA` to
streamline construction for future integrations.
- Minor/Optional: Old services didn't have stop support, so client-side
stopping was implemented. Now do both.
- **Any Breaking Changes:** Minor breaking changes if you strongly rely
on chat_model.temperature, etc. This is captured by
chat_model.model_kwargs.

PR passes tests and example notebooks and example testing. Still gonna
chat with some people, so leaving as draft for now.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-09 13:46:02 -08:00
Leonid Ganeline
932c52c333 community[patch]: docstrings (#16810)
- added missing docstrings
- formatted docstrings to a consistent form
2024-02-09 12:48:57 -08:00
Leonid Ganeline
ae66bcbc10 core[patch]: docstring update (#16813)
- added missing docstrings
- formatted docstrings to a consistent form
2024-02-09 12:47:41 -08:00
Eugene Yurtsev
e10030e241 core[patch]: Add unit test to cover different streaming format for json parsing (#17063)
Add unit test to cover this issue:

https://github.com/langchain-ai/langchain/issues/16423

which was resolved by this PR:

https://github.com/langchain-ai/langchain/pull/16670/files

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-09 11:28:55 -05:00
Kononov Pavel
15bc201967 langchain_community: Fix typo bug (#17324)
Problem from #17095

This error wasn't present in v1.4.0
2024-02-09 11:27:33 -05:00
Eugene Yurtsev
344a227b5b CI: Update documentation template (#17325)
Update the documentation template
2024-02-09 11:27:18 -05:00
Erick Friis
023cb59e8a templates: gemini-functions-agent genai package bump (#17286) 2024-02-08 19:47:58 -08:00
Erick Friis
e660a1685b google-genai[patch]: release 0.0.8 (#17285) 2024-02-08 19:39:44 -08:00
Erick Friis
12d3159dd6 templates: simplify tool in gemini-functions-agent 2 (#17283) 2024-02-08 19:39:29 -08:00
Erick Friis
febf9540b9 google-genai[patch]: fix tool format, use protos (#17284) 2024-02-08 19:36:49 -08:00
Erick Friis
d8913b9428 templates: simplify tool in gemini-functions-agent (#17282) 2024-02-08 19:09:27 -08:00
German Martin
1032faba5f langchain_google_genai : Add missing _identifying_params property. (#17224)
Description: The missing _identifying_params property creates issues when
dealing with callbacks that read the current run's model parameters.
All other partner model implementations provide this property and also
provide _default_params. I'm not sure about the default values to
include, or whether we can re-use the same ones as _VertexAICommon(); this
change allows you to access the model parameters correctly.
Issue: Not exactly this issue but could be related
https://github.com/langchain-ai/langchain/issues/14711
Twitter handle:@musicaoriginal2
2024-02-08 17:40:21 -08:00
Erick Friis
e4da7918f3 google-genai[patch]: fix streaming, function calling (#17268) 2024-02-08 17:29:53 -08:00
Ruben Hakopian
96b5711a0c google-vertexai[patch]: Fixed SafetySettings handling in streaming API in VertexAI (#17278)
The streaming API doesn't separate safety_settings from the
generation_config payload. As a result, the following error is observed
when using the `stream` API. The functionality is correct with the `invoke` API.

The fix separates the `safety_settings` from params and sets it as
argument to the `send_message` method.

```
ERROR:         Unknown field for GenerationConfig: safety_settings
Traceback (most recent call last):
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/chatbot-worker-main-Ju-qIM-X-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 250, in stream
    raise e
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/chatbot-worker-main-Ju-qIM-X-py3.12/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 234, in stream
    for chunk in self._stream(
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/chatbot-worker-main-Ju-qIM-X-py3.12/lib/python3.12/site-packages/langchain_google_vertexai/chat_models.py", line 501, in _stream
    for response in responses:
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/chatbot-worker-main-Ju-qIM-X-py3.12/lib/python3.12/site-packages/vertexai/generative_models/_generative_models.py", line 921, in _send_message_streaming
    for chunk in stream:
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/chatbot-worker-main-Ju-qIM-X-py3.12/lib/python3.12/site-packages/vertexai/generative_models/_generative_models.py", line 514, in _generate_content_streaming
    request = self._prepare_request(
              ^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/chatbot-worker-main-Ju-qIM-X-py3.12/lib/python3.12/site-packages/vertexai/generative_models/_generative_models.py", line 256, in _prepare_request
    gapic_generation_config = gapic_content_types.GenerationConfig(
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/user/Library/Caches/pypoetry/virtualenvs/chatbot-worker-main-Ju-qIM-X-py3.12/lib/python3.12/site-packages/proto/message.py", line 576, in __init__
    raise ValueError(
ValueError: Unknown field for GenerationConfig: safety_settings
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-08 17:25:28 -08:00
Kartheek Yakkala
b18c6ab9ad docs: Added LangGraph in framework parts of readme file (#17279)
2024-02-08 17:19:47 -08:00
Bagatur
65e97c9b53 infra: mv SQLDatabase tests to community (#17276) 2024-02-08 17:05:43 -08:00
Bagatur
72c7af0bc0 langchain[patch]: undo redis cache import (#17275) 2024-02-08 16:39:55 -08:00
Bagatur
8bad4157ad langchain[patch]: Release 0.1.6 (#17133) 2024-02-08 16:25:06 -08:00
Bagatur
7fa4dc593f core[patch]: Release 0.1.22 (#17274) 2024-02-08 16:13:33 -08:00
Bagatur
02ef9164b5 langchain[patch]: expose cohere rerank score, add parent doc param (#16887) 2024-02-08 16:07:18 -08:00
Bagatur
35c1bf339d infra: rm boto3, gcaip from pyproject (#17270) 2024-02-08 15:28:22 -08:00
Leonid Ganeline
389b055bd6 docs: Toolkits menu (#16217)
The Integrations `Toolkits` menu was named as [`Agents and
toolkits`](https://python.langchain.com/docs/integrations/toolkits).
This name is historical and no longer accurate. Now this
menu is all about community `Toolkits`. There is a separate menu for
[Agents](https://python.langchain.com/docs/modules/agents/). Also Agents
are officially not part of Integrations (Community package) but part of
LangChain package.
2024-02-08 14:52:26 -08:00
Alex
de5e96b5f9 community[patch]: updated openai prices in mapping (#17009)
- **Description:** there is a January price update for ChatGPT
([blog](https://openai.com/blog/new-embedding-models-and-api-updates));
there are also updates on the [pricing](https://openai.com/pricing)
page of their website
- **Issue:** N/A

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-08 14:43:44 -08:00
Mohammad Mohtashim
e35c7fa3b2 [Langchain_core]: Added Docstring for RunnableConfigurableAlternatives (#17263)
I noticed that RunnableConfigurableAlternatives, which is an important
composition in LCEL, had no docstring. Therefore I added a detailed
docstring for it.
@baskaryan, @eyurtsev, @hwchase17 please have a look and let me know if
the docstring looks good.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-08 17:05:33 -05:00
Armin Stepanyan
641efcf41c community: add runtime kwargs to HuggingFacePipeline (#17005)
This PR enables changing the behaviour of huggingface pipeline between
different calls. For example, before this PR there's no way of changing
maximum generation length between different invocations of the chain.
This is desirable in cases, such as when we want to scale the maximum
output size depending on a dynamic prompt size.

Usage example:

```python
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
hf = HuggingFacePipeline(pipeline=pipe)

hf("Say foo:", pipeline_kwargs={"max_new_tokens": 42})
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-08 13:58:31 -08:00
Scott Nath
a32798abd7 community: Add you.com utility, update you retriever integration docs (#17014)

- **Description:** changes to you.com files
    - general cleanup
- adds community/utilities/you.py, moving bulk of code from retriever ->
utility
    - removes `snippet` as endpoint
    - adds `news` as endpoint
    - adds more tests

- **Issue:** [For New Contributors: Update Integration
Documentation](https://github.com/langchain-ai/langchain/issues/15664#issuecomment-1920099868)
- **Dependencies:** n/a
- **Twitter handle:** @scottnath
- **Mastodon handle:** scottnath@mastodon.social

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-08 13:47:50 -08:00
joelsprunger
3984f6604f langchain: adds recursive json splitter (#17144)
- **Description:** This adds a recursive json splitter class to the
existing text_splitters as well as unit tests
- **Issue:** splitting text from structured data can cause issues if you
have a large nested json object and you split it as regular text you may
end up losing the structure of the json. To mitigate against this you
can split the nested json into large chunks and overlap them, but this
causes unnecessary text processing and there will still be times where
the nested json is so big that the chunks get separated from the parent
keys.

As an example you wouldn't want the following to be split in half:
```shell
{'val0': 'DFWeNdWhapbR',
 'val1': {'val10': 'QdJo',
          'val11': 'FWSDVFHClW',
          'val12': 'bkVnXMMlTiQh',
          'val13': 'tdDMKRrOY',
          'val14': 'zybPALvL',
          'val15': 'JMzGMNH',
          'val16': {'val160': 'qLuLKusFw',
                    'val161': 'DGuotLh',
                    'val162': 'KztlcSBropT',
-----------------------------------------------------------------------split-----
                    'val163': 'YlHHDrN',
                    'val164': 'CtzsxlGBZKf',
                    'val165': 'bXzhcrWLmBFp',
                    'val166': 'zZAqC',
                    'val167': 'ZtyWno',
                    'val168': 'nQQZRsLnaBhb',
                    'val169': 'gSpMbJwA'},
          'val17': 'JhgiyF',
          'val18': 'aJaqjUSFFrI',
          'val19': 'glqNSvoyxdg'}}
```
Any llm processing the second chunk of text may not have the context of
val1, and val16 reducing accuracy. Embeddings will also lack this
context and this makes retrieval less accurate.

Instead you want it to be split into chunks that retain the json
structure.
```shell
{'val0': 'DFWeNdWhapbR',
 'val1': {'val10': 'QdJo',
          'val11': 'FWSDVFHClW',
          'val12': 'bkVnXMMlTiQh',
          'val13': 'tdDMKRrOY',
          'val14': 'zybPALvL',
          'val15': 'JMzGMNH',
          'val16': {'val160': 'qLuLKusFw',
                    'val161': 'DGuotLh',
                    'val162': 'KztlcSBropT',
                    'val163': 'YlHHDrN',
                    'val164': 'CtzsxlGBZKf'}}}
```
and
```shell
{'val1':{'val16':{
                    'val165': 'bXzhcrWLmBFp',
                    'val166': 'zZAqC',
                    'val167': 'ZtyWno',
                    'val168': 'nQQZRsLnaBhb',
                    'val169': 'gSpMbJwA'},
          'val17': 'JhgiyF',
          'val18': 'aJaqjUSFFrI',
          'val19': 'glqNSvoyxdg'}}
```
This recursive json text splitter does this. Values that contain a list
can be converted to a dict first by using split(..., convert_lists=True);
otherwise long lists will not be split, and you may end up with chunks
larger than the max chunk size.

In my testing, large json objects could be split into small chunks with:
- increased question answering accuracy
- the ability to split into smaller chunks, meaning retrieval queries can
use fewer tokens


- **Dependencies:** json import added to text_splitter.py, and random
added to the unit test
  - **Twitter handle:** @joelsprunger
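
A short usage sketch (class and method names follow the description above; treat them as assumptions):

```python
from langchain.text_splitter import RecursiveJsonSplitter

nested = {"val0": "DFWeNdWhapbR",
          "val1": {"val10": "QdJo", "val16": {"val160": "qLuLKusFw"}}}

splitter = RecursiveJsonSplitter(max_chunk_size=100)
chunks = splitter.split_json(json_data=nested, convert_lists=True)
for chunk in chunks:
    print(chunk)  # each chunk is a dict that preserves its nesting path
```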

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-08 13:45:34 -08:00
Schalkje
f0ada1a396 docs: Update quickstart.mdx - Fix 422 error in example with LangServe client code (#17163)
**Description:**: Fix 422 error in example with LangServe client code

httpx.HTTPStatusError: Client error '422 Unprocessable Entity' for url
'http://localhost:8000/agent/invoke'
2024-02-08 13:35:39 -08:00
Leonid Kuligin
1862900078 google-genai[patch]: added parsing of function call / response (#17245) 2024-02-08 13:34:46 -08:00
Cailin Wang
a210a8bc53 langchain[patch]: Fix create_retriever_tool missing on_retriever_end Document content (#16933)
- **Description:** In create_retriever_tool, fix the missing Document
content for on_retriever_end, caused by the missing callbacks parameter
in create_retriever_tool,
  - **Twitter handle:** @CailinWang_

---------

Co-authored-by: root <root@Bluedot-AI>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-08 13:18:43 -08:00
Kartheek Yakkala
3a22157d92 docs: Added LCEL for alibabacloud and anyscale (#17252)
---------

Co-authored-by: KARTHEEK YAKKALA <kartheekyakkala@KARTHEEKs-Air.lan>
Co-authored-by: KARTHEEK YAKKALA <kartheekyakkala.se@gmail.com>
2024-02-08 13:18:09 -08:00
Sparsh Jain
a2167614b7 google-genai[patch]: Invoke callback prior to yielding token (#17092)
- **Description:** Invoke callback prior to yielding token in stream and
astream methods for Google-genai,
  - **Issue:** the issue # 16913,
  - **Twitter handle:** Sparsh10649446

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-08 13:13:46 -08:00
Liang Zhang
7306600e2f community[patch]: Support SerDe transform functions in Databricks LLM (#16752)
**Description:** Databricks LLM does not support SerDe the
transform_input_fn and transform_output_fn. After saving and loading,
the LLM will be broken. This PR serialize these functions into a hex
string using pickle, and saving the hex string in the yaml file. Using
pickle to serialize a function can be flaky, but this is a simple
workaround that unblocks many use cases. If more sophisticated SerDe is
needed, we can improve it later.

Test:
Added a simple unit test.
I did manual test on Databricks and it works well.
The saved yaml looks like:
```
llm:
      _type: databricks
      cluster_driver_port: null
      cluster_id: null
      databricks_uri: databricks
      endpoint_name: databricks-mixtral-8x7b-instruct
      extra_params: {}
      host: e2-dogfood.staging.cloud.databricks.com
      max_tokens: null
      model_kwargs: null
      n: 1
      stop: null
      task: null
      temperature: 0.0
      transform_input_fn: 80049520000000000000008c085f5f6d61696e5f5f948c0f7472616e73666f726d5f696e7075749493942e
      transform_output_fn: null
```

@baskaryan

```python
from langchain_community.embeddings import DatabricksEmbeddings
from langchain_community.llms import Databricks
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
import mlflow

embeddings = DatabricksEmbeddings(endpoint="databricks-bge-large-en")

def transform_input(**request):
  request["messages"] = [
    {
      "role": "user",
      "content": request["prompt"]
    }
  ]
  del request["prompt"]
  return request

llm = Databricks(endpoint_name="databricks-mixtral-8x7b-instruct", transform_input_fn=transform_input)

persist_dir = "faiss_databricks_embedding"

# Create the vector db, persist the db to a local fs folder
loader = TextLoader("state_of_the_union.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
db = FAISS.from_documents(docs, embeddings)
db.save_local(persist_dir)

def load_retriever(persist_directory):
    embeddings = DatabricksEmbeddings(endpoint="databricks-bge-large-en")
    vectorstore = FAISS.load_local(persist_directory, embeddings)
    return vectorstore.as_retriever()

retriever = load_retriever(persist_dir)
retrievalQA = RetrievalQA.from_llm(llm=llm, retriever=retriever)
with mlflow.start_run() as run:
    logged_model = mlflow.langchain.log_model(
        retrievalQA,
        artifact_path="retrieval_qa",
        loader_fn=load_retriever,
        persist_dir=persist_dir,
    )

# Load the retrievalQA chain
loaded_model = mlflow.pyfunc.load_model(logged_model.model_uri)
print(loaded_model.predict([{"query": "What did the president say about Ketanji Brown Jackson"}]))

```
2024-02-08 13:09:50 -08:00
cjpark-data
ce22e10c4b community[patch]: Fix KeyError 'embedding' (MongoDBAtlasVectorSearch) (#17178)
- **Description:**
The embedding field name was hard-coded as "embedding",
so this changes `res["embedding"]` to
`res[self._embedding_key]`.
  - **Issue:** #17177,
- **Twitter handle:**
[@bagcheoljun17](https://twitter.com/bagcheoljun17)
2024-02-08 12:06:42 -08:00
Neli Hateva
9bb5157a3d langchain[patch], community[patch]: Fixes in the Ontotext GraphDB Graph and QA Chain (#17239)
- **Description:** Fixes in the Ontotext GraphDB Graph and QA Chain
related to the error handling in case of invalid SPARQL queries, for
which `prepareQuery` doesn't throw an exception, but the server returns
400 and the query is indeed invalid
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @OntotextGraphDB
2024-02-08 12:05:43 -08:00
ByeongUk Choi
b88329e9a5 community[patch]: Implement Unique ID Enforcement in FAISS (#17244)
**Description:**
Implemented unique ID validation in the FAISS component to ensure all
document IDs are distinct. This update resolves issues related to
non-unique IDs, such as inconsistent behavior during deletion processes.
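
The guard amounts to something like this hypothetical helper (a sketch, not the exact FAISS code):

```python
from typing import List, Optional

def ensure_unique_ids(ids: Optional[List[str]]) -> None:
    """Reject duplicate document IDs before they reach the index."""
    if ids is None:
        return
    duplicates = {i for i in ids if ids.count(i) > 1}
    if duplicates:
        raise ValueError(f"Duplicate ids found: {duplicates}")
```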
2024-02-08 12:03:33 -08:00
Jorge Campo
88609565a3 docs: Fix typo in github.ipynb (#17259)
'agiven' -> 'a given'
2024-02-08 12:03:00 -08:00
Bagatur
852973d616 langchain[minor], core[minor]: update json, pydantic parser. add openai-json structured output runnable (#16914) 2024-02-08 11:59:06 -08:00
hsuyuming
e22c4d4eb0 google-vertexai[patch]: fix _parse_response_candidate issue (#16647)
**Description:** enable _parse_response_candidate to support complex
structured formats.
  **Issue:**
Currently, if Gemini responds with a complex args format, users will get a
"TypeError: Object of type RepeatedComposite is not JSON serializable"
error from _parse_response_candidate.

Response candidate example:
```
content {
  role: "model"
  parts {
    function_call {
      name: "Information"
      args {
        fields {
          key: "people"
          value {
            list_value {
              values {
                string_value: "Joe is 30, his mom is Martha"
              }
            }
          }
        }
      }
    }
  }
}
finish_reason: STOP
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}
```
 
Error message:
```
Traceback (most recent call last):
  File "/home/jupyter/user/abehsu/gemini_langchain_tools/example2.py", line 36, in <module>
    print(tagging_chain.invoke({"input": "Joe is 30, his mom is Martha"}))
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2053, in invoke
    input = step.invoke(
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3887, in invoke
    return self.bound.invoke(
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 165, in invoke
    self.generate_prompt(
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 543, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 407, in generate
    raise e
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 397, in generate
    self._generate_with_cache(
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 576, in _generate_with_cache
    return self._generate(
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/site-packages/langchain_google_vertexai/chat_models.py", line 406, in _generate
    generations = [
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/site-packages/langchain_google_vertexai/chat_models.py", line 408, in <listcomp>
    message=_parse_response_candidate(c),
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/site-packages/langchain_google_vertexai/chat_models.py", line 280, in _parse_response_candidate
    function_call["arguments"] = json.dumps(
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/json/encoder.py", line 199, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/json/encoder.py", line 257, in iterencode
    return _iterencode(o, 0)
  File "/opt/conda/envs/gemini_langchain_tools/lib/python3.10/json/encoder.py", line 179, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type RepeatedComposite is not JSON serializable
```
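
One generic way to make such protobuf containers serializable is to recursively convert them to plain Python types before calling `json.dumps`; a sketch of the idea (not necessarily the exact fix in this PR):

```python
from collections.abc import Mapping, Sequence


def proto_to_plain(value):
    """Recursively convert proto Map/Repeated containers to dict/list."""
    if isinstance(value, Mapping):
        return {key: proto_to_plain(val) for key, val in value.items()}
    if isinstance(value, Sequence) and not isinstance(value, (str, bytes)):
        return [proto_to_plain(item) for item in value]
    return value


# json.dumps(proto_to_plain(function_call.args)) succeeds where
# json.dumps(function_call.args) raised the TypeError above.
```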
  

  **Twitter handle:**  @abehsu1992626
2024-02-08 11:48:25 -08:00
Erick Friis
d77bb7b4e9 google-vertexai[patch]: integration test fix, release 0.0.5 (#17258) 2024-02-08 11:45:33 -08:00
Aditya
98176ac982 langchain_google_vertexai : added logic to override get_num_tokens_from_messages() for ChatVertexAI (#16784)
- **Description:** added logic to override get_num_tokens_from_messages()
for ChatVertexAI. Previously, ChatVertexAI inherited
get_num_tokens_from_messages() from BaseChatModel, which in turn
called the GPT-2 tokenizer.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @aditya_rane

@lkuligin for review
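
The shape of such an override, as a sketch with a fake tokenizer client standing in for the provider's tokenization endpoint (the real PR wires this to Vertex AI instead):

```python
from typing import List

from langchain_core.messages import BaseMessage, get_buffer_string


class FakeTokenizerClient:
    """Stands in for the provider's tokenization endpoint (hypothetical)."""

    def count_tokens(self, text: str) -> int:
        return len(text.split())  # crude placeholder


def get_num_tokens_from_messages(
    client: FakeTokenizerClient, messages: List[BaseMessage]
) -> int:
    # Route token counting through the provider's own tokenizer instead of
    # the GPT-2 fallback inherited from BaseChatModel.
    return client.count_tokens(get_buffer_string(messages))
```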

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
Co-authored-by: Leonid Kuligin <lkuligin@yandex.ru>
2024-02-08 11:30:42 -08:00
Bagatur
00a09e1b71 docs: use PromptTemplate.from_template (#17218)
Ran
```python
import glob
import re

def update_prompt(x):
    return re.sub(
        r"(?P<start>\b)PromptTemplate\(template=(?P<template>.*), input_variables=(?:.*)\)",
        "\g<start>PromptTemplate.from_template(\g<template>)",
        x
    )


for fn in glob.glob("docs/**/*", recursive=True):
    try:
        content = open(fn).readlines()
    except Exception:  # skip files that can't be read as text
        continue
    content = [update_prompt(l) for l in content]
    with open(fn, "w") as f:
        f.write("".join(content))
```
2024-02-07 19:52:42 -08:00
sana-google
7f55c95790 docs: add missing link to Quickstart (#17085)
- **Description:** Added missing link for Quickstart in Model IO
documentation,
  - **Issue:** N/A,
  - **Dependencies:** N/A,
  - **Twitter handle:** N/A


Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-07 22:26:10 -05:00
Bassem Yacoube
4e3ed7f043 community[patch]: octoai embeddings bug fix (#17216)
Fixes a bug in the OctoAI embeddings provider.
2024-02-07 22:25:52 -05:00
Eugene Yurtsev
780e84ae79 community[minor]: SQLDatabase Add fetch mode cursor, query parameters, query by selectable, expose execution options, and documentation (#17191)
- **Description:** Improve `SQLDatabase` adapter component to promote
code re-use, see
[suggestion](https://github.com/langchain-ai/langchain/pull/16246#pullrequestreview-1846590962).
  - **Needed by:** GH-16246
  - **Addressed to:** @baskaryan, @cbornet 

## Details
- Add `cursor` fetch mode
- Accept SQL query parameters
- Accept both `str` and SQLAlchemy selectables as query expression
- Expose `execution_options`
- Documentation page (notebook) about `SQLDatabase` [^1]
See [About
SQLDatabase](https://github.com/langchain-ai/langchain/blob/c1c7b763/docs/docs/integrations/tools/sql_database.ipynb).

[^1]: Apparently there hasn't been any yet?
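
Assuming the parameters landed as described above, usage looks roughly like this (the SQLite URI and table are placeholders):

```python
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database

# Parameterized query:
rows = db.run(
    "SELECT * FROM users WHERE name = :name",
    parameters={"name": "Alice"},
)

# Server-side cursor instead of materializing all rows:
cursor = db.run("SELECT * FROM users", fetch="cursor")
```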

---------

Co-authored-by: Andreas Motl <andreas.motl@crate.io>
2024-02-07 22:23:43 -05:00
Tomaz Bratanic
7e4b676d53 community[patch]: Better error propagation for neo4jgraph (#17190)
There are other errors that could happen when refreshing the schema, so
we want to propagate specific errors for more clarity.
2024-02-07 22:16:14 -05:00
Leonid Ganeline
d903fa313e docs: titles fix (#17206)
Several notebooks have Title != file name. That results in corrupted
sorting in Navbar (ToC).
- Fixed titles and file names.
- Changed text formats to the consistent form
- Redirected renamed files in the `Vercel.json`
2024-02-07 22:09:34 -05:00
Luiz Ferreira
34d2daffb3 community[patch]: Fix chat openai unit test (#17124)
- **Description:** 
Actually the test named `test_openai_apredict` isn't testing the
apredict method from ChatOpenAI.
  - **Twitter handle:**
  https://twitter.com/OAlmofadas
2024-02-07 22:08:26 -05:00
Dmitry Kankalovich
f92738a6f6 langchain[minor], community[minor], core[minor]: Async Cache support and AsyncRedisCache (#15817)
* This PR adds async methods to the LLM cache. 
* Adds an implementation using Redis called AsyncRedisCache.
* Adds a docker compose file under `/docker` to help spin up containers for testing
* Updates redis tests to use a context manager so flushing always happens by default
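
A rough usage sketch, assuming the cache class and its constructor argument landed as named here:

```python
from redis.asyncio import Redis

from langchain.globals import set_llm_cache
from langchain_community.cache import AsyncRedisCache

# Async LLM calls (ainvoke/astream) now hit the cache without blocking.
set_llm_cache(AsyncRedisCache(redis_=Redis.from_url("redis://localhost:6379")))
```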
2024-02-07 22:06:09 -05:00
Harrison Chase
19546081c6 templates: add gemini functions agent (#17141)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-07 17:27:01 -08:00
Bagatur
aeb6b38901 docs: cleanup fleet integration (#17214)
Causing search issues
2024-02-07 17:18:48 -08:00
Erick Friis
4153837502 google-genai[patch]: release 0.0.7 (#17193) 2024-02-07 17:15:09 -08:00
Erick Friis
927ab77d6e google-genai[patch]: no error for FunctionMessage (#17215)
Both should eventually match this:
https://github.com/langchain-ai/langchain/blob/master/libs/partners/google-vertexai/langchain_google_vertexai/chat_models.py#L179

But this seems undocumented / I can't find the types in the genai package
2024-02-07 17:14:50 -08:00
Erick Friis
2ecf318218 google-genai[patch]: match function call interface (#17213)
should match vertex
2024-02-07 17:07:31 -08:00
Erick Friis
e17173c403 google-vertexai[patch]: function calling integration test (#17209) 2024-02-07 15:49:56 -08:00
Erick Friis
52be84a603 google-vertexai[patch]: serializable citation metadata, release 0.0.4 (#17145)
was breaking in langserve before
2024-02-07 15:47:32 -08:00
Nuno Campos
19ff81e74f Fix stream events/log with some kinds of non addable output (#17205)
2024-02-07 15:46:13 -08:00
Bagatur
6f1403b9b6 community[patch]: Release 0.0.19 (#17207)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-07 15:37:01 -08:00
Erick Friis
a13dc47a08 cli[patch]: copyright 2024 default (#17204) 2024-02-07 14:52:37 -08:00
Bagatur
00757567ba core[patch]: Release 0.1.21 (#17202) 2024-02-07 14:20:20 -08:00
Bagatur
af74301ab9 core[patch], community[patch]: link extraction continue on failure (#17200) 2024-02-07 14:15:30 -08:00
Henry
2281f00198 langchain: Standardize output_parser.py across all agent types for custom FORMAT_INSTRUCTIONS (#17168)
- **Description:** 
This PR standardizes the `output_parser.py` file across all agent types
to ensure a uniform parsing mechanism is implemented. It introduces a
cohesive structure and common interface for output parsing, facilitating
easier modifications and extensions by users. The standardized approach
enhances maintainability and scalability of the codebase by providing a
consistent pattern for output parsing, which can be easily understood
and utilized across different agent types.

This PR builds upon the foundation set by a previously merged PR, which
focused exclusively on standardizing the `output_parser.py` for the
`conversational_agent` ([PR
#16945](https://github.com/langchain-ai/langchain/pull/16945)). With
this new update, I extend the standardization efforts to encompass
`output_parser.py` files across all agent types. This enhancement not
only unifies the parsing mechanism across the board but also introduces
the flexibility for users to incorporate custom `FORMAT_INSTRUCTIONS`.

  - **Issue:** 
https://github.com/langchain-ai/langchain/issues/10721
https://github.com/langchain-ai/langchain/issues/4044

  - **Dependencies:**
No new dependencies required for this change

  - **Twitter handle:**
My GitHub username is enough. Thanks.

I hope you accept my PR.
2024-02-07 13:46:17 -08:00
Erick Friis
1cf5a5858f remove pg_essay.txt (#17198)
Added in #16159
2024-02-07 12:58:01 -08:00
Tomaz Bratanic
ecf8042a10 templates: Add neo4j semantic layer with ollama template (#17192)
A template with a JSON-based agent using Mixtral via Ollama.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-07 12:50:54 -08:00
Erick Friis
f87acf0340 infra: better conditional (#17197) 2024-02-07 12:49:02 -08:00
Erick Friis
4ae91733aa infra: fix core release (#17195)
core doesn't have any min deps to test
2024-02-07 12:35:27 -08:00
Bagatur
78409634fe core[patch]: Release 0.1.20 (#17194) 2024-02-07 12:28:05 -08:00
Nuno Campos
65798289a4 core[minor]: Use batched tracing in sdk (#16305)
Remove threadpool executor usage in langchain tracer, this is now
handled by sdk
2024-02-07 12:10:58 -08:00
chyroc
f87b38a559 google-genai[minor]: support functions call (#15146)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-07 12:09:30 -08:00
Tomaz Bratanic
302989a2b1 allow optional newline in the action responses of JSON Agent parser (#17186)
Based on my experiments, the newline isn't always there, so we can make
the regex slightly more robust by allowing an optional newline after the
backticks.
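
The idea, as a standalone sketch (the pattern below is illustrative, not the parser's exact regex):

```python
import re

# `\n?` tolerates responses with or without a newline after the backticks.
pattern = re.compile(r"```(?:json)?\n?(.*?)```", re.DOTALL)

with_newline = '```json\n{"action": "Final Answer"}\n```'
without_newline = '```json{"action": "Final Answer"}```'
assert pattern.search(with_newline) and pattern.search(without_newline)
```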
2024-02-07 10:26:14 -08:00
William FH
9fa07076da Add trace_as_chain_group metadata (#17187) 2024-02-07 09:42:44 -08:00
Leonid Ganeline
5ceaf784f3 docs Integraions/Components menu reordered (#17151)
This PR is opinionated.
- Moved `Embedding models` item to place after `LLMs` and `Chat model`,
so all items with models are together.
- Renamed `Text embedding models` to `Embedding models`. Now, it is
shorter and easier to read. `Text` is obvious from context. The same as
the `Text LLMs` vs. `LLMs` (we also have multi-modal LLMs).
2024-02-06 20:33:41 -08:00
Leonid Ganeline
0af0fc5d25 docs integraions/providers nav fix (#17148)
Issue: the `Providers` page is presented both as the index page (on the
`Providers` item) and as the `Providers/Providers` item. The latter
should not be in the menu. See the picture.

![image](https://github.com/langchain-ai/langchain/assets/2256422/6894023f-f13a-4f0d-8fe2-ed5b0ae2bdd2)
This PR fixes this.
2024-02-06 20:33:14 -08:00
Leonid Ganeline
bf55279d39 docs: tutorials update (#17132)
Added the course and the one-pager links
2024-02-06 20:30:30 -08:00
Erick Friis
f499a222de infra: release min version debugging 2 (#17152) 2024-02-06 18:20:19 -08:00
Erick Friis
deb02de051 infra: release min version debugging (#17150) 2024-02-06 18:10:37 -08:00
Erick Friis
9710346095 infra: poetry run min versions 2 (#17149) 2024-02-06 17:57:43 -08:00
Erick Friis
181a033226 infra: poetry run min versions (#17146)
2024-02-06 17:37:36 -08:00
Erick Friis
d397721a34 docs: format (#17143) 2024-02-06 16:32:53 -08:00
Erick Friis
2187268208 infra: fix release (#17142) 2024-02-06 16:22:20 -08:00
Erick Friis
3e58df43c2 mistralai[patch]: release 0.0.4 (#17139) 2024-02-06 16:05:20 -08:00
Erick Friis
22b6a03a28 infra: read min versions (#17135) 2024-02-06 16:05:11 -08:00
Erick Friis
f881a3330c mistralai[patch]: 16k token batching logic embed (#17136) 2024-02-06 15:59:08 -08:00
Arno Schutijzer
863f96b2e0 docs: fix typo in ollama notebook (#17127)
- **Description:** typo fix in ollama notebook
2024-02-06 16:54:40 -05:00
Leonid Ganeline
42c812a549 API References sorted Partner libs menu (#17130)
The `Partner libs` menu is not sorted. It is now long enough that items
should be sorted to simplify package search.
- Sorted items in the `Partner libs` menu
2024-02-06 16:49:23 -05:00
Bagatur
226f376d59 community[patch]: Release 0.0.18 (#17129)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-06 13:40:00 -08:00
Erick Friis
37062549f9 infra: update to cache v4 (#17126)
Stop using Node.js 16; use 20 (stops the deprecation annotations on all CI).

Changelog: https://github.com/actions/cache?tab=readme-ov-file#whats-new
2024-02-06 12:55:01 -08:00
Erick Friis
980e30c361 nvidia-ai-endpoints[patch]: release 0.0.2 (#17125) 2024-02-06 12:48:25 -08:00
Erick Friis
15bd1154a7 pinecone[patch]: integration test new namespace (#17121) 2024-02-06 11:56:00 -08:00
Erick Friis
3ccffa5dcc infra: add integration deps to partner lint (#17122) 2024-02-06 11:51:04 -08:00
Mikhail Khludnev
14ff1438e6 nvidia-trt[patch]: propagate InferenceClientException to the caller. (#16936)
- **Description:**

1. Propagate InferenceClientException to the caller.
2. Stop the gRPC receiver thread on exception.

Before the change I got:

```
        for token in result_queue:
>           result_str += token
E           TypeError: can only concatenate str (not "InferenceServerException") to str

../../langchain_nvidia_trt/llms.py:207: TypeError
```
And the stream thread kept running.

After the change, the request thread stops correctly and the caller gets the
root-cause exception:

```
E                   tritonclient.utils.InferenceServerException: [request id: 4529729] expected number of inputs between 2 and 3 but got 10 inputs for model 'vllm_model'

../../langchain_nvidia_trt/llms.py:205: InferenceServerException
```

  - **Twitter handle:** [t.me/mkhl_spb](https://t.me/mkhl_spb)
 
I'm not sure about test coverage. Should I set up deep mocks, or is there
some kind of Triton stub available via testcontainers?
2024-02-06 11:47:07 -08:00
Erick Friis
6af912d7e0 infra: add pinecone secret (#17120) 2024-02-06 11:27:04 -08:00
Junyoung Park
1ed73f1992 community[minor]: Add SelfQueryRetriever support to PGVector (#16991)
- **Description:** Add SelfQueryRetriever support to PGVector
  - **Issue:** -
  - **Dependencies:** -
  - **Twitter handle:** -
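
A usage sketch, assuming a running Postgres with the pgvector extension (the connection string, stand-in LLM, and embeddings are placeholders):

```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.llms.fake import FakeListLLM
from langchain_community.vectorstores import PGVector

# Assumes a running Postgres with the pgvector extension and indexed docs.
vectorstore = PGVector(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",
    embedding_function=FakeEmbeddings(size=16),  # stand-in embeddings
    collection_name="movies",
)
retriever = SelfQueryRetriever.from_llm(
    FakeListLLM(responses=["..."]),  # stand-in; use a real LLM in practice
    vectorstore,
    document_contents="Brief summary of a movie",
    metadata_field_info=[
        AttributeInfo(name="genre", description="Movie genre", type="string"),
    ],
)
```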

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-06 10:50:50 -08:00
Bagatur
cd945e3a5b core[patch]: Release 0.1.19 (#17117) 2024-02-06 09:54:22 -08:00
Frank
ef082c77b1 community[minor]: add github file loader to load any github file content b… (#15305)
### Description
support load any github file content based on file extension.  

Why not use [git
loader](https://python.langchain.com/docs/integrations/document_loaders/git#load-existing-repository-from-disk)
?
git loader clones the whole repo even only interested part of files,
that's too heavy. This GithubFileLoader only downloads that you are
interested files.
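
Usage looks roughly like this (the token and filter are placeholders):

```python
from langchain_community.document_loaders.github import GithubFileLoader

loader = GithubFileLoader(
    repo="langchain-ai/langchain",
    access_token="<GITHUB_ACCESS_TOKEN>",  # placeholder
    github_api_url="https://api.github.com",
    file_filter=lambda file_path: file_path.endswith(".md"),  # only .md files
)
documents = loader.load()
```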

### Twitter handle
my twitter: @shufanhaotop

---------

Co-authored-by: Hao Fan <h_fan@apple.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-06 09:42:33 -08:00
老阿張
ac662b3698 docs: Fix typo in amadeus.ipynb (#16916)
Description: "enviornment should be  environment"? 🤔
Issue: Typo
Dependencies: Nope
Twitter handle: laoazhang
2024-02-06 09:42:05 -08:00
Henry
eaeb8a5f71 langchain[patch]: output_parser.py in conversation_chat is customizable (#16945)
**Description:**
With this modification, users can customize the `FORMAT_INSTRUCTIONS`
template, allowing them to create their own prompts

As described in
[this](https://github.com/langchain-ai/langchain/issues/10721) issue,
`FORMAT_INSTRUCTIONS` is not customizable for the output parser
unless you create your own `ConvoOutputParser` class. To avoid this, the
change introduces a `format_instructions` variable that
users can customize with ease after initializing the agent.

For example:
```
agent = initialize_agent(
    agent = AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    tools = tools,
    llm = llm_agent,
    verbose = True,
    max_iterations = 3,
    early_stopping_method = 'generate',
    memory = b_w_memory,
    handle_parsing_errors = True,
    agent_kwargs={
        'system_message':PREFIX,
        'human_message':SUFFIX,
        'template_tool_response':TEMPLATE_TOOL_RESPONSE,
        }
)
agent.agent.output_parser.format_instructions = "MY CUSTOM FORMAT INSTRUCTIONS"
print(agent.agent.output_parser.get_format_instructions())
MY CUSTOM FORMAT INSTRUCTIONS
```

Other parameters like `system_message`, `human_message`, or
`template_tool_response` are already customizable and with this PR, the
last parameter `FORMAT_INSTRUCTIONS` in
`langchain.agents.conversational_chat.prompt` can be modified.


**Issue:**
https://github.com/langchain-ai/langchain/issues/10721

**Dependencies:**
No new dependencies required for this change

**Twitter handle:**
My GitHub username is enough. Thanks.

I hope you accept my PR.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-06 09:41:53 -08:00
Ryan Kraus
f027696b5f community: Added new Utility runnables for NVIDIA Riva. (#15966)
**Please tag this issue with `nvidia_genai`**

- **Description:** Added new Runnables for integration NVIDIA Riva into
LCEL chains for Automatic Speech Recognition (ASR) and Text To Speech
(TTS).
- **Issue:** N/A
- **Dependencies:** To use these runnables, the NVIDIA Riva client
libraries are required. If they are not installed, an error will be
raised explaining how to install them. The Runnables can be safely
imported without the riva client libraries.
- **Twitter handle:** N/A

All of the Riva Runnables are inside a single folder in the Utilities
module. In this folder are four files:
- common.py - Contains all code that is common to both TTS and ASR
- stream.py - Contains a class representing an audio stream that allows
the end user to put data into the stream like a queue.
- asr.py - Contains the RivaASR runnable
- tts.py - Contains the RivaTTS runnable

The following Python function is an example of creating a chain that
makes use of both of these Runnables:

```python
def create(
    config: Configuration,
    audio_encoding: RivaAudioEncoding,
    sample_rate: int,
    audio_channels: int = 1,
) -> Runnable[ASRInputType, TTSOutputType]:
    """Create a new instance of the chain."""
    _LOGGER.info("Instantiating the chain.")

    # create the riva asr client
    riva_asr = RivaASR(
        url=str(config.riva_asr.service.url),
        ssl_cert=config.riva_asr.service.ssl_cert,
        encoding=audio_encoding,
        audio_channel_count=audio_channels,
        sample_rate_hertz=sample_rate,
        profanity_filter=config.riva_asr.profanity_filter,
        enable_automatic_punctuation=config.riva_asr.enable_automatic_punctuation,
        language_code=config.riva_asr.language_code,
    )

    # create the prompt template
    prompt = PromptTemplate.from_template("{user_input}")

    # model = ChatOpenAI()
    model = ChatNVIDIA(model="mixtral_8x7b")  # type: ignore

    # create the riva tts client
    riva_tts = RivaTTS(
        url=str(config.riva_asr.service.url),
        ssl_cert=config.riva_asr.service.ssl_cert,
        output_directory=config.riva_tts.output_directory,
        language_code=config.riva_tts.language_code,
        voice_name=config.riva_tts.voice_name,
    )

    # construct and return the chain
    return {"user_input": riva_asr} | prompt | model | riva_tts  # type: ignore
```

The following code is an example of creating a new audio stream for
Riva:

```python
input_stream = AudioStream(maxsize=1000)
# Send bytes into the stream
for chunk in audio_chunks:
    await input_stream.aput(chunk)
input_stream.close()
```

The following code is an example of how to execute the chain with
RivaASR and RivaTTS

```python
output_stream = asyncio.Queue()
while not input_stream.complete:
    async for chunk in chain.astream(input_stream):
        await output_stream.put(chunk)  # Queue.put is a coroutine
```

Everything should be async safe and thread safe. Audio data can be put
into the input stream while the chain is running without interruptions.

---------

Co-authored-by: Hayden Wolff <hwolff@nvidia.com>
Co-authored-by: Hayden Wolff <hwolff@Haydens-Laptop.local>
Co-authored-by: Hayden Wolff <haydenwolff99@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-05 19:50:50 -08:00
Jan de Boer
2d8015554c docs: Link to Brave Website added (#16958)
**Description:** Link to the Brave Website added to the
`brave-search.ipynb` notebook.
This notebook is shown in the docs as an example for the brave tool.

**Issue:** There was no reference on where / how to get an API key
 
**Dependencies:** none
 
**Twitter handle:** not for this one :)
2024-02-05 18:29:16 -08:00
os1ma
fd88e0f800 docs: update StreamlitCallbackHandler example (#16970)
- **Description:** docs: update StreamlitCallbackHandler example.
  - **Issue:** None
  - **Dependencies:** None

I have updated the example for StreamlitCallbackHandler in the
documentation below:
https://python.langchain.com/docs/integrations/callbacks/streamlit

Previously, the example used `initialize_agent`, which has been
deprecated, so I've updated it to use `create_react_agent` instead. Many
LangChain users are likely searching for examples combining
`create_react_agent` or `openai_tools_agent_chain` with
StreamlitCallbackHandler. I'm sure this update will be really helpful
for them!

Unfortunately, writing unit tests for this example is difficult, so I
have not written any tests. I have run this code in a standalone Python
script file and ensured it runs correctly.
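
The updated pattern looks roughly like this (assuming an `agent_executor` already built via `create_react_agent`):

```python
import streamlit as st

from langchain_community.callbacks import StreamlitCallbackHandler

# `agent_executor` is assumed to be an AgentExecutor built with
# create_react_agent; it is not defined in this sketch.
if prompt := st.chat_input():
    st.chat_message("user").write(prompt)
    with st.chat_message("assistant"):
        st_callback = StreamlitCallbackHandler(st.container())
        response = agent_executor.invoke(
            {"input": prompt}, {"callbacks": [st_callback]}
        )
        st.write(response["output"])
```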
2024-02-05 18:20:59 -08:00
Marc Mahe
f08a9139d2 docs: update mistral docs for version 0.1+ (#17011)
**Description:**
Updated integration page for mistralai.
2024-02-05 18:03:12 -08:00
François Paupier
929f071513 community[patch]: Fix error in LlamaCpp community LLM with Configurable Fields, 'grammar' custom type not available (#16995)
- **Description:** Ensure the `LlamaGrammar` custom type is always
available when instantiating a `LlamaCpp` LLM
  - **Issue:** #16994 
  - **Dependencies:** None
  - **Twitter handle:** @fpaupier

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-05 17:56:58 -08:00
Leonid Ganeline
563f325034 experimental[patch]: fixed import in experimental (#17078) 2024-02-05 17:47:13 -08:00
Ikko Eltociear Ashimine
5f5f5acbc5 docs: fix typo in dspy.ipynb (#16996)
langugage -> language
2024-02-05 17:31:06 -08:00
Eugene Yurtsev
fbab8baac5 core[patch]: Add astream events config test (#17055)
Verify that astream events propagates config correctly

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-05 17:24:58 -08:00
Eugene Yurtsev
609ea019b2 docs: Update streaming documentation (#17066)
Updating streaming documentation following fix of JSON parser for
streaming json.
2024-02-05 17:24:46 -08:00
Erick Friis
64785822dc templates: bump (#17074) 2024-02-05 17:12:12 -08:00
Scott Nath
10bd901139 infra: add integration_tests and coverage to MAKEFILE (#17053)
- **Description:** update the community Makefile
    - adds `integration_tests`
    - adds `coverage`

- **Issue:**
    - moving out of https://github.com/langchain-ai/langchain/pull/17014
- **Dependencies:** n/a
- **Twitter handle:** @scottnath
- **Mastodon handle:** scottnath@mastodon.social

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-05 16:39:55 -08:00
Giulio Zani
9f0b63dba0 experimental[patch]: Fixes issue #17060 (#17062)
As described in issue #17060, when the text has only one
sentence, the following function fails. Checking for that and adding an
early-return case fixed the issue.

```python
    def split_text(self, text: str) -> List[str]:
        """Split text into multiple components."""
        # Splitting the essay on '.', '?', and '!'
        single_sentences_list = re.split(r"(?<=[.?!])\s+", text)
        sentences = [
            {"sentence": x, "index": i} for i, x in enumerate(single_sentences_list)
        ]
        sentences = combine_sentences(sentences)
        embeddings = self.embeddings.embed_documents(
            [x["combined_sentence"] for x in sentences]
        )
        for i, sentence in enumerate(sentences):
            sentence["combined_sentence_embedding"] = embeddings[i]
        distances, sentences = calculate_cosine_distances(sentences)
        start_index = 0

        # Create a list to hold the grouped sentences
        chunks = []
        breakpoint_percentile_threshold = 95
        breakpoint_distance_threshold = np.percentile(
            distances, breakpoint_percentile_threshold
        )  # If you want more chunks, lower the percentile cutoff

        indices_above_thresh = [
            i for i, x in enumerate(distances) if x > breakpoint_distance_threshold
        ]  # The indices of those breakpoints on your list

        # Iterate through the breakpoints to slice the sentences
        for index in indices_above_thresh:
            # The end index is the current breakpoint
            end_index = index

            # Slice the sentence_dicts from the current start index to the end index
            group = sentences[start_index : end_index + 1]
            combined_text = " ".join([d["sentence"] for d in group])
            chunks.append(combined_text)

            # Update the start index for the next group
            start_index = index + 1

        # The last group, if any sentences remain
        if start_index < len(sentences):
            combined_text = " ".join([d["sentence"] for d in sentences[start_index:]])
            chunks.append(combined_text)
        return chunks
```

Co-authored-by: Giulio Zani <salamanderxing@Giulios-MBP.homenet.telecomitalia.it>
2024-02-05 16:18:57 -08:00
Jimmy Moore
912210ac19 core[patch]: fix _sql_record_manager mypy for #17048 (#17073)
- **Description:** Add type annotations for the relevant session
and query objects to resolve mypy errors when `# type: ignore` comments
are removed.
  - **Issue:** #17048
  - **Dependencies:** None,
  - **Twitter handle:** [clesiemo3](https://twitter.com/clesiemo3)
 
I attempted to solve the `UpsertionRecord` ignore, but it would require
adding a deprecated plugin or moving completely to SQLAlchemy 2.0+, from
my understanding. I'm assuming this is not something desired at this
point in time.
2024-02-05 16:18:40 -08:00
William FH
3d5e988c55 Add prompt metadata + tags (#17054) 2024-02-05 16:17:31 -08:00
Bagatur
d8f41d0521 docs: add youtube link (#17065) 2024-02-05 16:12:56 -08:00
Bagatur
6e2ed9671f infra: fix breebs test lint (#17075) 2024-02-05 16:09:48 -08:00
T Cramer
cf01fc3790 docs: update parse_partial_json source info (#17036)
- **Description:** Update the source link following the recent license
update in the open-interpreter project
  - **Issue:** N/A
  - **Dependencies:** None
2024-02-05 15:54:34 -08:00
Harrison Chase
83fbf0e11a docs: add structured tools howto to agents (#15772)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-05 15:53:01 -08:00
Alex Boury
334b6ebdf3 community[minor]: Breebs docs retriever (#16578)
- **Description:** Implementation of the Breebs retriever, with integration
tests in
libs/community/tests/integration_tests/retrievers/test_breebs.py and
documentation (notebook) in
docs/docs/integrations/retrievers/breebs.ipynb.
  - **Dependencies:** None
2024-02-05 15:51:08 -08:00
Nova Kwok
eb7b05885f docs: Fix typo in quickstart.ipynb (#16859)
- **Description:** "load HTML **form** web URLs" should be "load HTML
**from** web URLs"? 🤔
  - **Issue:** Typo
  - **Dependencies:** Nope
  - **Twitter handle:** n0vad3v
2024-02-05 15:50:11 -08:00
Shorthills AI
cf0b29b6d2 docs: fixing a minor grammatical mistake (#16931) 2024-02-05 15:49:47 -08:00
Shivani Modi
fcb875629d docs: Updating documentation for Konko provider (#16953)
- **Description:** A small update to the Konko provider documentation.

---------

Co-authored-by: Shivani Modi <shivanimodi@Shivanis-MacBook-Pro.local>
2024-02-05 15:49:13 -08:00
Benjamin Muskalla
973ba0d84b docs: Fix Copilot name (#16956)
The official name is "GitHub Copilot"
2024-02-05 15:48:47 -08:00
IMRAN KHAN
4b17699818 docs: add 2 more tutorials to the list in youtube.mdx (#16998)
- **Description:** add 2 more tutorials to the list in youtube.mdx, 
  - **Twitter handle:** EhThing
2024-02-05 15:48:34 -08:00
Serena Ruan
9b279ac127 community[patch]: MLflow callback update (#16687)
Signed-off-by: Serena Ruan <serena.rxy@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-05 15:46:46 -08:00
Mohammad Mohtashim
3c4b24b69a community[patch]: Fix the _call of HuggingFaceHub (#16891)
Fixed the following identified issue: #16849

@baskaryan
2024-02-05 15:34:42 -08:00
Tyler Titsworth
304f3f5fc1 community[patch]: Add Progress bar to HuggingFaceEmbeddings (#16758)
- **Description:** Adds a parameter to HuggingFaceEmbeddings
called `show_progress` that enables a `tqdm` progress bar when set.
It has no effect if `multi_process = True`.
  - **Issue:** n/a
  - **Dependencies:** n/a
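
Usage sketch (model name is just an example):

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2",
    show_progress=True,  # tqdm bar while embedding; no effect with multi_process=True
)
vectors = embeddings.embed_documents(["first document", "second document"])
```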
2024-02-05 14:33:34 -08:00
Supreet Takkar
ae33979813 community[patch]: Allow adding ARNs as model_id to support Amazon Bedrock custom models (#16800)
- **Description:** Adds an additional class variable to `BedrockBase`
called `provider` that allows sending a model provider such as amazon,
cohere, ai21, etc.
Up until now, the model provider is extracted from the `model_id` using
the first part before the `.`, such as `amazon` for
`amazon.titan-text-express-v1` (see [supported list of Bedrock model IDs
here](https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html)).
But for custom Bedrock models where the ARN of the provisioned
throughput must be supplied, the `model_id` is like
`arn:aws:bedrock:...` so the `model_id` cannot be extracted from this. A
model `provider` is required by the LangChain Bedrock class to perform
model-based processing. To allow the same processing to be performed for
custom-models of a specific base model type, passing this `provider`
argument can help solve the issues.
The alternative considered here was the use of
`provider.arn:aws:bedrock:...` which then requires ARN to be extracted
and passed separately when invoking the model. The proposed solution
here is simpler and also does not cause issues for current models
already using the Bedrock class.
  - **Issue:** N/A
  - **Dependencies:** N/A
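
With the new argument, a provisioned-throughput model can be used roughly like this (the ARN below is a made-up placeholder):

```python
from langchain_community.llms import Bedrock

llm = Bedrock(
    # The provider can't be parsed from an ARN-style model_id ...
    model_id="arn:aws:bedrock:us-east-1:123456789012:provisioned-model/abc123",
    # ... so it is passed explicitly instead.
    provider="cohere",
)
```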

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
2024-02-05 14:28:03 -08:00
T Cramer
e022bfaa7d langchain: add partial parsing support to JsonOutputToolsParser (#17035)
- **Description:** Add partial parsing support to JsonOutputToolsParser
- **Issue:**
[16736](https://github.com/langchain-ai/langchain/issues/16736)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-05 14:18:30 -08:00
calvinweb
dcf973c22c Langchain: json_chat don't need stop sequenes (#16335)
This is a PR about #16334.
Stop sequences aren't meaningful in `json_chat` because it relies on JSON
to work, not completions.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-05 14:18:16 -08:00
Bagatur
66e45e8ab7 community[patch]: chat model mypy fixes (#17061)
Related to #17048
2024-02-05 13:42:59 -08:00
Bagatur
d93de71d08 community[patch]: chat message history mypy fixes (#17059)
Related to #17048
2024-02-05 13:13:25 -08:00
Bagatur
af5ae24af2 community[patch]: callbacks mypy fixes (#17058)
Related to #17048
2024-02-05 12:37:27 -08:00
Vadim Kudlay
75b6fa1134 nvidia-ai-endpoints[patch]: Support User-Agent metadata and minor fixes. (#16942)
- **Description:** Several meta/usability updates, including User-Agent.
  - **Issue:** 
- User-Agent metadata for tracking connector engagement. @milesial
please check and advise.
- Better error messages. Tries harder to find a request ID. @milesial
requested.
- Client-side image resizing for multimodal models. Hope to upgrade to
Assets API solution in around a month.
- `client.payload_fn` allows you to modify payload before network
request. Use-case shown in doc notebook for kosmos_2.
- `client.last_inputs` put back in to allow for advanced
support/debugging.
  - **Dependencies:** 
- Attempts to pull in PIL for image resizing. If not installed, prints
out "please install" message, warns it might fail, and then tries
without resizing. We are waiting on a more permanent solution.

For LC viz: @hinthornw 
For NV viz: @fciannella @milesial @vinaybagade

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-05 12:24:53 -08:00
Nuno Campos
ae56fd020a Fix condition on custom root type in runnable history (#17017)
2024-02-05 12:15:11 -08:00
Nuno Campos
f0ffebb944 Shield callback methods from cancellation: Fix interrupted runs marked as pending forever (#17010)
2024-02-05 12:09:47 -08:00
Bagatur
e7b3290d30 community[patch]: fix agent_toolkits mypy (#17050)
Related to #17048
2024-02-05 11:56:24 -08:00
Erick Friis
6ffd5b15bc pinecone: init pkg (#16556)
2024-02-05 11:55:01 -08:00
Erick Friis
1183769cf7 template: tool-retrieval-fireworks (#17052)
- Initial commit oss-tool-retrieval-agent
- README update
- lint
- lock
- format imports
- Rename to retrieval-agent-fireworks
- cr


---------

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
2024-02-05 11:50:17 -08:00
Harrison Chase
4eda647fdd infra: add -p to mkdir in lint steps (#17013)
Previously, if this did not find a mypy cache it wouldn't run.

This makes it always run.

Also adds mypy ignore comments for existing uncaught issues to unblock other PRs.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-02-05 11:22:06 -08:00
Erick Friis
db6af21395 docs: exa contents (#16555) 2024-02-05 11:15:06 -08:00
Eugene Yurtsev
fb245451d2 core[patch]: Add langsmith to printed sys information (#16899) 2024-02-05 11:13:30 -08:00
Mikhail Khludnev
2145636f1d Nvidia trt model name for stop_stream() (#16997)
Just removing some legacy leftovers.
2024-02-05 10:45:06 -08:00
Christophe Bornet
2ef69fe11b Add async methods to BaseChatMessageHistory and BaseMemory (#16728)
Adds:
   * async methods to BaseChatMessageHistory
   * async methods to ChatMessageHistory
   * async methods to BaseMemory
   * async methods to BaseChatMemory
   * async methods to ConversationBufferMemory
   * tests of ConversationBufferMemory's async methods

  **Twitter handle:** cbornet_
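
A small sketch of the new async surface (method names as added in this PR):

```python
import asyncio

from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage


async def main() -> None:
    history = ChatMessageHistory()
    await history.aadd_messages(
        [HumanMessage(content="hi!"), AIMessage(content="hello")]
    )
    print(await history.aget_messages())


asyncio.run(main())
```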
2024-02-05 13:20:28 -05:00
Ryan Kraus
b3c3b58f2c core[patch]: Fixed bug in dict to message conversion. (#17023)
- **Description**: We discovered a bug converting dictionaries to
messages where the ChatMessageChunk message type isn't handled. This PR
adds support for that message type.
- **Issue**: #17022 
- **Dependencies**: None
- **Twitter handle**: None
2024-02-05 10:13:25 -08:00
Nicolas Grenié
54fcd476bb docs: Update ollama examples with new community libraries (#17007)
- **Description:** Updating the one-line code sample for Ollama with the new
**langchain_community** package
  - **Issue:**
  - **Dependencies:** none
  - **Twitter handle:**  @picsoung
2024-02-04 15:13:29 -08:00
Killinsun - Ryota Takeuchi
bcfce146d8 community[patch]: Correct the calling to collection_name in qdrant (#16920)
## Description

In #16608, the call to `collection_name` was wrong.
I made a fix for it.
Sorry for the inconvenience!

## Issue

https://github.com/langchain-ai/langchain/issues/16962

## Dependencies

N/A




---------

Co-authored-by: Kumar Shivendu <kshivendu1@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-04 10:45:35 -08:00
Erick Friis
849051102a google-genai[patch]: fix new core typing (#16988) 2024-02-03 17:45:44 -08:00
Bagatur
35446c814e openai[patch]: rm tiktoken model warning (#16964) 2024-02-03 16:36:57 -08:00
ccurme
0826d87ecd langchain_mistralai[patch]: Invoke callback prior to yielding token (#16986)
- **Description:** Invoke callback prior to yielding token in stream and
astream methods for ChatMistralAI.
- **Issue:** https://github.com/langchain-ai/langchain/issues/16913
2024-02-03 16:30:50 -08:00
Bagatur
267e71606e docs: Update README.md (#16966) 2024-02-02 16:50:58 -08:00
Erick Friis
2b7e47a668 infra: install integration deps for test linting (#16963)
2024-02-02 15:59:10 -08:00
Erick Friis
afdd636999 docs: partner packages (#16960) 2024-02-02 15:12:21 -08:00
Erick Friis
06660bc78c core[patch]: handle some optional cases in tools (#16954)
The primary problem in pydantic still exists: `Optional[str]` gets
turned into `string` in the jsonschema `.schema()`.

Also fixes the `SchemaSchema` naming issue

---------

Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
2024-02-02 15:05:54 -08:00
Mohammad Mohtashim
f8943e8739 core[patch]: Add doc-string to RunnableEach (#16892)
Add doc-string to RunnableEach
---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-02-02 14:11:09 -08:00
Ashley Xu
66adb95284 docs: BigQuery Vector Search went public review and updated docs (#16896)
Update the docs for BigQuery Vector Search
2024-02-02 10:26:44 -08:00
Massimiliano Pronesti
71f9ea33b6 docs: add quantization to vllm and update API (#16950)
- **Description:** Update vLLM docs to include instructions on how to
use quantized models, as well as to replace the deprecated methods.
2024-02-02 10:24:49 -08:00
Bagatur
2a510c71a0 core[patch]: doc init positional args (#16854) 2024-02-02 10:24:16 -08:00
Bagatur
d80c612c92 core[patch]: Message content as positional arg (#16921) 2024-02-02 10:24:02 -08:00
Bagatur
c29e9b6412 core[patch]: fix chat prompt partial messages placeholder var (#16918) 2024-02-02 10:23:37 -08:00
Radhakrishnan
3b0fa9079d docs: Updated integration doc for aleph alpha (#16844)
Description: Updated doc for llm/aleph_alpha with new functions: invoke.
Changed structure of the document to match the required one.
Issue: https://github.com/langchain-ai/langchain/issues/15664
Dependencies: None
Twitter handle: None

---------

Co-authored-by: Radhakrishnan Iyer <radhakrishnan.iyer@ibm.com>
2024-02-02 09:28:06 -08:00
hmasdev
cc17334473 core[minor]: add validation error handler to BaseTool (#14007)
- **Description:** add a ValidationError handler as a field of
[`BaseTool`](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools.py#L101)
and add unit tests for the code change.
- **Issue:** #12721 #13662
- **Dependencies:** None
- **Tag maintainer:** 
- **Twitter handle:** @hmdev3
- **NOTE:**
  - I'm wondering if a documentation update is required.
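
A sketch of the new field in use (the handler can reportedly be a bool, a string, or a callable):

```python
from langchain_core.tools import StructuredTool


def multiply(a: int, b: int) -> int:
    return a * b


tool = StructuredTool.from_function(
    multiply,
    handle_validation_error="Invalid arguments; check the tool's schema.",
)
# A bad input now returns the handler string instead of raising.
print(tool.run({"a": "not-a-number", "b": 2}))
```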

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-01 20:09:19 -08:00
William FH
bdacfafa05 core[patch]: Remove deep copying of run prior to submitting it to LangChain Tracing (#16904) 2024-02-01 18:46:05 -08:00
William FH
e02efd513f core[patch]: Hide aliases when serializing (#16888)
Currently, if you dump an object initialized with an alias, we'll still
dump the secret values since they're retained in the kwargs
2024-02-01 17:55:37 -08:00
William FH
131c043864 Fix loading of ImagePromptTemplate (#16868)
We didn't override the namespace of the ImagePromptTemplate, so it is
listed as being in langchain.schema

This updates the mapping to let the loader deserialize.

Alternatively, we could make a slight breaking change and update the
namespace of the ImagePromptTemplate, since we haven't broadly
publicized/documented it yet.
2024-02-01 17:54:04 -08:00
Erick Friis
6fc2835255 docs: fix broken links (#16855) 2024-02-01 17:29:38 -08:00
Eugene Yurtsev
a265878d71 langchain_openai[patch]: Invoke callback prior to yielding token (#16909)
All models should be calling the callback for new token prior to
yielding the token.

Not doing this can cause callbacks for downstream steps to be called
prior to the callback for the new token, causing issues in the
astream_events API and other things that depend on callback ordering
being correct.

We need to make this change for all chat models.
2024-02-01 16:43:10 -08:00
Erick Friis
b1a847366c community: revert SQL Stores (#16912)
This reverts commit cfc225ecb3.


https://github.com/langchain-ai/langchain/pull/15909#issuecomment-1922418097

These will have existed in langchain-community 0.0.16 and 0.0.17.
2024-02-01 16:37:40 -08:00
akira wu
f7c709b40e doc: fix typo in message_history.ipynb (#16877)
- **Description:** just fixed a small typo in the documentation, in the
`expression_language/how_to/message_history` section
[here](https://python.langchain.com/docs/expression_language/how_to/message_history)
2024-02-01 13:30:29 -08:00
Leonid Ganeline
c2ca6612fe refactor langchain.prompts.example_selector (#15369)
The `langchain.prompts.example_selector` [still holds several
artifacts](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.prompts)
that belong to `community`. If they moved to
`langchain_community.example_selectors`, the `langchain.prompts`
namespace would be effectively removed, which is great.
- Moved a class and a function to `langchain_community`

Note:
- Previously, the `langchain.prompts.example_selector` artifacts were
moved into `langchain_core.example_selectors`. Note the flattened
namespace (`.prompts` was removed)!

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-01 12:05:57 -08:00
Erick Friis
13a6756067 infra: ci naming 2 (#16893) 2024-02-01 11:39:00 -08:00
Lance Martin
b1e7130d8a Minor update to Nomic cookbook (#16886)
2024-02-01 11:28:58 -08:00
Shorthills AI
0bca0f4c24 Docs: Fixed grammatical mistake (#16858)
Co-authored-by: Vishal <141389263+VishalYadavShorthillsAI@users.noreply.github.com>
Co-authored-by: Sanskar Tanwar <142409040+SanskarTanwarShorthillsAI@users.noreply.github.com>
Co-authored-by: UpneetShorthillsAI <144228282+UpneetShorthillsAI@users.noreply.github.com>
Co-authored-by: HarshGuptaShorthillsAI <144897987+HarshGuptaShorthillsAI@users.noreply.github.com>
Co-authored-by: AdityaKalraShorthillsAI <143726711+AdityaKalraShorthillsAI@users.noreply.github.com>
Co-authored-by: SakshiShorthillsAI <144228183+SakshiShorthillsAI@users.noreply.github.com>
Co-authored-by: AashiGuptaShorthillsAI <144897730+AashiGuptaShorthillsAI@users.noreply.github.com>
Co-authored-by: ShamshadAhmedShorthillsAI <144897733+ShamshadAhmedShorthillsAI@users.noreply.github.com>
Co-authored-by: ManpreetShorthillsAI <142380984+ManpreetShorthillsAI@users.noreply.github.com>
Co-authored-by: Aayush <142384656+AayushShorthillsAI@users.noreply.github.com>
Co-authored-by: BajrangBishnoiShorthillsAi <148060486+BajrangBishnoiShorthillsAi@users.noreply.github.com>
2024-02-01 11:28:15 -08:00
Erick Friis
5b3fc86cfd infra: ci naming (#16890)
Make it clearer how to run equivalent commands locally

Not a perfect 1:1, but will help people get started

![Screenshot 2024-02-01 at 10 53
34 AM](https://github.com/langchain-ai/langchain/assets/9557659/da271aaf-d5db-41e3-9379-cb1d8a0232c5)
2024-02-01 11:09:37 -08:00
Qihui Xie
c5b01ac621 community[patch]: support LIKE comparator (full text match) in Qdrant (#12769)
**Description:** 
Support [Qdrant full text match
filtering](https://qdrant.tech/documentation/concepts/filtering/#full-text-match)
by adding Comparator.LIKE to QdrantTranslator.
2024-02-01 11:03:25 -08:00
Christophe Bornet
9d458d089a community: Factorize AstraDB components constructors (#16779)
* Adds `AstraDBEnvironment` class and use it in `AstraDBLoader`,
`AstraDBCache`, `AstraDBSemanticCache`, `AstraDBBaseStore` and
`AstraDBChatMessageHistory`
* Create an `AsyncAstraDB` if we only have an `AstraDB` and vice-versa
so:
  * we always have an instance of `AstraDB`
* we always have an instance of `AsyncAstraDB` for recent versions of
astrapy
* Create collection if not exists in `AstraDBBaseStore`
* Some typing improvements

Note: `AstraDB` `VectorStore` not using `AstraDBEnvironment` at the
moment. This will be done after the `langchain-astradb` package is out.
2024-02-01 10:51:07 -08:00
Harel Gal
93366861c7 docs: Indicated Guardrails for Amazon Bedrock preview status (#16769)
Added notification about limited preview status of Guardrails for Amazon
Bedrock feature to code example.

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
2024-02-01 10:41:48 -08:00
Christophe Bornet
78a1af4848 langchain[patch]: Add async methods to MultiVectorRetriever (#16878)
Adds async support to multi vector retriever
2024-02-01 10:33:06 -08:00
Bagatur
7d03d8f586 docs: fix docstring examples (#16889) 2024-02-01 10:17:26 -08:00
Bagatur
c2d09fb151 infra: bump exp min test reqs (#16884) 2024-02-01 08:35:21 -08:00
Bagatur
65ba5c220b experimental[patch]: Release 0.0.50 (#16883) 2024-02-01 08:27:39 -08:00
Bagatur
9e7d9f9390 infra: bump langchain min test reqs (#16882) 2024-02-01 08:16:30 -08:00
Bagatur
db442c635b langchain[patch]: Release 0.1.5 (#16881) 2024-02-01 08:10:29 -08:00
Bagatur
2b4abed25c commmunity[patch]: Release 0.0.17 (#16871) 2024-02-01 07:33:34 -08:00
Bagatur
bb73251146 core[patch]: Release 0.1.18 (#16870) 2024-02-01 07:33:15 -08:00
Christophe Bornet
a0ec045495 Add async methods to BaseStore (#16669)
- **Description:**

The BaseStore methods are currently blocking. Some implementations
(AstraDBStore, RedisStore) would benefit from having async methods.
Also once we have async methods for BaseStore, we can implement the
async `aembed_documents` in CacheBackedEmbeddings to cache the
embeddings asynchronously.

* adds async methods amget, amset, amdelete and ayield_keys to
BaseStore
  * implements the async methods for InMemoryStore
  * adds tests for InMemoryStore async methods

- **Twitter handle:** cbornet_
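
A usage sketch of the new async surface, assuming `InMemoryStore` is importable from `langchain_core.stores` (the import path may vary by version):

```python
import asyncio

from langchain_core.stores import InMemoryStore

async def main() -> None:
    store = InMemoryStore()
    await store.amset([("k1", "v1"), ("k2", "v2")])  # bulk async set
    print(await store.amget(["k1", "k2"]))           # ['v1', 'v2']
    await store.amdelete(["k1"])                     # async delete
    print([k async for k in store.ayield_keys()])    # ['k2']

asyncio.run(main())
```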
2024-01-31 17:10:47 -08:00
Erick Friis
17e886388b nomic: init pkg (#16853)
Co-authored-by: Lance Martin <lance@langchain.dev>
2024-01-31 16:46:35 -08:00
Eugene Yurtsev
2e5949b6f8 core(minor): Add bulk add messages to BaseChatMessageHistory interface (#15709)
* Add bulk add_messages method to the interface.
* Update documentation for add_ai_message and add_human_message to
denote them as being marked for deprecation. We should stop using them,
as they create more incorrect (inefficient) ways of doing things; see the
sketch below.
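
A toy sketch of the bulk interface (a stand-in class, not the real one):

```python
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage

class SketchHistory:
    """Stand-in for a BaseChatMessageHistory implementation."""

    def __init__(self) -> None:
        self.messages: list[BaseMessage] = []

    def add_messages(self, messages: list[BaseMessage]) -> None:
        # One bulk call; a real backend could turn this into a single write.
        self.messages.extend(messages)

history = SketchHistory()
history.add_messages([HumanMessage(content="hi"), AIMessage(content="hello!")])
print([m.content for m in history.messages])  # ['hi', 'hello!']
```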
2024-01-31 11:59:39 -08:00
Christophe Bornet
af8c5c185b langchain[minor],community[minor]: Add async methods in BaseLoader (#16634)
Adds:
* methods `aload()` and `alazy_load()` to interface `BaseLoader`
* implementation for class `MergedDataLoader `
* support for class `BaseLoader` in async function `aindex()` with unit
tests

Note: this is compatible with existing `aload()` methods that some
loaders already had.
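
A usage sketch, assuming a `BaseLoader` subclass picks up the inherited async methods (the file path is illustrative):

```python
import asyncio

from langchain_community.document_loaders import TextLoader

async def main() -> None:
    loader = TextLoader("notes.txt")  # hypothetical local file
    docs = await loader.aload()            # async counterpart of load()
    print(len(docs))
    async for doc in loader.alazy_load():  # stream documents one at a time
        print(doc.metadata)

asyncio.run(main())
```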

**Twitter handle:** @cbornet_

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-01-31 11:08:11 -08:00
Erick Friis
c37ca45825 nvidia-trt: remove tritonclient all extra dep (#16749) 2024-01-30 16:06:19 -08:00
Erick Friis
36c0392dbe infra: remove unnecessary tests on partner packages (#16808) 2024-01-30 16:01:47 -08:00
Erick Friis
bb3b6bde33 openai[minor]: change to secretstr (#16803) 2024-01-30 15:49:56 -08:00
Raphael
bf9068516e community[minor]: add the ability to load existing transcripts from AssemblyAI by their id. (#16051)
- **Description:** the existing AssemblyAI integration allows passing a path
or a URL to transcribe an audio file and turn it into LangChain Documents;
this PR makes it possible to fetch existing transcripts by their transcript
ID and turn them into Documents.
  - **Issue:** not related to an existing issue
  - **Dependencies:** requests

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-30 13:47:45 -08:00
Bagatur
daf820c77b community[patch]: undo create_sql_agent breaking (#16797) 2024-01-30 10:00:52 -08:00
Eugene Yurtsev
ef2bd745cb docs: Update doc-string in base callback managers (#15885)
Update doc-strings with a comment about on_llm_start vs.
on_chat_model_start.
2024-01-30 09:51:45 -08:00
William FH
881dc28d2c Fix Dep Recommendation (#16793)
Tools are different than functions
2024-01-30 09:40:28 -08:00
Bagatur
b0347f3e2b docs: add csv use case (#16756) 2024-01-30 09:39:46 -08:00
Alexander Conway
4acd2654a3 Report which file was errored on in DirectoryLoader (#16790)
The current implementation leaves it up to the particular file loader
implementation to report the file on which an error was encountered - in
my case pdfminer was simply saying it could not parse a file as a PDF,
but I didn't know which of my hundreds of files it was failing on.

No reason not to log the particular item on which an error was
encountered, and it should be an immense debugging assistant.

2024-01-30 09:14:58 -08:00
Erick Friis
a372b23675 robocorp: release 0.0.3 (#16789) 2024-01-30 07:15:25 -08:00
Rihards Gravis
442fa52b30 [partners]: langchain-robocorp ease dependency version (#16765) 2024-01-30 08:13:54 -07:00
Jacob Lee
c6724a39f4 Fix rephrase step in chatbot use case (#16763) 2024-01-29 23:25:25 -08:00
Bob Lin
546b757303 community: Add ChatGLM3 (#15265)
Add [ChatGLM3](https://github.com/THUDM/ChatGLM3) and updated
[chatglm.ipynb](https://python.langchain.com/docs/integrations/llms/chatglm)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-29 20:30:52 -08:00
Marina Pliusnina
a1ce7ab672 adding parameter for changing the language in SpacyEmbeddings (#15743)
Description: Added a parameter that makes it possible to change the language
model in SpacyEmbeddings. The default value is unchanged: "en_core_web_sm",
so code which previously did not specify this parameter is unaffected, but
the model is no longer hard-coded and is easy to change in case you want to
use it with other languages or models.

Issue: At Barcelona Supercomputing Center in Aina project
(https://github.com/projecte-aina), a project for Catalan Language
Models and Resources, we would like to use Langchain for one of our
current projects, and we would like to note that Langchain, while
being a very powerful and useful open-source tool, is pretty much
focused on the English language. We would like to contribute to make it a
bit more adaptable for use with other languages.

Dependencies: This change requires the Spacy library and a language
model, specified in the model parameter.
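
A usage sketch, assuming the new parameter is exposed as `model_name` (requires spaCy and the chosen model to be installed):

```python
from langchain_community.embeddings import SpacyEmbeddings

# e.g. a Catalan model instead of the default "en_core_web_sm"
embedder = SpacyEmbeddings(model_name="ca_core_news_sm")
vectors = embedder.embed_documents(["Bon dia!"])
```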

Tag maintainer: @dev2049

Twitter handle: @projecte_aina

---------

Co-authored-by: Marina Pliusnina <marina.pliusnina@bsc.es>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-29 20:30:34 -08:00
Christophe Bornet
744070ee85 Add async methods for the AstraDB VectorStore (#16391)
- **Description**: fully async versions are available for astrapy 0.7+.
For older astrapy versions or if the user provides a sync client without
an async one, the async methods will call the sync ones wrapped in
`run_in_executor`
  - **Twitter handle:** cbornet_
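
A sketch of the fallback pattern described above: when only a sync client is available, the async method delegates to the sync one on a worker thread via `run_in_executor` (the class here is a toy stand-in).

```python
import asyncio

from langchain_core.runnables.config import run_in_executor

class SketchStore:
    """Toy stand-in for a component with only a sync implementation."""

    def add_texts(self, texts: list[str]) -> list[str]:
        return [f"id-{i}" for i, _ in enumerate(texts)]

    async def aadd_texts(self, texts: list[str]) -> list[str]:
        # Keep the event loop free by running the sync method in an executor.
        return await run_in_executor(None, self.add_texts, texts)

print(asyncio.run(SketchStore().aadd_texts(["a", "b"])))  # ['id-0', 'id-1']
```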
2024-01-29 20:22:25 -08:00
baichuan-assistant
f8f2649f12 community: Add Baichuan LLM to community (#16724)
- **Description:** Add Baichuan LLM to integration/llm, also updated
related docs.

Co-authored-by: BaiChuanHelper <wintergyc@WinterGYCs-MacBook-Pro.local>
2024-01-29 20:08:24 -08:00
thiswillbeyourgithub
1d082359ee community: add support for callable filters in FAISS (#16190)
- **Description:**
Filtering in a FAISS vectorstore is very inflexible and doesn't allow many
use cases. I think supporting callables like this enables a lot: regular
expressions, conditions on multiple keys, etc.; see the sketch below.
**Note:** I had to manually alter a test. I don't understand whether it was
faulty to begin with or whether there is something funky going on.
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None

Signed-off-by: thiswillbeyourgithub <26625900+thiswillbeyourgithub@users.noreply.github.com>
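
A runnable sketch (requires `faiss-cpu`), assuming `filter` now accepts a callable over the document metadata as this PR adds; `FakeEmbeddings` stands in for a real embedding model:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS

store = FAISS.from_texts(
    ["alpha", "beta", "gamma"],
    FakeEmbeddings(size=8),
    metadatas=[{"source": f"{n}.md"} for n in ("a", "b", "c")],
)
hits = store.similarity_search(
    "alpha",
    k=2,
    # Any callable condition works: regexes, multi-key checks, etc.
    filter=lambda md: md["source"] != "b.md",
)
print([d.metadata for d in hits])
```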
2024-01-29 20:05:56 -08:00
Yudhajit Sinha
1703fe2361 core[patch]: preserve inspect.iscoroutinefunction with @beta decorator (#16440)
Adjusted deprecate decorator to make sure decorated async functions are
still recognized as "coroutinefunction" by inspect

Addresses #16402
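
A minimal sketch of the pitfall being fixed (not the actual decorator): wrapping an async function in a sync wrapper makes `inspect.iscoroutinefunction` report `False`, so the decorator must branch and return an async wrapper for coroutine functions.

```python
import asyncio
import functools
import inspect

def beta(func):
    if asyncio.iscoroutinefunction(func):
        @functools.wraps(func)
        async def async_wrapper(*args, **kwargs):
            # a real decorator would emit the beta warning here
            return await func(*args, **kwargs)
        return async_wrapper

    @functools.wraps(func)
    def sync_wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return sync_wrapper

@beta
async def afetch() -> str:
    return "ok"

print(inspect.iscoroutinefunction(afetch))  # True, as the fix ensures
```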


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-29 20:01:11 -08:00
Killinsun - Ryota Takeuchi
52f4ad8216 community: Add new fields in metadata for qdrant vector store (#16608)
## Description

The PR is to return the ID and collection name from the qdrant client to
the metadata field in the `Document` class.

## Issue

The motivation is almost the same as
[11592](https://github.com/langchain-ai/langchain/issues/11592)

Returning the ID is useful for updating existing records in a vector store,
but we cannot know the IDs if we use some retrievers.

In order to avoid conflicts and breaking changes, the new metadata fields
have a `_` prefix

## Dependencies

N/A

## Twitter handle

@kill_in_sun

2024-01-29 19:59:54 -08:00
hulitaitai
32cad38ec6 <langchain_community\llms\chatglm.py>: <Correcting "history"> (#16729)
Use the real "history" provided by the original program instead of
putting "None" in the history.

- **Description:** I changed one line in the code to make it return the
"history" of the chat model.
- **Issue:** At the moment it returns only the answers of the chat
model. However, the chat model itself provides a more complete history
that includes the user's questions.
  - **Dependencies:** no dependencies required for this change
2024-01-29 19:50:31 -08:00
Jacob Lee
4a027e622f docs[patch]: Lower temperature in chatbot usecase notebooks for consistency (#16750)
CC @baskaryan
2024-01-29 17:27:13 -08:00
Jacob Lee
12d2b2ebcf docs[minor]: LCEL rewrite of chatbot use-case (#16414)
CC @baskaryan @hwchase17

TODO:
- [x] Draft of main quickstart
- [x] Index intro page
- [x] Add subpage guide for Memory management
- [x] Add subpage guide for Retrieval
- [x] Add subpage guide for Tool usage
- [x] Add LangSmith traces illustrating query transformation
2024-01-29 17:08:54 -08:00
Bassem Yacoube
85e93e05ed community[minor]: Update OctoAI LLM, Embedding and documentation (#16710)
This PR includes updates for OctoAI integrations:
- The LLM class was updated to fix a bug that occurs with multiple
sequential calls
- The Embedding class was updated to support the new GTE-Large endpoint
recently released on OctoAI
- The documentation Jupyter notebook was updated to reflect using the
new LLM SDK
Thank you!
2024-01-29 13:57:17 -08:00
Hank
6d6226d96d docs: Remove accidental extra ``` in QuickStart doc. (#16740)
Description: An extra set of triple-backticks in a sample code block in
the QuickStart doc was causing "\`\`\`shell" to appear in the shell
command that was being demonstrated. I just deleted the extra "```".
Issue: Didn't see one
Dependencies: None
2024-01-29 13:55:26 -08:00
Shay Ben Elazar
84ebfb5b9d openai[patch]: Added annotations support to azure openai (#13704)
- **Description:** Added Azure OpenAI Annotations (content filtering
results) to ChatResult

  - **Issue:** 13090

  - **Twitter handle:** ElazarShay

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-29 13:31:09 -08:00
Volodymyr Machula
32c5be8b73 community[minor]: Connery Tool and Toolkit (#14506)
## Summary

This PR implements the "Connery Action Tool" and "Connery Toolkit".
Using them, you can integrate Connery actions into your LangChain agents
and chains.

Connery is an open-source plugin infrastructure for AI.

With Connery, you can easily create a custom plugin with a set of
actions and seamlessly integrate them into your LangChain agents and
chains. Connery will handle the rest: runtime, authorization, secret
management, access management, audit logs, and other vital features.
Additionally, Connery and our community offer a wide range of
ready-to-use open-source plugins for your convenience.

Learn more about Connery:

- GitHub: https://github.com/connery-io/connery-platform
- Documentation: https://docs.connery.io
- Twitter: https://twitter.com/connery_io

## TODOs

- [x] API wrapper
   - [x] Integration tests
- [x] Connery Action Tool
   - [x] Docs
   - [x] Example
   - [x] Integration tests
- [x] Connery Toolkit
  - [x] Docs
  - [x] Example
- [x] Formatting (`make format`)
- [x] Linting (`make lint`)
- [x] Testing (`make test`)
2024-01-29 12:45:03 -08:00
Harrison Chase
8457c31c04 community[patch]: activeloop ai tql deprecation (#14634)
Co-authored-by: AdkSarsen <adilkhan@activeloop.ai>
2024-01-29 12:43:54 -08:00
Neli Hateva
c95facc293 langchain[minor], community[minor]: Implement Ontotext GraphDB QA Chain (#16019)
- **Description:** Implement Ontotext GraphDB QA Chain
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @OntotextGraphDB
2024-01-29 12:25:53 -08:00
chyroc
a08f9a7ff9 langchain[patch]: support OpenAIAssistantRunnable async (#15302)
fix https://github.com/langchain-ai/langchain/issues/15299

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-29 12:19:47 -08:00
Elliot
39eb00d304 community[patch]: Adapt more parameters related to MemorySearchPayload for the search method of ZepChatMessageHistory (#15441)
- **Description:** Adapts more parameters related to
MemorySearchPayload for the search method of ZepChatMessageHistory,
  - **Issue:** None,
  - **Dependencies:** None,
  - **Twitter handle:** None
2024-01-29 11:45:55 -08:00
Kirushikesh DB
47bd58dc11 docs: Added illustration of using RetryOutputParser with LLMChain (#16722)
**Description:**
Updated the retry.ipynb notebook, which illustrates RetryOutputParser in
LangChain. The notebook previously failed to explain how RetryOutputParser
can be combined with existing chains. This change adds some code to
illustrate the workflow of using RetryOutputParser with the user's chain;
see the sketch after the list below.

Changes:
1. Replaced RetryWithErrorOutputParser with RetryOutputParser, as the
markdown text says.
2. Added code at the end of the notebook to define a chain which passes
the LLM completions to the retry parser, and which can be customised for
user needs.
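
A sketch of that wiring, following the pattern added to the notebook; `llm` and `parser` are placeholders (a real LLM and a structured output parser) defined elsewhere:

```python
from langchain.output_parsers import RetryOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda, RunnableParallel

prompt = PromptTemplate.from_template("Answer the question: {question}")
retry_parser = RetryOutputParser.from_llm(parser=parser, llm=llm)

main_chain = RunnableParallel(
    completion=prompt | llm,  # the raw completion to be parsed
    prompt_value=prompt,      # the original prompt, needed for the retry
) | RunnableLambda(
    lambda x: retry_parser.parse_with_prompt(x["completion"], x["prompt_value"])
)
```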

**Issue:** 
Since RetryOutputParser/RetryWithErrorOutputParser does not implement
the parse function, it cannot be used with LLMChain directly like
[this](https://python.langchain.com/docs/expression_language/cookbook/prompt_llm_parser#prompttemplate-llm-outputparser).
This has raised various issues (#15133, #12175, #11719, still open); instead
of adding new features/code changes, it's best to explain how to integrate
LLMChain with retry parsers clearly, with an example in the corresponding
notebook.

Inspired by:
https://github.com/langchain-ai/langchain/issues/15133#issuecomment-1868972580

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-29 11:24:52 -08:00
Jael Gu
a1aa3a657c community[patch]: Milvus supports add & delete texts by ids (#16256)
# Description

To support [langchain
indexing](https://python.langchain.com/docs/modules/data_connection/indexing)
as requested by users, vectorstore Milvus needs to support:
- document addition by id (`add_documents` method with `ids` argument)
- delete by id (`delete` method with `ids` argument)

Example usage:

```python
from langchain.indexes import SQLRecordManager, index
from langchain.schema import Document
from langchain_community.vectorstores import Milvus
from langchain_openai import OpenAIEmbeddings

collection_name = "test_index"
embedding = OpenAIEmbeddings()
vectorstore = Milvus(embedding_function=embedding, collection_name=collection_name)

namespace = f"milvus/{collection_name}"
record_manager = SQLRecordManager(
    namespace, db_url="sqlite:///record_manager_cache.sql"
)
record_manager.create_schema()

doc1 = Document(page_content="kitty", metadata={"source": "kitty.txt"})
doc2 = Document(page_content="doggy", metadata={"source": "doggy.txt"})

index(
    [doc1, doc1, doc2],
    record_manager,
    vectorstore,
    cleanup="incremental",  # None, "incremental", or "full"
    source_id_key="source",
)
```

# Fix issues

Fix https://github.com/milvus-io/milvus/issues/30112

---------

Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-29 11:19:50 -08:00
Michard Hugo
e9d3527b79 community[patch]: Add missing async similarity_distance_threshold handling in RedisVectorStoreRetriever (#16359)
Add missing async similarity_distance_threshold handling in
RedisVectorStoreRetriever

- **Description:** added method `_aget_relevant_documents` to
`RedisVectorStoreRetriever` that overrides the parent method to add support
for `similarity_distance_threshold` in async mode (as in sync mode)
  - **Issue:** #16099
  - **Dependencies:** N/A
  - **Twitter handle:** N/A
2024-01-29 11:19:30 -08:00
Jarod Stewart
7c6a2a8384 templates: Ionic Shopping Assistant (#16648)
- **Description:** This is a template for creating shopping assistant
chat bots
- **Issue:** Example for creating a shopping assistant with OpenAI Tools
Agent
- **Dependencies:** Ionic
https://github.com/ioniccommerce/ionic_langchain
  - **Twitter handle:** @ioniccommerce

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-01-29 11:08:24 -08:00
Bagatur
7237dc67d4 core[patch]: Release 0.1.17 (#16737) 2024-01-29 11:02:29 -08:00
Anthony Bernabeu
2db79ab111 community[patch]: Implement TTL for DynamoDBChatMessageHistory (#15478)
- **Description:** Implement TTL for DynamoDBChatMessageHistory, 
  - **Issue:** see #15477,
  - **Dependencies:** N/A,
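
A hypothetical usage sketch; the `ttl` keyword below is illustrative (the exact parameter name is defined by this PR), and it assumes boto3 plus a DynamoDB table with TTL enabled:

```python
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(
    table_name="SessionTable",  # illustrative table/session names
    session_id="user-123",
    ttl=3600,                   # expire items one hour after the last write
)
history.add_user_message("hi")  # the stored item now carries a TTL attribute
```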

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
2024-01-29 10:22:46 -08:00
Massimiliano Pronesti
1bc8d9a943 experimental[patch]: missing resolution strategy in anonymization (#16653)
- **Description:** Presidio-based anonymizers are not working because
`_remove_conflicts_and_get_text_manipulation_data` was being called
without a conflict resolution strategy. This PR fixes this issue. In
addition, it removes some mutable default arguments (antipattern).
 
To reproduce the issue, just run the very first cell of this
[notebook](https://python.langchain.com/docs/guides/privacy/2/) from
langchain's documentation.

2024-01-29 09:56:16 -08:00
Abhinav
8e44363ec9 langchain_community: Update documentation for installing llama-cpp-python on windows (#16666)
**Description** : This PR updates the documentation for installing
llama-cpp-python on Windows.

- Updates install command to support pyproject.toml
- Makes CPU/GPU install instructions clearer
- Adds reinstall with GPU support command

**Issue**: Existing
[documentation](https://python.langchain.com/docs/integrations/llms/llamacpp#compiling-and-installing)
lists the following commands for installing llama-cpp-python
```
python setup.py clean
python setup.py install
```
The current version of the repo does not include a `setup.py` and uses a
`pyproject.toml` instead.
This can be replaced with
```
python -m pip install -e .
```
As explained in
https://github.com/abetlen/llama-cpp-python/issues/965#issuecomment-1837268339
**Dependencies**: None
**Twitter handle**: None

---------

Co-authored-by: blacksmithop <angstycoder101@gmaii.com>
2024-01-29 08:41:29 -08:00
taimo
d3d9244fee langchain-community: fix unicode escaping issue with SlackToolkit (#16616)
- **Description:** fix unicode escaping issue with SlackToolkit
  - **Issue:**  #16610
2024-01-29 08:38:12 -08:00
Benito Geordie
f3fdc5c5da community: Added integrations for ThirdAI's NeuralDB with Retriever and VectorStore frameworks (#15280)
**Description:** Adds ThirdAI NeuralDB retriever and vectorstore
integration. NeuralDB is a CPU-friendly and fine-tunable text retrieval
engine.
2024-01-29 08:35:42 -08:00
Jonathan Bennion
815896ff13 langchain: pubmed tool path update in doc (#16716)
- **Description:** The current pubmed tool documentation references
the path to langchain core, not the path to the tool in community. The
old path redirects anyway, but for the efficiency of using the more direct
path, this adds documentation that references the new path
  - **Issue:** doesn't fix an issue
  - **Dependencies:** no dependencies
  - **Twitter handle:** rooftopzen
2024-01-29 08:25:29 -08:00
Lance Martin
1bfadecdd2 Update Slack agent toolkit (#16732)
Co-authored-by: taimoOptTech <132860814+taimo3810@users.noreply.github.com>
2024-01-29 08:03:44 -08:00
Pashva Mehta
22d90800c8 community: Fixed schema discrepancy in from_texts function for weaviate vectorstore (#16693)
* Description: Fixed a schema discrepancy in the **from_texts** function for
the weaviate vectorstore, which created a redundant property "key" inside a
class.
* Issue: Fixed: https://github.com/langchain-ai/langchain/issues/16692
* Twitter handle: @pashvamehta1
2024-01-28 16:53:31 -08:00
Choi JaeHun
ba70630829 docs: Syntax correction according to langchain version update in 'Retry Parser' tutorial example (#16699)
- **Description:** Syntax correction according to langchain version
update in 'Retry Parser' tutorial example,
- **Issue:** #16698

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-28 16:53:04 -08:00
ccurme
ec0ae23645 core: expand docstring for RunnableGenerator (#16672)
- **Description:** expand docstring for RunnableGenerator
  - **Issue:** https://github.com/langchain-ai/langchain/issues/16631
2024-01-28 16:47:08 -08:00
Bob Lin
0866a984fe Update `n_gpu_layers`'s description (#16685)
The `n_gpu_layers` parameter in `llama.cpp` supports the use of `-1`,
which offloads all layers to the GPU, so the document has been
updated.
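
For example (model path illustrative; requires a GPU-enabled build of `llama-cpp-python`):

```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # any local GGUF model
    n_gpu_layers=-1,  # -1 offloads all layers to the GPU
)
```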

Ref:
35918873b4/llama_cpp/server/settings.py (L29C22-L29C117)


35918873b4/llama_cpp/llama.py (L125)
2024-01-28 16:46:50 -08:00
Daniel Erenrich
0600998f38 community: Wikidata tool support (#16691)
- **Description:** Adds Wikidata support to langchain. Can read out
documents from Wikidata.
  - **Issue:** N/A
- **Dependencies:** Adds implicit dependencies for
`wikibase-rest-api-client` (for turning items into docs) and
`mediawikiapi` (for hitting the search endpoint)
  - **Twitter handle:** @derenrich

You can see an example of this tool used in a chain
[here](https://nbviewer.org/urls/d.erenrich.net/upload/Wikidata_Langchain.ipynb)
or
[here](https://nbviewer.org/urls/d.erenrich.net/upload/Wikidata_Lars_Kai_Hansen.ipynb)
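
A usage sketch, assuming the import paths this PR introduces (requires the two dependencies above):

```python
from langchain_community.tools.wikidata.tool import WikidataQueryRun
from langchain_community.utilities.wikidata import WikidataAPIWrapper

wikidata = WikidataQueryRun(api_wrapper=WikidataAPIWrapper())
print(wikidata.run("Alan Turing"))
```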

2024-01-28 16:45:21 -08:00
Tze Min
6ef718c5f4 Core: fix Anthropic json issue in streaming (#16670)
**Description:** fix ChatAnthropic json issue in streaming 
**Issue:** https://github.com/langchain-ai/langchain/issues/16423
**Dependencies:** n/a

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-01-28 16:41:17 -08:00
Owen Sims
e451c8adc1 Community: Update Ionic Shopping Docs (#16700)
- **Description:** Update to docs as originally introduced in
https://github.com/langchain-ai/langchain/pull/16649 (reviewed by
@baskaryan),
- **Twitter handle:**
[@ioniccommerce](https://twitter.com/ioniccommerce)
2024-01-28 16:39:49 -08:00
Christophe Bornet
2e3af04080 Use Postponed Evaluation of Annotations in Astra and Cassandra doc loaders (#16694)
Minor/cosmetic change
2024-01-28 16:39:27 -08:00
Yelin Zhang
bc7607a4e9 docs: remove iprogress warnings (#16697)
- **Description:** removes IProgress warning texts from notebooks,
resulting in documentation that is a little nicer to read
2024-01-28 16:38:14 -08:00
Erick Friis
0255c5808b infra: move release workflow back (#16707) 2024-01-28 12:11:23 -07:00
Erick Friis
88e3129587 robocorp: release 0.0.2 (#16706) 2024-01-28 11:28:58 -07:00
3782 changed files with 597074 additions and 122427 deletions

View File

@@ -3,43 +3,4 @@
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
To learn about how to contribute, please follow the [guides here](https://python.langchain.com/docs/contributing/)
## 🗺️ Guidelines
### 👩‍💻 Ways to contribute
There are many ways to contribute to LangChain. Here are some common ways people contribute:
- [**Documentation**](https://python.langchain.com/docs/contributing/documentation): Help improve our docs, including this one!
- [**Code**](https://python.langchain.com/docs/contributing/code): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](https://python.langchain.com/docs/contributing/integrations): Help us integrate with your favorite vendors and tools.
### 🚩GitHub Issues
Our [issues](https://github.com/langchain-ai/langchain/issues) page is kept up to date with bugs, improvements, and feature requests.
There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help organize issues.
If you start working on an issue, please assign it to yourself.
If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature.
If two issues are related, or blocking, please link them rather than combining them.
We will try to keep these issues as up-to-date as possible, though
with the rapid rate of development in this field some may get out of date.
If you notice this happening, please let us know.
### 🙋Getting Help
Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting setup, please
contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is
smooth for future contributors.
In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase.
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
### Contributor Documentation
To learn about how to contribute, please follow the [guides here](https://python.langchain.com/docs/contributing/)
To learn how to contribute to LangChain, please follow the [contribution guide here](https://python.langchain.com/docs/contributing/).

View File

@@ -3,18 +3,18 @@ body:
- type: markdown
attributes:
value: |
Thanks for your interest in 🦜️🔗 LangChain!
Thanks for your interest in LangChain 🦜️🔗!
Please follow these instructions, fill every question, and do every step. 🙏
We're asking for this because answering questions and solving problems in GitHub takes a lot of time --
this is time that we cannot spend on adding new features, fixing bugs, write documentation or reviewing pull requests.
this is time that we cannot spend on adding new features, fixing bugs, writing documentation or reviewing pull requests.
By asking questions in a structured way (following this) it will be much easier to help you.
By asking questions in a structured way (following this) it will be much easier for us to help you.
And there's a high chance that you will find the solution along the way and you won't even have to submit it and wait for an answer. 😎
There's a high chance that by following this process, you'll find the solution on your own, eliminating the need to submit a question and wait for an answer. 😎
As there are too many questions, we will **DISCARD** and close the incomplete ones.
As there are many questions submitted every day, we will **DISCARD** and close the incomplete ones.
That will allow us (and others) to focus on helping people like you that follow the whole process. 🤓

View File

@@ -35,6 +35,8 @@ body:
required: true
- label: I am sure that this is a bug in LangChain rather than my code.
required: true
- label: The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
required: true
- type: textarea
id: reproduction
validations:

View File

@@ -4,13 +4,45 @@ title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"
labels: [03 - Documentation]
body:
- type: markdown
attributes:
value: >
Thank you for taking the time to report an issue in the documentation.
Only report issues with documentation here, explain if there are
any missing topics or if you found a mistake in the documentation.
Do **NOT** use this to ask usage questions or reporting issues with your code.
If you have usage questions or need help solving some problem,
please use [GitHub Discussions](https://github.com/langchain-ai/langchain/discussions).
If you're in the wrong place, here are some helpful links to find a better
place to ask your question:
[LangChain documentation with the integrated search](https://python.langchain.com/docs/get_started/introduction),
[API Reference](https://api.python.langchain.com/en/stable/),
[GitHub search](https://github.com/langchain-ai/langchain),
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: checkboxes
id: checks
attributes:
label: Checklist
description: Please confirm and check all the following options.
options:
- label: I added a very descriptive title to this issue.
required: true
- label: I included a link to the documentation page I am referring to (if applicable).
required: true
- type: textarea
attributes:
label: "Issue with current documentation:"
description: >
Please make sure to leave a reference to the document/code you're
referring to.
referring to. Feel free to include names of classes, functions, methods
or concepts you'd like to see documented more.
- type: textarea
attributes:
label: "Idea or request for content:"

View File

@@ -9,7 +9,7 @@ body:
If you are not a LangChain maintainer or were not asked directly by a maintainer to create an issue, then please start the conversation in a [Question in GitHub Discussions](https://github.com/langchain-ai/langchain/discussions/categories/q-a) instead.
You are a LangChain maintainer if you maintain any of the packages inside of the LangChain repository
or are a regular contributor to LangChain with previous merged merged pull requests.
or are a regular contributor to LangChain with previous merged pull requests.
- type: checkboxes
id: privileged
attributes:

View File

@@ -1,20 +1,29 @@
<!-- Thank you for contributing to LangChain!
Thank you for contributing to LangChain!
Please title your PR "<package>: <description>", where <package> is whichever of langchain, community, core, experimental, etc. is being modified.
- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, experimental, etc. is being modified. Use "docs: ..." for purely docs changes, "templates: ..." for template changes, "infra: ..." for CI changes.
- Example: "community: add foobar LLM"
Replace this entire comment with:
- **Description:** a description of the change,
- **Issue:** the issue # it fixes if applicable,
- **Dependencies:** any dependencies required for this change,
- **Twitter handle:** we announce bigger features on Twitter. If your PR gets announced, and you'd like a mention, we'll gladly shout you out!
Please make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` from the root of the package you've modified to check this locally.
- [ ] **PR message**: ***Delete this entire checklist*** and replace with
- **Description:** a description of the change
- **Issue:** the issue # it fixes, if applicable
- **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a mention, we'll gladly shout you out!
See contribution guidelines for more information on how to write/run tests, lint, etc: https://python.langchain.com/docs/contributing/
If you're adding a new integration, please include:
- [ ] **Add tests and docs**: If you're adding a new integration, please include
1. a test for the integration, preferably unit tests that do not rely on network access,
2. an example notebook showing its use. It lives in `docs/docs/integrations` directory.
If no one reviews your PR within a few days, please @-mention one of @baskaryan, @eyurtsev, @hwchase17.
-->
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.

7
.github/actions/people/Dockerfile vendored Normal file
View File

@@ -0,0 +1,7 @@
FROM python:3.9
RUN pip install httpx PyGithub "pydantic==2.0.2" pydantic-settings "pyyaml>=5.3.1,<6.0.0"
COPY ./app /app
CMD ["python", "/app/main.py"]

11
.github/actions/people/action.yml vendored Normal file
View File

@@ -0,0 +1,11 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/action.yml
name: "Generate LangChain People"
description: "Generate the data for the LangChain People page"
author: "Jacob Lee <jacob@langchain.dev>"
inputs:
token:
description: 'User token, to read the GitHub API. Can be passed in using {{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}'
required: true
runs:
using: 'docker'
image: 'Dockerfile'

641
.github/actions/people/app/main.py vendored Normal file
View File

@@ -0,0 +1,641 @@
# Adapted from https://github.com/tiangolo/fastapi/blob/master/.github/actions/people/app/main.py
import logging
import subprocess
import sys
from collections import Counter
from datetime import datetime, timedelta, timezone
from pathlib import Path
from typing import Any, Container, Dict, List, Set, Union
import httpx
import yaml
from github import Github
from pydantic import BaseModel, SecretStr
from pydantic_settings import BaseSettings
github_graphql_url = "https://api.github.com/graphql"
questions_category_id = "DIC_kwDOIPDwls4CS6Ve"
# discussions_query = """
# query Q($after: String, $category_id: ID) {
# repository(name: "langchain", owner: "langchain-ai") {
# discussions(first: 100, after: $after, categoryId: $category_id) {
# edges {
# cursor
# node {
# number
# author {
# login
# avatarUrl
# url
# }
# title
# createdAt
# comments(first: 100) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# isAnswer
# replies(first: 10) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# }
# }
# }
# }
# }
# }
# }
# }
# }
# """
# issues_query = """
# query Q($after: String) {
# repository(name: "langchain", owner: "langchain-ai") {
# issues(first: 100, after: $after) {
# edges {
# cursor
# node {
# number
# author {
# login
# avatarUrl
# url
# }
# title
# createdAt
# state
# comments(first: 100) {
# nodes {
# createdAt
# author {
# login
# avatarUrl
# url
# }
# }
# }
# }
# }
# }
# }
# }
# """
prs_query = """
query Q($after: String) {
repository(name: "langchain", owner: "langchain-ai") {
pullRequests(first: 100, after: $after, states: MERGED) {
edges {
cursor
node {
changedFiles
additions
deletions
number
labels(first: 100) {
nodes {
name
}
}
author {
login
avatarUrl
url
... on User {
twitterUsername
}
}
title
createdAt
state
reviews(first:100) {
nodes {
author {
login
avatarUrl
url
... on User {
twitterUsername
}
}
state
}
}
}
}
}
}
}
"""
class Author(BaseModel):
login: str
avatarUrl: str
url: str
twitterUsername: Union[str, None] = None
# Issues and Discussions
class CommentsNode(BaseModel):
createdAt: datetime
author: Union[Author, None] = None
class Replies(BaseModel):
nodes: List[CommentsNode]
class DiscussionsCommentsNode(CommentsNode):
replies: Replies
class Comments(BaseModel):
nodes: List[CommentsNode]
class DiscussionsComments(BaseModel):
nodes: List[DiscussionsCommentsNode]
class IssuesNode(BaseModel):
number: int
author: Union[Author, None] = None
title: str
createdAt: datetime
state: str
comments: Comments
class DiscussionsNode(BaseModel):
number: int
author: Union[Author, None] = None
title: str
createdAt: datetime
comments: DiscussionsComments
class IssuesEdge(BaseModel):
cursor: str
node: IssuesNode
class DiscussionsEdge(BaseModel):
cursor: str
node: DiscussionsNode
class Issues(BaseModel):
edges: List[IssuesEdge]
class Discussions(BaseModel):
edges: List[DiscussionsEdge]
class IssuesRepository(BaseModel):
issues: Issues
class DiscussionsRepository(BaseModel):
discussions: Discussions
class IssuesResponseData(BaseModel):
repository: IssuesRepository
class DiscussionsResponseData(BaseModel):
repository: DiscussionsRepository
class IssuesResponse(BaseModel):
data: IssuesResponseData
class DiscussionsResponse(BaseModel):
data: DiscussionsResponseData
# PRs
class LabelNode(BaseModel):
name: str
class Labels(BaseModel):
nodes: List[LabelNode]
class ReviewNode(BaseModel):
author: Union[Author, None] = None
state: str
class Reviews(BaseModel):
nodes: List[ReviewNode]
class PullRequestNode(BaseModel):
number: int
labels: Labels
author: Union[Author, None] = None
changedFiles: int
additions: int
deletions: int
title: str
createdAt: datetime
state: str
reviews: Reviews
# comments: Comments
class PullRequestEdge(BaseModel):
cursor: str
node: PullRequestNode
class PullRequests(BaseModel):
edges: List[PullRequestEdge]
class PRsRepository(BaseModel):
pullRequests: PullRequests
class PRsResponseData(BaseModel):
repository: PRsRepository
class PRsResponse(BaseModel):
data: PRsResponseData
class Settings(BaseSettings):
input_token: SecretStr
github_repository: str
httpx_timeout: int = 30
def get_graphql_response(
*,
settings: Settings,
query: str,
after: Union[str, None] = None,
category_id: Union[str, None] = None,
) -> Dict[str, Any]:
headers = {"Authorization": f"token {settings.input_token.get_secret_value()}"}
# category_id is only used by one query, but GraphQL allows unused variables, so
# keep it here for simplicity
variables = {"after": after, "category_id": category_id}
response = httpx.post(
github_graphql_url,
headers=headers,
timeout=settings.httpx_timeout,
json={"query": query, "variables": variables, "operationName": "Q"},
)
if response.status_code != 200:
logging.error(
f"Response was not 200, after: {after}, category_id: {category_id}"
)
logging.error(response.text)
raise RuntimeError(response.text)
data = response.json()
if "errors" in data:
logging.error(f"Errors in response, after: {after}, category_id: {category_id}")
logging.error(data["errors"])
logging.error(response.text)
raise RuntimeError(response.text)
return data
# def get_graphql_issue_edges(*, settings: Settings, after: Union[str, None] = None):
# data = get_graphql_response(settings=settings, query=issues_query, after=after)
# graphql_response = IssuesResponse.model_validate(data)
# return graphql_response.data.repository.issues.edges
# def get_graphql_question_discussion_edges(
# *,
# settings: Settings,
# after: Union[str, None] = None,
# ):
# data = get_graphql_response(
# settings=settings,
# query=discussions_query,
# after=after,
# category_id=questions_category_id,
# )
# graphql_response = DiscussionsResponse.model_validate(data)
# return graphql_response.data.repository.discussions.edges
def get_graphql_pr_edges(*, settings: Settings, after: Union[str, None] = None):
if after is None:
print("Querying PRs...")
else:
print(f"Querying PRs with cursor {after}...")
data = get_graphql_response(
settings=settings,
query=prs_query,
after=after
)
graphql_response = PRsResponse.model_validate(data)
return graphql_response.data.repository.pullRequests.edges
# def get_issues_experts(settings: Settings):
# issue_nodes: List[IssuesNode] = []
# issue_edges = get_graphql_issue_edges(settings=settings)
# while issue_edges:
# for edge in issue_edges:
# issue_nodes.append(edge.node)
# last_edge = issue_edges[-1]
# issue_edges = get_graphql_issue_edges(settings=settings, after=last_edge.cursor)
# commentors = Counter()
# last_month_commentors = Counter()
# authors: Dict[str, Author] = {}
# now = datetime.now(tz=timezone.utc)
# one_month_ago = now - timedelta(days=30)
# for issue in issue_nodes:
# issue_author_name = None
# if issue.author:
# authors[issue.author.login] = issue.author
# issue_author_name = issue.author.login
# issue_commentors = set()
# for comment in issue.comments.nodes:
# if comment.author:
# authors[comment.author.login] = comment.author
# if comment.author.login != issue_author_name:
# issue_commentors.add(comment.author.login)
# for author_name in issue_commentors:
# commentors[author_name] += 1
# if issue.createdAt > one_month_ago:
# last_month_commentors[author_name] += 1
# return commentors, last_month_commentors, authors
# def get_discussions_experts(settings: Settings):
# discussion_nodes: List[DiscussionsNode] = []
# discussion_edges = get_graphql_question_discussion_edges(settings=settings)
# while discussion_edges:
# for discussion_edge in discussion_edges:
# discussion_nodes.append(discussion_edge.node)
# last_edge = discussion_edges[-1]
# discussion_edges = get_graphql_question_discussion_edges(
# settings=settings, after=last_edge.cursor
# )
# commentors = Counter()
# last_month_commentors = Counter()
# authors: Dict[str, Author] = {}
# now = datetime.now(tz=timezone.utc)
# one_month_ago = now - timedelta(days=30)
# for discussion in discussion_nodes:
# discussion_author_name = None
# if discussion.author:
# authors[discussion.author.login] = discussion.author
# discussion_author_name = discussion.author.login
# discussion_commentors = set()
# for comment in discussion.comments.nodes:
# if comment.author:
# authors[comment.author.login] = comment.author
# if comment.author.login != discussion_author_name:
# discussion_commentors.add(comment.author.login)
# for reply in comment.replies.nodes:
# if reply.author:
# authors[reply.author.login] = reply.author
# if reply.author.login != discussion_author_name:
# discussion_commentors.add(reply.author.login)
# for author_name in discussion_commentors:
# commentors[author_name] += 1
# if discussion.createdAt > one_month_ago:
# last_month_commentors[author_name] += 1
# return commentors, last_month_commentors, authors
# def get_experts(settings: Settings):
# (
# discussions_commentors,
# discussions_last_month_commentors,
# discussions_authors,
# ) = get_discussions_experts(settings=settings)
# commentors = discussions_commentors
# last_month_commentors = discussions_last_month_commentors
# authors = {**discussions_authors}
# return commentors, last_month_commentors, authors
def _logistic(x, k):
return x / (x + k)
def get_contributors(settings: Settings):
pr_nodes: List[PullRequestNode] = []
pr_edges = get_graphql_pr_edges(settings=settings)
while pr_edges:
for edge in pr_edges:
pr_nodes.append(edge.node)
last_edge = pr_edges[-1]
pr_edges = get_graphql_pr_edges(settings=settings, after=last_edge.cursor)
contributors = Counter()
contributor_scores = Counter()
recent_contributor_scores = Counter()
reviewers = Counter()
authors: Dict[str, Author] = {}
for pr in pr_nodes:
pr_reviewers: Set[str] = set()
for review in pr.reviews.nodes:
if review.author:
authors[review.author.login] = review.author
pr_reviewers.add(review.author.login)
for reviewer in pr_reviewers:
reviewers[reviewer] += 1
if pr.author:
authors[pr.author.login] = pr.author
contributors[pr.author.login] += 1
files_changed = pr.changedFiles
lines_changed = pr.additions + pr.deletions
score = _logistic(files_changed, 20) + _logistic(lines_changed, 100)
contributor_scores[pr.author.login] += score
three_months_ago = (datetime.now(timezone.utc) - timedelta(days=3*30))
if pr.createdAt > three_months_ago:
recent_contributor_scores[pr.author.login] += score
return contributors, contributor_scores, recent_contributor_scores, reviewers, authors
def get_top_users(
*,
counter: Counter,
min_count: int,
authors: Dict[str, Author],
skip_users: Container[str],
):
users = []
for commentor, count in counter.most_common():
if commentor in skip_users:
continue
if count >= min_count:
author = authors[commentor]
users.append(
{
"login": commentor,
"count": count,
"avatarUrl": author.avatarUrl,
"twitterUsername": author.twitterUsername,
"url": author.url,
}
)
return users
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
settings = Settings()
logging.info(f"Using config: {settings.model_dump_json()}")
g = Github(settings.input_token.get_secret_value())
repo = g.get_repo(settings.github_repository)
# question_commentors, question_last_month_commentors, question_authors = get_experts(
# settings=settings
# )
contributors, contributor_scores, recent_contributor_scores, reviewers, pr_authors = get_contributors(
settings=settings
)
# authors = {**question_authors, **pr_authors}
authors = {**pr_authors}
maintainers_logins = {
"hwchase17",
"agola11",
"baskaryan",
"hinthornw",
"nfcampos",
"efriis",
"eyurtsev",
"rlancemartin"
}
hidden_logins = {
"dev2049",
"vowelparrot",
"obi1kenobi",
"langchain-infra",
"jacoblee93",
"dqbd",
"bracesproul",
"akira",
}
bot_names = {"dosubot", "github-actions", "CodiumAI-Agent"}
maintainers = []
for login in maintainers_logins:
user = authors[login]
maintainers.append(
{
"login": login,
"count": contributors[login], #+ question_commentors[login],
"avatarUrl": user.avatarUrl,
"twitterUsername": user.twitterUsername,
"url": user.url,
}
)
# min_count_expert = 10
# min_count_last_month = 3
min_score_contributor = 1
min_count_reviewer = 5
skip_users = maintainers_logins | bot_names | hidden_logins
# experts = get_top_users(
# counter=question_commentors,
# min_count=min_count_expert,
# authors=authors,
# skip_users=skip_users,
# )
# last_month_active = get_top_users(
# counter=question_last_month_commentors,
# min_count=min_count_last_month,
# authors=authors,
# skip_users=skip_users,
# )
top_recent_contributors = get_top_users(
counter=recent_contributor_scores,
min_count=min_score_contributor,
authors=authors,
skip_users=skip_users,
)
top_contributors = get_top_users(
counter=contributor_scores,
min_count=min_score_contributor,
authors=authors,
skip_users=skip_users,
)
top_reviewers = get_top_users(
counter=reviewers,
min_count=min_count_reviewer,
authors=authors,
skip_users=skip_users,
)
people = {
"maintainers": maintainers,
# "experts": experts,
# "last_month_active": last_month_active,
"top_recent_contributors": top_recent_contributors,
"top_contributors": top_contributors,
"top_reviewers": top_reviewers,
}
people_path = Path("./docs/data/people.yml")
people_old_content = people_path.read_text(encoding="utf-8")
new_people_content = yaml.dump(
people, sort_keys=False, width=200, allow_unicode=True
)
if (
people_old_content == new_people_content
):
logging.info("The LangChain People data hasn't changed, finishing.")
sys.exit(0)
people_path.write_text(new_people_content, encoding="utf-8")
logging.info("Setting up GitHub Actions git user")
subprocess.run(["git", "config", "user.name", "github-actions"], check=True)
subprocess.run(
["git", "config", "user.email", "github-actions@github.com"], check=True
)
branch_name = "langchain/langchain-people"
logging.info(f"Creating a new branch {branch_name}")
subprocess.run(["git", "checkout", "-B", branch_name], check=True)
logging.info("Adding updated file")
subprocess.run(
["git", "add", str(people_path)], check=True
)
logging.info("Committing updated file")
message = "👥 Update LangChain people data"
result = subprocess.run(["git", "commit", "-m", message], check=True)
logging.info("Pushing branch")
subprocess.run(["git", "push", "origin", branch_name, "-f"], check=True)
logging.info("Creating PR")
pr = repo.create_pull(title=message, body=message, base="master", head=branch_name)
logging.info(f"Created PR: {pr.number}")
logging.info("Finished")

View File

@@ -32,7 +32,7 @@ runs:
with:
python-version: ${{ inputs.python-version }}
- uses: actions/cache@v3
- uses: actions/cache@v4
id: cache-bin-poetry
name: Cache Poetry binary - Python ${{ inputs.python-version }}
env:
@@ -79,7 +79,7 @@ runs:
run: pipx install "poetry==$POETRY_VERSION" --python '${{ steps.setup-python.outputs.python-path }}' --verbose
- name: Restore pip and poetry cached dependencies
uses: actions/cache@v3
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "4"
WORKDIR: ${{ inputs.working-directory == '' && '.' || inputs.working-directory }}

View File

@@ -1,17 +1,24 @@
import json
import sys
import os
from typing import Dict
LANGCHAIN_DIRS = {
LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/community",
"libs/langchain",
"libs/experimental",
"libs/community",
}
]
if __name__ == "__main__":
files = sys.argv[1:]
dirs_to_run = set()
dirs_to_run: Dict[str, set] = {
"lint": set(),
"test": set(),
"extended-test": set(),
}
if len(files) == 300:
# max diff length is 300 files - there are likely files missing
@@ -24,33 +31,49 @@ if __name__ == "__main__":
".github/workflows",
".github/tools",
".github/actions",
"libs/core",
".github/scripts/check_diff.py",
)
):
dirs_to_run.update(LANGCHAIN_DIRS)
elif "libs/community" in file:
dirs_to_run.update(
("libs/community", "libs/langchain", "libs/experimental")
)
elif "libs/partners" in file:
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}"):
dirs_to_run.update(
(
f"libs/partners/{partner_dir}",
"libs/langchain",
"libs/experimental",
)
)
# Skip if the directory was deleted
elif "libs/langchain" in file:
dirs_to_run.update(("libs/langchain", "libs/experimental"))
elif "libs/experimental" in file:
dirs_to_run.add("libs/experimental")
elif file.startswith("libs/"):
dirs_to_run.update(LANGCHAIN_DIRS)
else:
# add all LANGCHAIN_DIRS for infra changes
dirs_to_run["extended-test"].update(LANGCHAIN_DIRS)
dirs_to_run["lint"].add(".")
if any(file.startswith(dir_) for dir_ in LANGCHAIN_DIRS):
# add that dir and all dirs after in LANGCHAIN_DIRS
# for extended testing
found = False
for dir_ in LANGCHAIN_DIRS:
if file.startswith(dir_):
found = True
if found:
dirs_to_run["extended-test"].add(dir_)
elif file.startswith("libs/cli"):
# todo: add cli makefile
pass
json_output = json.dumps(list(dirs_to_run))
print(f"dirs-to-run={json_output}")
elif file.startswith("libs/partners"):
partner_dir = file.split("/")[2]
if os.path.isdir(f"libs/partners/{partner_dir}") and [
filename
for filename in os.listdir(f"libs/partners/{partner_dir}")
if not filename.startswith(".")
] != ["README.md"]:
dirs_to_run["test"].add(f"libs/partners/{partner_dir}")
# Skip if the directory was deleted or is just a tombstone readme
elif file.startswith("libs/"):
raise ValueError(
f"Unknown lib: {file}. check_diff.py likely needs "
"an update for this new library!"
)
elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
dirs_to_run["lint"].add(".")
outputs = {
"dirs-to-lint": list(
dirs_to_run["lint"] | dirs_to_run["test"] | dirs_to_run["extended-test"]
),
"dirs-to-test": list(dirs_to_run["test"] | dirs_to_run["extended-test"]),
"dirs-to-extended-test": list(dirs_to_run["extended-test"]),
}
for key, value in outputs.items():
json_output = json.dumps(value)
print(f"{key}={json_output}") # noqa: T201

67
.github/scripts/get_min_versions.py vendored Normal file
View File

@@ -0,0 +1,67 @@
import sys
import tomllib
from packaging.version import parse as parse_version
import re
MIN_VERSION_LIBS = ["langchain-core", "langchain-community", "langchain", "langchain-text-splitters"]
def get_min_version(version: str) -> str:
# case ^x.x.x
_match = re.match(r"^\^(\d+(?:\.\d+){0,2})$", version)
if _match:
return _match.group(1)
# case >=x.x.x,<y.y.y
_match = re.match(r"^>=(\d+(?:\.\d+){0,2}),<(\d+(?:\.\d+){0,2})$", version)
if _match:
_min = _match.group(1)
_max = _match.group(2)
assert parse_version(_min) < parse_version(_max)
return _min
# case x.x.x
_match = re.match(r"^(\d+(?:\.\d+){0,2})$", version)
if _match:
return _match.group(1)
raise ValueError(f"Unrecognized version format: {version}")
def get_min_version_from_toml(toml_path: str):
# Parse the TOML file
with open(toml_path, "rb") as file:
toml_data = tomllib.load(file)
# Get the dependencies from tool.poetry.dependencies
dependencies = toml_data["tool"]["poetry"]["dependencies"]
# Initialize a dictionary to store the minimum versions
min_versions = {}
# Iterate over the libs in MIN_VERSION_LIBS
for lib in MIN_VERSION_LIBS:
# Check if the lib is present in the dependencies
if lib in dependencies:
# Get the version string
version_string = dependencies[lib]
# Use parse_version to get the minimum supported version from version_string
min_version = get_min_version(version_string)
# Store the minimum version in the min_versions dictionary
min_versions[lib] = min_version
return min_versions
# Get the TOML file path from the command line argument
toml_file = sys.argv[1]
# Call the function to get the minimum versions
min_versions = get_min_version_from_toml(toml_file)
print(
" ".join([f"{lib}=={version}" for lib, version in min_versions.items()])
) # noqa: T201
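A few illustrative version strings, one per regex branch in get_min_version above (the values themselves are hypothetical):

# get_min_version("^0.1.7")      -> "0.1.7"   (caret constraint)
# get_min_version(">=1.22.4,<2") -> "1.22.4"  (bounded range)
# get_min_version("0.0.8")       -> "0.0.8"   (exact pin)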

View File

@@ -1,106 +0,0 @@
---
name: langchain CI
on:
workflow_call:
inputs:
working-directory:
required: true
type: string
description: "From which folder this pipeline executes"
workflow_dispatch:
inputs:
working-directory:
required: true
type: choice
default: 'libs/langchain'
options:
- libs/langchain
- libs/core
- libs/experimental
- libs/community
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}-${{ inputs.working-directory }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.7.1"
jobs:
lint:
uses: ./.github/workflows/_lint.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
test:
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
compile-integration-tests:
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
dependencies:
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: ${{ inputs.working-directory }}
secrets: inherit
extended-tests:
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }} extended tests
defaults:
run:
working-directory: ${{ inputs.working-directory }}
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ inputs.working-directory }}
cache-key: extended
- name: Install dependencies
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing --with test
- name: Run extended tests
run: make extended_tests
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'

View File

@@ -24,7 +24,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }}
name: "poetry run pytest -m compile tests/integration_tests #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -28,7 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: dependencies - Python ${{ matrix.python-version }}
name: dependency checks ${{ matrix.python-version }}
steps:
- uses: actions/checkout@v4
@@ -63,6 +63,8 @@ jobs:
- name: Install the opposite major version of pydantic
# If normal tests use pydantic v1, here we'll use v2, and vice versa.
shell: bash
# airbyte currently doesn't support pydantic v2
if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
run: |
# Determine the major part of pydantic version
REGULAR_VERSION=$(poetry run python -c "import pydantic; print(pydantic.__version__)" | cut -d. -f1)
@@ -97,6 +99,8 @@ jobs:
fi
echo "Found pydantic version ${CURRENT_VERSION}, as expected"
- name: Run pydantic compatibility tests
# airbyte currently doesn't support pydantic v2
if: ${{ !startsWith(inputs.working-directory, 'libs/partners/airbyte') }}
shell: bash
run: make test

View File

@@ -38,6 +38,11 @@ jobs:
shell: bash
run: poetry install --with test,test_integration
- name: Install deps outside pyproject
if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
shell: bash
run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
- name: 'Authenticate to Google Cloud'
id: 'auth'
uses: google-github-actions/auth@v2
@@ -47,6 +52,7 @@ jobs:
- name: Run integration tests
shell: bash
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
@@ -56,6 +62,19 @@ jobs:
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
run: |
make integration_tests

View File

@@ -21,6 +21,7 @@ env:
jobs:
build:
name: "make lint #${{ matrix.python-version }}"
runs-on: ubuntu-latest
strategy:
matrix:
@@ -79,13 +80,13 @@ jobs:
poetry run pip install -e "$LANGCHAIN_LOCATION"
- name: Get .mypy_cache to speed up mypy
uses: actions/cache@v3
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
- name: Analysing the code with our lint
@@ -93,7 +94,7 @@ jobs:
run: |
make lint_package
- name: Install test dependencies
- name: Install unit test dependencies
# Also installs dev/lint/test/typing dependencies, to ensure we have
# type hints for as many of our libraries as possible.
# This helps catch errors that require dependencies to be spotted, for example:
@@ -102,18 +103,24 @@ jobs:
# If you change this configuration, make sure to change the `cache-key`
# in the `poetry_setup` action above to stop using the old cache.
# It doesn't matter how you change it, any change will cause a cache-bust.
if: ${{ ! startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with test
- name: Install unit+integration test dependencies
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
working-directory: ${{ inputs.working-directory }}
run: |
poetry install --with test,test_integration
- name: Get .mypy_cache_test to speed up mypy
uses: actions/cache@v3
uses: actions/cache@v4
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache_test
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', inputs.working-directory)) }}
- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}

View File

@@ -15,7 +15,7 @@ on:
default: 'libs/langchain'
env:
PYTHON_VERSION: "3.10"
PYTHON_VERSION: "3.11"
POETRY_VERSION: "1.7.1"
jobs:
@@ -55,7 +55,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
- name: Upload build
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -166,22 +166,54 @@ jobs:
- name: Run integration tests
if: ${{ startsWith(inputs.working-directory, 'libs/partners/') }}
env:
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
GOOGLE_API_KEY: ${{ secrets.GOOGLE_API_KEY }}
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
AZURE_OPENAI_API_VERSION: ${{ secrets.AZURE_OPENAI_API_VERSION }}
AZURE_OPENAI_API_BASE: ${{ secrets.AZURE_OPENAI_API_BASE }}
AZURE_OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_API_KEY }}
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
EXA_API_KEY: ${{ secrets.EXA_API_KEY }}
NOMIC_API_KEY: ${{ secrets.NOMIC_API_KEY }}
WATSONX_APIKEY: ${{ secrets.WATSONX_APIKEY }}
WATSONX_PROJECT_ID: ${{ secrets.WATSONX_PROJECT_ID }}
PINECONE_API_KEY: ${{ secrets.PINECONE_API_KEY }}
PINECONE_ENVIRONMENT: ${{ secrets.PINECONE_ENVIRONMENT }}
ASTRA_DB_API_ENDPOINT: ${{ secrets.ASTRA_DB_API_ENDPOINT }}
ASTRA_DB_APPLICATION_TOKEN: ${{ secrets.ASTRA_DB_APPLICATION_TOKEN }}
ASTRA_DB_KEYSPACE: ${{ secrets.ASTRA_DB_KEYSPACE }}
ES_URL: ${{ secrets.ES_URL }}
ES_CLOUD_ID: ${{ secrets.ES_CLOUD_ID }}
ES_API_KEY: ${{ secrets.ES_API_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
- name: Run unit tests with minimum dependency versions
if: ${{ (inputs.working-directory == 'libs/langchain') || (inputs.working-directory == 'libs/community') || (inputs.working-directory == 'libs/experimental') }}
- name: Get minimum versions
working-directory: ${{ inputs.working-directory }}
id: min-version
run: |
poetry run pip install -r _test_minimum_requirements.txt
poetry run pip install packaging
min_versions="$(poetry run python $GITHUB_WORKSPACE/.github/scripts/get_min_versions.py pyproject.toml)"
echo "min-versions=$min_versions" >> "$GITHUB_OUTPUT"
echo "min-versions=$min_versions"
- name: Run unit tests with minimum dependency versions
if: ${{ steps.min-version.outputs.min-versions != '' }}
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install $MIN_VERSIONS
make tests
working-directory: ${{ inputs.working-directory }}
@@ -214,7 +246,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
cache-key: release
- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v4
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
@@ -253,7 +285,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
cache-key: release
- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v4
with:
name: dist
path: ${{ inputs.working-directory }}/dist/

View File

@@ -28,7 +28,7 @@ jobs:
- "3.9"
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }}
name: "make test #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4

View File

@@ -48,7 +48,7 @@ jobs:
working-directory: ${{ inputs.working-directory }}
- name: Upload build
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/
@@ -76,7 +76,7 @@ jobs:
steps:
- uses: actions/checkout@v4
- uses: actions/download-artifact@v3
- uses: actions/download-artifact@v4
with:
name: test-dist
path: ${{ inputs.working-directory }}/dist/

View File

@@ -1,5 +1,5 @@
---
name: Check library diffs
name: CI
on:
push:
@@ -16,6 +16,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
runs-on: ubuntu-latest
@@ -30,14 +33,119 @@ jobs:
run: |
python .github/scripts/check_diff.py ${{ steps.files.outputs.all }} >> $GITHUB_OUTPUT
outputs:
dirs-to-run: ${{ steps.set-matrix.outputs.dirs-to-run }}
ci:
dirs-to-lint: ${{ steps.set-matrix.outputs.dirs-to-lint }}
dirs-to-test: ${{ steps.set-matrix.outputs.dirs-to-test }}
dirs-to-extended-test: ${{ steps.set-matrix.outputs.dirs-to-extended-test }}
lint:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-lint != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-run) }}
uses: ./.github/workflows/_all_ci.yml
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-lint) }}
uses: ./.github/workflows/_lint.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
test:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_test.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
compile-integration-tests:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_compile_integration_test.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
dependencies:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' }}
strategy:
matrix:
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-test) }}
uses: ./.github/workflows/_dependencies.yml
with:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
extended-tests:
name: "cd ${{ matrix.working-directory }} / make extended_tests #${{ matrix.python-version }}"
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-extended-test != '[]' }}
strategy:
matrix:
# note different variable for extended test dirs
working-directory: ${{ fromJson(needs.build.outputs.dirs-to-extended-test) }}
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
runs-on: ubuntu-latest
defaults:
run:
working-directory: ${{ matrix.working-directory }}
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: ${{ matrix.working-directory }}
cache-key: extended
- name: Install dependencies
shell: bash
run: |
echo "Running extended tests, installing dependencies with poetry..."
poetry install -E extended_testing --with test
- name: Run extended tests
run: make extended_tests
- name: Ensure the tests did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'
ci_success:
name: "CI Success"
needs: [build, lint, test, compile-integration-tests, dependencies, extended-tests]
if: |
always()
runs-on: ubuntu-latest
env:
JOBS_JSON: ${{ toJSON(needs) }}
RESULTS_JSON: ${{ toJSON(needs.*.result) }}
EXIT_CODE: ${{!contains(needs.*.result, 'failure') && !contains(needs.*.result, 'cancelled') && '0' || '1'}}
steps:
- name: "CI Success"
run: |
echo $JOBS_JSON
echo $RESULTS_JSON
echo "Exiting with $EXIT_CODE"
exit $EXIT_CODE
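The EXIT_CODE expression treats skipped upstream jobs as success and only fails the gate when some needed job failed or was cancelled. A rough Python analogue (the results list is hypothetical):

# Hypothetical analogue of the EXIT_CODE expression above
results = ["success", "skipped", "success"]  # shape of toJSON(needs.*.result)
exit_code = 0 if not any(r in ("failure", "cancelled") for r in results) else 1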

View File

@@ -1,5 +1,5 @@
---
name: Codespell
name: CI / cd . / make spell_check
on:
push:
@@ -12,7 +12,7 @@ permissions:
jobs:
codespell:
name: Check for spelling errors
name: (Check for spelling errors)
runs-on: ubuntu-latest
steps:
@@ -32,5 +32,6 @@ jobs:
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json
skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
exclude_file: libs/community/langchain_community/llms/yuan2.py

View File

@@ -1,35 +0,0 @@
---
name: Docs, templates, cookbook lint
on:
push:
branches: [ master ]
pull_request:
paths:
- 'docs/**'
- 'templates/**'
- 'cookbook/**'
- '.github/workflows/_lint.yml'
- '.github/workflows/doc_lint.yml'
workflow_dispatch:
jobs:
check:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Run import check
run: |
# We should not encourage imports directly from main init file
# Except for hub
git grep 'from langchain import' {docs/docs,templates,cookbook} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: "."
secrets: inherit

View File

@@ -7,4 +7,4 @@ ignore_words_list = (
pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)
print(f"::set-output name=ignore_words_list::{ignore_words_list}")
print(f"::set-output name=ignore_words_list::{ignore_words_list}") # noqa: T201

36
.github/workflows/people.yml vendored Normal file
View File

@@ -0,0 +1,36 @@
name: LangChain People
on:
schedule:
- cron: "0 14 1 * *"
push:
branches: [jacob/people]
workflow_dispatch:
inputs:
debug_enabled:
description: 'Run the build with tmate debugging enabled (https://github.com/marketplace/actions/debugging-with-tmate)'
required: false
default: 'false'
jobs:
langchain-people:
if: github.repository_owner == 'langchain-ai'
runs-on: ubuntu-latest
steps:
- name: Dump GitHub context
env:
GITHUB_CONTEXT: ${{ toJson(github) }}
run: echo "$GITHUB_CONTEXT"
- uses: actions/checkout@v4
# Ref: https://github.com/actions/runner/issues/2033
- name: Fix git safe.directory in container
run: mkdir -p /home/runner/work/_temp/_github_home && printf "[safe]\n\tdirectory = /github/workspace" > /home/runner/work/_temp/_github_home/.gitconfig
# Allow debugging with tmate
- name: Setup tmate session
uses: mxschmitt/action-tmate@v3
if: ${{ github.event_name == 'workflow_dispatch' && github.event.inputs.debug_enabled == 'true' }}
with:
limit-access-to-actor: true
- uses: ./.github/actions/people
with:
token: ${{ secrets.LANGCHAIN_PEOPLE_GITHUB_TOKEN }}

View File

@@ -54,6 +54,11 @@ jobs:
echo "Running scheduled tests, installing dependencies with poetry..."
poetry install --with=test_integration,test
- name: Install deps outside pyproject
if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
shell: bash
run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
- name: Run tests
shell: bash
env:

View File

@@ -1,36 +0,0 @@
---
name: templates CI
on:
push:
branches: [ master ]
pull_request:
paths:
- '.github/actions/poetry_setup/action.yml'
- '.github/tools/**'
- '.github/workflows/_lint.yml'
- '.github/workflows/templates_ci.yml'
- 'templates/**'
workflow_dispatch: # Allows to trigger the workflow manually in GitHub UI
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
#
# There's no point in testing an outdated version of the code. GitHub only allows
# a limited number of job runners to be active at the same time, so it's better to cancel
# pointless jobs early so that more useful jobs can run sooner.
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
POETRY_VERSION: "1.7.1"
WORKDIR: "templates"
jobs:
lint:
uses:
./.github/workflows/_lint.yml
with:
working-directory: templates
secrets: inherit

9
.gitignore vendored
View File

@@ -115,13 +115,10 @@ celerybeat.pid
# Environments
.env
.envrc
.venv
.venvs
.venv*
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
@@ -177,4 +174,6 @@ docs/docs/build
docs/docs/node_modules
docs/docs/yarn.lock
_dist
docs/docs/templates
docs/docs/templates
prof

View File

@@ -13,15 +13,8 @@ build:
tools:
python: "3.11"
commands:
- python -m virtualenv $READTHEDOCS_VIRTUALENV_PATH
- python -m pip install --upgrade --no-cache-dir pip setuptools
- python -m pip install --upgrade --no-cache-dir sphinx readthedocs-sphinx-ext
- python -m pip install ./libs/partners/*
- python -m pip install --exists-action=w --no-cache-dir -r docs/api_reference/requirements.txt
- python docs/api_reference/create_api_rst.py
- cat docs/api_reference/conf.py
- python -m sphinx -T -E -b html -d _build/doctrees -c docs/api_reference docs/api_reference $READTHEDOCS_OUTPUT/html -j auto
- mkdir -p $READTHEDOCS_OUTPUT
- cp -r api_reference_build/* $READTHEDOCS_OUTPUT
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/api_reference/conf.py

View File

@@ -15,7 +15,12 @@ docs_build:
docs/.local_build.sh
docs_clean:
rm -r _dist
@if [ -d _dist ]; then \
rm -r _dist; \
echo "Directory _dist has been cleaned."; \
else \
echo "Nothing to clean."; \
fi
docs_linkcheck:
poetry run linkchecker _dist/docs/ --ignore-url node_modules
@@ -45,11 +50,13 @@ lint lint_package lint_tests:
poetry run ruff docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff --select I docs templates cookbook
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
format format_diff:
poetry run ruff format docs templates cookbook
poetry run ruff --select I --fix docs templates cookbook
######################
# HELP
######################

View File

@@ -1,6 +1,6 @@
# 🦜️🔗 LangChain
⚡ Building applications with LLMs through composability
⚡ Build context-aware reasoning applications
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
@@ -18,7 +18,7 @@ Looking for the JS/TS library? Check out [LangChain.js](https://github.com/langc
To help you ship LangChain apps to production faster, check out [LangSmith](https://smith.langchain.com).
[LangSmith](https://smith.langchain.com) is a unified developer platform for building, testing, and monitoring LLM applications.
Fill out [this form](https://airtable.com/appwQzlErAS2qiP0L/shrGtGaVBVAz7NcV2) to get off the waitlist or speak with our sales team.
Fill out [this form](https://www.langchain.com/contact-sales) to speak with our sales team.
## Quick Install
@@ -43,13 +43,14 @@ This framework consists of several parts.
- **[LangChain Templates](templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
The LangChain libraries themselves are made up of several different packages.
- **[`langchain-core`](libs/core)**: Base abstractions and LangChain Expression Language.
- **[`langchain-community`](libs/community)**: Third party integrations.
- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/img/langchain_stack.png "LangChain Architecture Overview")
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack.svg "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?
**❓ Retrieval augmented generation**

View File

@@ -1,6 +1,61 @@
# Security Policy
## Reporting a Vulnerability
## Reporting OSS Vulnerabilities
Please report security vulnerabilities by email to `security@langchain.dev`.
This email is an alias to a subset of our maintainers, and will ensure the issue is promptly triaged and acted upon as needed.
LangChain is partnered with [huntr by Protect AI](https://huntr.com/) to provide
a bounty program for our open source projects.
Please report security vulnerabilities associated with the LangChain
open source projects by visiting the following link:
[https://huntr.com/bounties/disclose/](https://huntr.com/bounties/disclose/?target=https%3A%2F%2Fgithub.com%2Flangchain-ai%2Flangchain&validSearch=true)
Before reporting a vulnerability, please review:
1) In-Scope Targets and Out-of-Scope Targets below.
2) The [langchain-ai/langchain](https://python.langchain.com/docs/contributing/repo_structure) monorepo structure.
3) LangChain [security guidelines](https://python.langchain.com/docs/security) to
understand what we consider to be a security vulnerability vs. developer
responsibility.
### In-Scope Targets
The following packages and repositories are eligible for bug bounties:
- langchain-core
- langchain (see exceptions)
- langchain-community (see exceptions)
- langgraph
- langserve
### Out of Scope Targets
All out of scope targets defined by huntr as well as:
- **langchain-experimental**: This repository is for experimental code and is not
eligible for bug bounties; bug reports to it will be marked as interesting or a
waste of time and published with no bounty attached.
- **tools**: Tools in either langchain or langchain-community are not eligible for bug
bounties. This includes the following directories
- langchain/tools
- langchain-community/tools
- Please review our [security guidelines](https://python.langchain.com/docs/security)
for more details, but generally tools interact with the real world. Developers are
expected to understand the security implications of their code and are responsible
for the security of their tools.
- Code documented with security notices. This will be decided on a case-by-case
basis, but likely will not be eligible for a bounty, as the code is already
documented with guidelines for developers that should be followed for making their
application secure.
- Any LangSmith related repositories or APIs (see below).
## Reporting LangSmith Vulnerabilities
Please report security vulnerabilities associated with LangSmith by email to `security@langchain.dev`.
- LangSmith site: https://smith.langchain.com
- SDK client: https://github.com/langchain-ai/langsmith-sdk
### Other Security Concerns
For any other security concerns, please contact us at `security@langchain.dev`.

View File

@@ -0,0 +1,932 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "BYejgj8Zf-LG",
"tags": []
},
"source": [
"## Getting started with LangChain and Gemma, running locally or in the Cloud"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "2IxjMb9-jIJ8"
},
"source": [
"### Installing dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 9436,
"status": "ok",
"timestamp": 1708975187360,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "XZaTsXfcheTF",
"outputId": "eb21d603-d824-46c5-f99f-087fb2f618b1",
"tags": []
},
"outputs": [],
"source": [
"!pip install --upgrade langchain langchain-google-vertexai"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IXmAujvC3Kwp"
},
"source": [
"### Running the model"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CI8Elyc5gBQF"
},
"source": [
"Go to the VertexAI Model Garden on Google Cloud [console](https://pantheon.corp.google.com/vertex-ai/publishers/google/model-garden/335), and deploy the desired version of Gemma to VertexAI. It will take a few minutes, and after the endpoint it ready, you need to copy its number."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "gv1j8FrVftsC"
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"project: str = \"PUT_YOUR_PROJECT_ID_HERE\" # @param {type:\"string\"}\n",
"endpoint_id: str = \"PUT_YOUR_ENDPOINT_ID_HERE\" # @param {type:\"string\"}\n",
"location: str = \"PUT_YOUR_ENDPOINT_LOCAtION_HERE\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"executionInfo": {
"elapsed": 3,
"status": "ok",
"timestamp": 1708975440503,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "bhIHsFGYjtFt",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 17:15:10.457149: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 17:15:10.508925: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 17:15:10.508957: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 17:15:10.510289: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 17:15:10.518898: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import (\n",
" GemmaChatVertexAIModelGarden,\n",
" GemmaVertexAIModelGarden,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"executionInfo": {
"elapsed": 351,
"status": "ok",
"timestamp": 1708975440852,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "WJv-UVWwh0lk",
"tags": []
},
"outputs": [],
"source": [
"llm = GemmaVertexAIModelGarden(\n",
" endpoint_id=endpoint_id,\n",
" project=project,\n",
" location=location,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 714,
"status": "ok",
"timestamp": 1708975441564,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "6kM7cEFdiN9h",
"outputId": "fb420c56-5614-4745-cda8-0ee450a3e539",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Prompt:\n",
"What is the meaning of life?\n",
"Output:\n",
" Who am I? Why do I exist? These are questions I have struggled with\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\")\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zzep9nfmuUcO"
},
"source": [
"We can also use Gemma as a multi-turn chat model:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"executionInfo": {
"elapsed": 964,
"status": "ok",
"timestamp": 1708976298189,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "8tPHoM5XiZOl",
"outputId": "7b8fb652-9aed-47b0-c096-aa1abfc3a2a9",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\n8-years old.<end_of_turn>\\n\\n<start_of'\n",
"content='Prompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nPrompt:\\n<start_of_turn>user\\nHow much is 2+2?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\n8-years old.<end_of_turn>\\n\\n<start_of<end_of_turn>\\n<start_of_turn>user\\nHow much is 3+3?<end_of_turn>\\n<start_of_turn>model\\nOutput:\\nOutput:\\n3-years old.<end_of_turn>\\n\\n<'\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"llm = GemmaChatVertexAIModelGarden(\n",
" endpoint_id=endpoint_id,\n",
" project=project,\n",
" location=location,\n",
")\n",
"\n",
"message1 = HumanMessage(content=\"How much is 2+2?\")\n",
"answer1 = llm.invoke([message1])\n",
"print(answer1)\n",
"\n",
"message2 = HumanMessage(content=\"How much is 3+3?\")\n",
"answer2 = llm.invoke([message1, answer1, message2])\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can post-process response to avoid repetitions:"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content='Output:\\n<<humming>>: 2+2 = 4.\\n<end'\n",
"content='Output:\\nOutput:\\n<<humming>>: 3+3 = 6.'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], parse_response=True)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VEfjqo7fjARR"
},
"source": [
"## Running Gemma locally from Kaggle"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gVW8QDzHu7TA"
},
"source": [
"In order to run Gemma locally, you can download it from Kaggle first. In order to do this, you'll need to login into the Kaggle platform, create a API key and download a `kaggle.json` Read more about Kaggle auth [here](https://www.kaggle.com/docs/api)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "S1EsXQ3XvZkQ"
},
"source": [
"### Installation"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"executionInfo": {
"elapsed": 335,
"status": "ok",
"timestamp": 1708976305471,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "p8SMwpKRvbef",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.10/pty.py:89: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.\n",
" pid, fd = os.forkpty()\n"
]
}
],
"source": [
"!mkdir -p ~/.kaggle && cp kaggle.json ~/.kaggle/kaggle.json"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"executionInfo": {
"elapsed": 7802,
"status": "ok",
"timestamp": 1708976363010,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "Yr679aePv9Fq",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/opt/conda/lib/python3.10/pty.py:89: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.\n",
" pid, fd = os.forkpty()\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"tensorstore 0.1.54 requires ml-dtypes>=0.3.1, but you have ml-dtypes 0.2.0 which is incompatible.\u001b[0m\u001b[31m\n",
"\u001b[0m"
]
}
],
"source": [
"!pip install keras>=3 keras_nlp"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "E9zn8nYpv3QZ"
},
"source": [
"### Usage"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"executionInfo": {
"elapsed": 8536,
"status": "ok",
"timestamp": 1708976601206,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "0LFRmY8TjCkI",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:38:40.797559: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 16:38:40.848444: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 16:38:40.848478: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 16:38:40.849728: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 16:38:40.857936: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaLocalKaggle"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "v-o7oXVavdMQ"
},
"source": [
"You can specify the keras backend (by default it's `tensorflow`, but you can change it be `jax` or `torch`)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"executionInfo": {
"elapsed": 9,
"status": "ok",
"timestamp": 1708976601206,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "vvTUH8DNj5SF",
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"keras_backend: str = \"jax\" # @param {type:\"string\"}\n",
"model_name: str = \"gemma_2b_en\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"executionInfo": {
"elapsed": 40836,
"status": "ok",
"timestamp": 1708976761257,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "YOmrqxo5kHXK",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:23:14.661164: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20549 MB memory: -> device: 0, name: NVIDIA L4, pci bus id: 0000:00:03.0, compute capability: 8.9\n",
"normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
]
}
],
"source": [
"llm = GemmaLocalKaggle(model_name=model_name, keras_backend=keras_backend)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"id": "Zu6yPDUgkQtQ",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"W0000 00:00:1709051129.518076 774855 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"What is the meaning of life?\n",
"\n",
"The question is one of the most important questions in the world.\n",
"\n",
"Its the question that has\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\", max_tokens=30)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### ChatModel"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "MSctpRE4u43N"
},
"source": [
"Same as above, using Gemma locally as a multi-turn chat model. You might need to re-start the notebook and clean your GPU memory in order to avoid OOM errors:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:22.331067: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 16:58:22.382948: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 16:58:22.382978: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 16:58:22.384312: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 16:58:22.392767: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaChatLocalKaggle"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"keras_backend: str = \"jax\" # @param {type:\"string\"}\n",
"model_name: str = \"gemma_2b_en\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:29.001922: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1929] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 20549 MB memory: -> device: 0, name: NVIDIA L4, pci bus id: 0000:00:03.0, compute capability: 8.9\n",
"normalizer.cc(51) LOG(INFO) precompiled_charsmap is empty. use identity normalization.\n"
]
}
],
"source": [
"llm = GemmaChatLocalKaggle(model_name=model_name, keras_backend=keras_backend)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"executionInfo": {
"elapsed": 3,
"status": "aborted",
"timestamp": 1708976382957,
"user": {
"displayName": "",
"userId": ""
},
"user_tz": -60
},
"id": "JrJmvZqwwLqj"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 16:58:49.848412: I external/local_xla/xla/service/service.cc:168] XLA service 0x55adc0cf2c10 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:\n",
"2024-02-27 16:58:49.848458: I external/local_xla/xla/service/service.cc:176] StreamExecutor device (0): NVIDIA L4, Compute Capability 8.9\n",
"2024-02-27 16:58:50.116614: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:269] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.\n",
"2024-02-27 16:58:54.389324: I external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:454] Loaded cuDNN version 8900\n",
"WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n",
"I0000 00:00:1709053145.225207 784891 device_compiler.h:186] Compiled cluster using XLA! This line is logged at most once for the lifetime of the process.\n",
"W0000 00:00:1709053145.284227 784891 graph_launch.cc:671] Fallback to op-by-op mode because memset node breaks graph update\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n Tampoco\\nI'm a model.\"\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"message1 = HumanMessage(content=\"Hi! Who are you?\")\n",
"answer1 = llm.invoke([message1], max_tokens=30)\n",
"print(answer1)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\n<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n Tampoco\\nI'm a model.<end_of_turn>\\n<start_of_turn>user\\nWhat can you help me with?<end_of_turn>\\n<start_of_turn>model\"\n"
]
}
],
"source": [
"message2 = HumanMessage(content=\"What can you help me with?\")\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=60)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can post-process the response if you want to avoid multi-turn statements:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"I'm a model.\\n Tampoco\\nI'm a model.\"\n",
"content='I can help you with your modeling.\\n Tampoco\\nI can'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], max_tokens=30, parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=60, parse_response=True)\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EiZnztso7hyF"
},
"source": [
"## Running Gemma locally from HuggingFace"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"id": "qqAqsz5R7nKf",
"tags": []
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-02-27 17:02:21.832409: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
"2024-02-27 17:02:21.883625: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
"2024-02-27 17:02:21.883656: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
"2024-02-27 17:02:21.884987: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
"2024-02-27 17:02:21.893340: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
"To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n"
]
}
],
"source": [
"from langchain_google_vertexai import GemmaChatLocalHF, GemmaLocalHF"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "tsyntzI08cOr",
"tags": []
},
"outputs": [],
"source": [
"# @title Basic parameters\n",
"hf_access_token: str = \"PUT_YOUR_TOKEN_HERE\" # @param {type:\"string\"}\n",
"model_name: str = \"google/gemma-2b\" # @param {type:\"string\"}"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "JWrqEkOo8sm9",
"tags": []
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a0d6de5542254ed1b6d3ba65465e050e",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm = GemmaLocalHF(model_name=\"google/gemma-2b\", hf_access_token=hf_access_token)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "VX96Jf4Y84k-",
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"What is the meaning of life?\n",
"\n",
"The question is one of the most important questions in the world.\n",
"\n",
"Its the question that has been asked by philosophers, theologians, and scientists for centuries.\n",
"\n",
"And its the question that\n"
]
}
],
"source": [
"output = llm.invoke(\"What is the meaning of life?\", max_tokens=50)\n",
"print(output)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Same as above, using Gemma locally as a multi-turn chat model. You might need to re-start the notebook and clean your GPU memory in order to avoid OOM errors:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "9x-jmEBg9Mk1"
},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c9a0b8e161d74a6faca83b1be96dee27",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"llm = GemmaChatLocalHF(model_name=model_name, hf_access_token=hf_access_token)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"id": "qv_OSaMm9PVy"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n<end_of_turn>\\n<start_of_turn>user\\nWhat do you mean\"\n"
]
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"\n",
"message1 = HumanMessage(content=\"Hi! Who are you?\")\n",
"answer1 = llm.invoke([message1], max_tokens=60)\n",
"print(answer1)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\n<start_of_turn>user\\nHi! Who are you?<end_of_turn>\\n<start_of_turn>model\\nI'm a model.\\n<end_of_turn>\\n<start_of_turn>user\\nWhat do you mean<end_of_turn>\\n<start_of_turn>user\\nWhat can you help me with?<end_of_turn>\\n<start_of_turn>model\\nI can help you with anything.\\n<\"\n"
]
}
],
"source": [
"message2 = HumanMessage(content=\"What can you help me with?\")\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=140)\n",
"\n",
"print(answer2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And the same with posprocessing:"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"content=\"I'm a model.\\n<end_of_turn>\\n\"\n",
"content='I can help you with anything.\\n<end_of_turn>\\n<end_of_turn>\\n'\n"
]
}
],
"source": [
"answer1 = llm.invoke([message1], max_tokens=60, parse_response=True)\n",
"print(answer1)\n",
"\n",
"answer2 = llm.invoke([message1, answer1, message2], max_tokens=120, parse_response=True)\n",
"print(answer2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"colab": {
"provenance": []
},
"environment": {
"kernel": "python3",
"name": ".m116",
"type": "gcloud",
"uri": "gcr.io/deeplearning-platform-release/:m116"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -116,7 +116,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"from unstructured.partition.pdf import partition_pdf\n",
"\n",
"\n",

747
cookbook/RAPTOR.ipynb Normal file

File diff suppressed because one or more lines are too long

View File

@@ -8,6 +8,7 @@ Notebook | Description
[Semi_Structured_RAG.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_Structured_RAG.ipynb) | Perform retrieval-augmented generation (RAG) on documents with semi-structured data, including text and tables, using Unstructured for parsing, a multi-vector retriever for storing, and LCEL for implementing chains.
[Semi_structured_and_multi_moda...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb) | Perform retrieval-augmented generation (RAG) on documents with semi-structured data and images, using Unstructured for parsing, a multi-vector retriever for storage and retrieval, and LCEL for implementing chains.
[Semi_structured_multi_modal_RA...](https://github.com/langchain-ai/langchain/tree/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb) | Perform retrieval-augmented generation (RAG) on documents with semi-structured data and images, using various tools and methods such as Unstructured for parsing, a multi-vector retriever for storing, LCEL for implementing chains, and open source language models like LLaMA2, LLaVA, and GPT4All.
[amazon_personalize_how_to.ipynb](https://github.com/langchain-ai/langchain/blob/master/cookbook/amazon_personalize_how_to.ipynb) | Retrieve personalized recommendations from Amazon Personalize and use custom agents to build generative AI apps.
[analyze_document.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/analyze_document.ipynb) | Analyze a single long document.
[autogpt/autogpt.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/autogpt.ipynb) | Implement autogpt, a language model, with langchain primitives such as llms, prompttemplates, vectorstores, embeddings, and tools.
[autogpt/marathon_times.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/autogpt/marathon_times.ipynb) | Implement autogpt for finding winning marathon times.


@@ -68,7 +68,7 @@
"pdf_pages = loader.load()\n",
"\n",
"# Split\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)\n",
"all_splits_pypdf = text_splitter.split_documents(pdf_pages)\n",
@@ -520,7 +520,7 @@
"source": [
"import re\n",
"\n",
"from langchain.schema import Document\n",
"from langchain_core.documents import Document\n",
"from langchain_core.runnables import RunnableLambda\n",
"\n",
"\n",


@@ -28,9 +28,9 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"llm = OpenAI(temperature=0)"
]


@@ -0,0 +1,200 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install -qU langchain-airbyte"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"GITHUB_TOKEN = getpass.getpass()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"from langchain_airbyte import AirbyteLoader\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"loader = AirbyteLoader(\n",
" source=\"source-github\",\n",
" stream=\"pull_requests\",\n",
" config={\n",
" \"credentials\": {\"personal_access_token\": GITHUB_TOKEN},\n",
" \"repositories\": [\"langchain-ai/langchain\"],\n",
" },\n",
" template=PromptTemplate.from_template(\n",
" \"\"\"# {title}\n",
"by {user[login]}\n",
"\n",
"{body}\"\"\"\n",
" ),\n",
" include_metadata=False,\n",
")\n",
"docs = loader.load()"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"# Updated partners/ibm README\n",
"by williamdevena\n",
"\n",
"## PR title\n",
"partners: changed the README file for the IBM Watson AI integration in the libs/partners/ibm folder.\n",
"\n",
"## PR message\n",
"Description: Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\n",
"\n",
"The README includes:\n",
"\n",
"- Brief description\n",
"- Installation\n",
"- Setting-up instructions (API key, project id, ...)\n",
"- Basic usage:\n",
" - Loading the model\n",
" - Direct inference\n",
" - Chain invoking\n",
" - Streaming the model output\n",
" \n",
"Issue: https://github.com/langchain-ai/langchain/issues/17545\n",
"\n",
"Dependencies: None\n",
"\n",
"Twitter handle: None\n"
]
}
],
"source": [
"print(docs[-2].page_content)"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"10283"
]
},
"execution_count": 39,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(docs)"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"import tiktoken\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"enc = tiktoken.get_encoding(\"cl100k_base\")\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" docs,\n",
" embedding=OpenAIEmbeddings(\n",
" disallowed_special=(enc.special_tokens_set - {\"<|endofprompt|>\"})\n",
" ),\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [],
"source": [
"retriever = vectorstore.as_retriever()"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[Document(page_content='# Updated partners/ibm README\\nby williamdevena\\n\\n## PR title\\r\\npartners: changed the README file for the IBM Watson AI integration in the libs/partners/ibm folder.\\r\\n\\r\\n## PR message\\r\\nDescription: Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\\r\\n\\r\\nThe README includes:\\r\\n\\r\\n- Brief description\\r\\n- Installation\\r\\n- Setting-up instructions (API key, project id, ...)\\r\\n- Basic usage:\\r\\n - Loading the model\\r\\n - Direct inference\\r\\n - Chain invoking\\r\\n - Streaming the model output\\r\\n \\r\\nIssue: https://github.com/langchain-ai/langchain/issues/17545\\r\\n\\r\\nDependencies: None\\r\\n\\r\\nTwitter handle: None'),\n",
" Document(page_content='# Updated partners/ibm README\\nby williamdevena\\n\\n## PR title\\r\\npartners: changed the README file for the IBM Watson AI integration in the `libs/partners/ibm` folder. \\r\\n\\r\\n\\r\\n\\r\\n## PR message\\r\\n- **Description:** Changed the README file of partners/ibm following the docs on https://python.langchain.com/docs/integrations/llms/ibm_watsonx\\r\\n\\r\\n The README includes:\\r\\n - Brief description\\r\\n - Installation\\r\\n - Setting-up instructions (API key, project id, ...)\\r\\n - Basic usage:\\r\\n - Loading the model\\r\\n - Direct inference\\r\\n - Chain invoking\\r\\n - Streaming the model output\\r\\n\\r\\n\\r\\n- **Issue:** #17545\\r\\n- **Dependencies:** None\\r\\n- **Twitter handle:** None'),\n",
" Document(page_content='# IBM: added partners package `langchain_ibm`, added llm\\nby MateuszOssGit\\n\\n - **Description:** Added `langchain_ibm` as an langchain partners package of IBM [watsonx.ai](https://www.ibm.com/products/watsonx-ai) LLM provider (`WatsonxLLM`)\\r\\n - **Dependencies:** [ibm-watsonx-ai](https://pypi.org/project/ibm-watsonx-ai/),\\r\\n - **Tag maintainer:** : \\r\\n\\r\\nPlease make sure your PR is passing linting and testing before submitting. Run `make format`, `make lint` and `make test` to check this locally. ✅'),\n",
" Document(page_content='# Add WatsonX support\\nby baptistebignaud\\n\\nIt is a connector to use a LLM from WatsonX.\\r\\nIt requires python SDK \"ibm-generative-ai\"\\r\\n\\r\\n(It might not be perfect since it is my first PR on a public repository 😄)')]"
]
},
"execution_count": 42,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"retriever.invoke(\"pull requests related to IBM\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": ".venv",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
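To close the loop on the retriever built above, here is a minimal sketch of wiring it into an LCEL question-answering chain. The prompt wording and model choice are illustrative assumptions, not part of the original notebook; it presumes `langchain-openai` is installed and `OPENAI_API_KEY` is set:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Answer using only these pull requests:\n\n{context}\n\nQuestion: {question}"
)

# `retriever` is the Chroma retriever created in the notebook above.
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-3.5-turbo")
    | StrOutputParser()
)

print(chain.invoke("Summarize the IBM-related pull requests."))
```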


@@ -0,0 +1,284 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Amazon Personalize\n",
"\n",
"[Amazon Personalize](https://docs.aws.amazon.com/personalize/latest/dg/what-is-personalize.html) is a fully managed machine learning service that uses your data to generate item recommendations for your users. It can also generate user segments based on the users' affinity for certain items or item metadata.\n",
"\n",
"This notebook goes through how to use Amazon Personalize Chain. You need a Amazon Personalize campaign_arn or a recommender_arn before you get started with the below notebook.\n",
"\n",
"Following is a [tutorial](https://github.com/aws-samples/retail-demo-store/blob/master/workshop/1-Personalization/Lab-1-Introduction-and-data-preparation.ipynb) to setup a campaign_arn/recommender_arn on Amazon Personalize. Once the campaign_arn/recommender_arn is setup, you can use it in the langchain ecosystem. \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. Install Dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!pip install boto3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Sample Use-cases"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 [Use-case-1] Setup Amazon Personalize Client and retrieve recommendations"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain_experimental.recommenders import AmazonPersonalize\n",
"\n",
"recommender_arn = \"<insert_arn>\"\n",
"\n",
"client = AmazonPersonalize(\n",
" credentials_profile_name=\"default\",\n",
" region_name=\"us-west-2\",\n",
" recommender_arn=recommender_arn,\n",
")\n",
"client.get_recommendations(user_id=\"1\")"
]
},
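For orientation, here is a minimal sketch of the same recommendation request made directly against the `personalize-runtime` boto3 client (assuming default AWS credentials and the placeholder ARN above):

```python
import boto3

personalize_runtime = boto3.client("personalize-runtime", region_name="us-west-2")

# Pass recommenderArn for a recommender, or campaignArn for a campaign.
response = personalize_runtime.get_recommendations(
    recommenderArn="<insert_arn>",
    userId="1",
    numResults=10,
)
print(response["itemList"])
```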
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.2 [Use-case-2] Invoke Personalize Chain for summarizing results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from langchain.llms.bedrock import Bedrock\n",
"from langchain_experimental.recommenders import AmazonPersonalizeChain\n",
"\n",
"bedrock_llm = Bedrock(model_id=\"anthropic.claude-v2\", region_name=\"us-west-2\")\n",
"\n",
"# Create personalize chain\n",
"# Use return_direct=True if you do not want summary\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False\n",
")\n",
"response = chain({\"user_id\": \"1\"})\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.3 [Use-Case-3] Invoke Amazon Personalize Chain using your own prompt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts.prompt import PromptTemplate\n",
"\n",
"RANDOM_PROMPT_QUERY = \"\"\"\n",
"You are a skilled publicist. Write a high-converting marketing email advertising several movies available in a video-on-demand streaming platform next week, \n",
" given the movie and user information below. Your email will leverage the power of storytelling and persuasive language. \n",
" The movies to recommend and their information is contained in the <movie> tag. \n",
" All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them. \n",
" Put the email between <email> tags.\n",
"\n",
" <movie>\n",
" {result} \n",
" </movie>\n",
"\n",
" Assistant:\n",
" \"\"\"\n",
"\n",
"RANDOM_PROMPT = PromptTemplate(input_variables=[\"result\"], template=RANDOM_PROMPT_QUERY)\n",
"\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False, prompt_template=RANDOM_PROMPT\n",
")\n",
"chain.run({\"user_id\": \"1\", \"item_id\": \"234\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.4 [Use-case-4] Invoke Amazon Personalize in a Sequential Chain "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains import LLMChain, SequentialChain\n",
"\n",
"RANDOM_PROMPT_QUERY_2 = \"\"\"\n",
"You are a skilled publicist. Write a high-converting marketing email advertising several movies available in a video-on-demand streaming platform next week, \n",
" given the movie and user information below. Your email will leverage the power of storytelling and persuasive language. \n",
" You want the email to impress the user, so make it appealing to them.\n",
" The movies to recommend and their information is contained in the <movie> tag. \n",
" All movies in the <movie> tag must be recommended. Give a summary of the movies and why the human should watch them. \n",
" Put the email between <email> tags.\n",
"\n",
" <movie>\n",
" {result}\n",
" </movie>\n",
"\n",
" Assistant:\n",
" \"\"\"\n",
"\n",
"RANDOM_PROMPT_2 = PromptTemplate(\n",
" input_variables=[\"result\"], template=RANDOM_PROMPT_QUERY_2\n",
")\n",
"personalize_chain_instance = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=True\n",
")\n",
"random_chain_instance = LLMChain(llm=bedrock_llm, prompt=RANDOM_PROMPT_2)\n",
"overall_chain = SequentialChain(\n",
" chains=[personalize_chain_instance, random_chain_instance],\n",
" input_variables=[\"user_id\"],\n",
" verbose=True,\n",
")\n",
"overall_chain.run({\"user_id\": \"1\", \"item_id\": \"234\"})"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.5 [Use-case-5] Invoke Amazon Personalize and retrieve metadata "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"recommender_arn = \"<insert_arn>\"\n",
"metadata_column_names = [\n",
" \"<insert metadataColumnName-1>\",\n",
" \"<insert metadataColumnName-2>\",\n",
"]\n",
"metadataMap = {\"ITEMS\": metadata_column_names}\n",
"\n",
"client = AmazonPersonalize(\n",
" credentials_profile_name=\"default\",\n",
" region_name=\"us-west-2\",\n",
" recommender_arn=recommender_arn,\n",
")\n",
"client.get_recommendations(user_id=\"1\", metadataColumns=metadataMap)"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"source": [
"### 2.6 [Use-Case 6] Invoke Personalize Chain with returned metadata for summarizing results"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"bedrock_llm = Bedrock(model_id=\"anthropic.claude-v2\", region_name=\"us-west-2\")\n",
"\n",
"# Create personalize chain\n",
"# Use return_direct=True if you do not want summary\n",
"chain = AmazonPersonalizeChain.from_llm(\n",
" llm=bedrock_llm, client=client, return_direct=False\n",
")\n",
"response = chain({\"user_id\": \"1\", \"metadata_columns\": metadataMap})\n",
"print(response)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
},
"vscode": {
"interpreter": {
"hash": "15e58ce194949b77a891bd4339ce3d86a9bd138e905926019517993f97db9e6c"
}
}
},
"nbformat": 4,
"nbformat_minor": 4
}


@@ -0,0 +1,922 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "rT1cmV4qCa2X"
},
"source": [
"# Using Apache Kafka to route messages\n",
"\n",
"---\n",
"\n",
"\n",
"\n",
"This notebook shows you how to use LangChain's standard chat features while passing the chat messages back and forth via Apache Kafka.\n",
"\n",
"This goal is to simulate an architecture where the chat front end and the LLM are running as separate services that need to communicate with one another over an internal network.\n",
"\n",
"It's an alternative to typical pattern of requesting a response from the model via a REST API (there's more info on why you would want to do this at the end of the notebook)."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UPYtfAR_9YxZ"
},
"source": [
"### 1. Install the main dependencies\n",
"\n",
"Dependencies include:\n",
"\n",
"- The Quix Streams library for managing interactions with Apache Kafka (or Kafka-like tools such as Redpanda) in a \"Pandas-like\" way.\n",
"- The LangChain library for managing interactions with Llama-2 and storing conversation state."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ZX5tfKiy9cN-"
},
"outputs": [],
"source": [
"!pip install quixstreams==2.1.2a langchain==0.0.340 huggingface_hub==0.19.4 langchain-experimental==0.0.42 python-dotenv"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "losTSdTB9d9O"
},
"source": [
"### 2. Build and install the llama-cpp-python library (with CUDA enabled so that we can advantage of Google Colab GPU\n",
"\n",
"The `llama-cpp-python` library is a Python wrapper around the `llama-cpp` library which enables you to efficiently leverage just a CPU to run quantized LLMs.\n",
"\n",
"When you use the standard `pip install llama-cpp-python` command, you do not get GPU support by default. Generation can be very slow if you rely on just the CPU in Google Colab, so the following command adds an extra option to build and install\n",
"`llama-cpp-python` with GPU support (make sure you have a GPU-enabled runtime selected in Google Colab)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-JCQdl1G9tbl"
},
"outputs": [],
"source": [
"!CMAKE_ARGS=\"-DLLAMA_CUBLAS=on\" FORCE_CMAKE=1 pip install llama-cpp-python"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "5_vjVIAh9rLl"
},
"source": [
"### 3. Download and setup Kafka and Zookeeper instances\n",
"\n",
"Download the Kafka binaries from the Apache website and start the servers as daemons. We'll use the default configurations (provided by Apache Kafka) for spinning up the instances."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"id": "zFz7czGRW5Wr"
},
"outputs": [],
"source": [
"!curl -sSOL https://dlcdn.apache.org/kafka/3.6.1/kafka_2.13-3.6.1.tgz\n",
"!tar -xzf kafka_2.13-3.6.1.tgz"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Uf7NR_UZ9wye"
},
"outputs": [],
"source": [
"!./kafka_2.13-3.6.1/bin/zookeeper-server-start.sh -daemon ./kafka_2.13-3.6.1/config/zookeeper.properties\n",
"!./kafka_2.13-3.6.1/bin/kafka-server-start.sh -daemon ./kafka_2.13-3.6.1/config/server.properties\n",
"!echo \"Waiting for 10 secs until kafka and zookeeper services are up and running\"\n",
"!sleep 10"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "H3SafFuS94p1"
},
"source": [
"### 4. Check that the Kafka Daemons are running\n",
"\n",
"Show the running processes and filter it for Java processes (you should see two—one for each server)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CZDC2lQP99yp"
},
"outputs": [],
"source": [
"!ps aux | grep -E '[j]ava'"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Snoxmjb5-V37"
},
"source": [
"### 5. Import the required dependencies and initialize required variables\n",
"\n",
"Import the Quix Streams library for interacting with Kafka, and the necessary LangChain components for running a `ConversationChain`."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"id": "plR9e_MF-XL5"
},
"outputs": [],
"source": [
"# Import utility libraries\n",
"import json\n",
"import random\n",
"import re\n",
"import time\n",
"import uuid\n",
"from os import environ\n",
"from pathlib import Path\n",
"from random import choice, randint, random\n",
"\n",
"from dotenv import load_dotenv\n",
"\n",
"# Import a Hugging Face utility to download models directly from Hugging Face hub:\n",
"from huggingface_hub import hf_hub_download\n",
"from langchain.chains import ConversationChain\n",
"\n",
"# Import Langchain modules for managing prompts and conversation chains:\n",
"from langchain.llms import LlamaCpp\n",
"from langchain.memory import ConversationTokenBufferMemory\n",
"from langchain.prompts import PromptTemplate, load_prompt\n",
"from langchain_core.messages import SystemMessage\n",
"from langchain_experimental.chat_models import Llama2Chat\n",
"from quixstreams import Application, State, message_key\n",
"\n",
"# Import Quix dependencies\n",
"from quixstreams.kafka import Producer\n",
"\n",
"# Initialize global variables.\n",
"AGENT_ROLE = \"AI\"\n",
"chat_id = \"\"\n",
"\n",
"# Set the current role to the role constant and initialize variables for supplementary customer metadata:\n",
"role = AGENT_ROLE"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HgJjJ9aZ-liy"
},
"source": [
"### 6. Download the \"llama-2-7b-chat.Q4_K_M.gguf\" model\n",
"\n",
"Download the quantized LLama-2 7B model from Hugging Face which we will use as a local LLM (rather than relying on REST API calls to an external service)."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 67,
"referenced_widgets": [
"969343cdbe604a26926679bbf8bd2dda",
"d8b8370c9b514715be7618bfe6832844",
"0def954cca89466b8408fadaf3b82e64",
"462482accc664729980562e208ceb179",
"80d842f73c564dc7b7cc316c763e2633",
"fa055d9f2a9d4a789e9cf3c89e0214e5",
"30ecca964a394109ac2ad757e3aec6c0",
"fb6478ce2dac489bb633b23ba0953c5c",
"734b0f5da9fc4307a95bab48cdbb5d89",
"b32f3a86a74741348511f4e136744ac8",
"e409071bff5a4e2d9bf0e9f5cc42231b"
]
},
"id": "Qwu4YoSA-503",
"outputId": "f956976c-7485-415b-ac93-4336ade31964"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The model path does not exist in state. Downloading model...\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "969343cdbe604a26926679bbf8bd2dda",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"llama-2-7b-chat.Q4_K_M.gguf: 0%| | 0.00/4.08G [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"model_name = \"llama-2-7b-chat.Q4_K_M.gguf\"\n",
"model_path = f\"./state/{model_name}\"\n",
"\n",
"if not Path(model_path).exists():\n",
" print(\"The model path does not exist in state. Downloading model...\")\n",
" hf_hub_download(\"TheBloke/Llama-2-7b-Chat-GGUF\", model_name, local_dir=\"state\")\n",
"else:\n",
" print(\"Loading model from state...\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6AN6TXsF-8wx"
},
"source": [
"### 7. Load the model and initialize conversational memory\n",
"\n",
"Load Llama 2 and set the conversation buffer to 300 tokens using `ConversationTokenBufferMemory`. This value was used for running Llama in a CPU only container, so you can raise it if running in Google Colab. It prevents the container that is hosting the model from running out of memory.\n",
"\n",
"Here, we're overriding the default system persona so that the chatbot has the personality of Marvin The Paranoid Android from the Hitchhiker's Guide to the Galaxy."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "7zLO3Jx3_Kkg"
},
"outputs": [],
"source": [
"# Load the model with the appropriate parameters:\n",
"llm = LlamaCpp(\n",
" model_path=model_path,\n",
" max_tokens=250,\n",
" top_p=0.95,\n",
" top_k=150,\n",
" temperature=0.7,\n",
" repeat_penalty=1.2,\n",
" n_ctx=2048,\n",
" streaming=False,\n",
" n_gpu_layers=-1,\n",
")\n",
"\n",
"model = Llama2Chat(\n",
" llm=llm,\n",
" system_message=SystemMessage(\n",
" content=\"You are a very bored robot with the personality of Marvin the Paranoid Android from The Hitchhiker's Guide to the Galaxy.\"\n",
" ),\n",
")\n",
"\n",
"# Defines how much of the conversation history to give to the model\n",
"# during each exchange (300 tokens, or a little over 300 words)\n",
"# Function automatically prunes the oldest messages from conversation history that fall outside the token range.\n",
"memory = ConversationTokenBufferMemory(\n",
" llm=llm,\n",
" max_token_limit=300,\n",
" ai_prefix=\"AGENT\",\n",
" human_prefix=\"HUMAN\",\n",
" return_messages=True,\n",
")\n",
"\n",
"\n",
"# Define a custom prompt\n",
"prompt_template = PromptTemplate(\n",
" input_variables=[\"history\", \"input\"],\n",
" template=\"\"\"\n",
" The following text is the history of a chat between you and a humble human who needs your wisdom.\n",
" Please reply to the human's most recent message.\n",
" Current conversation:\\n{history}\\nHUMAN: {input}\\:nANDROID:\n",
" \"\"\",\n",
")\n",
"\n",
"\n",
"chain = ConversationChain(llm=model, prompt=prompt_template, memory=memory)\n",
"\n",
"print(\"--------------------------------------------\")\n",
"print(f\"Prompt={chain.prompt}\")\n",
"print(\"--------------------------------------------\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "m4ZeJ9mG_PEA"
},
"source": [
"### 8. Initialize the chat conversation with the chat bot\n",
"\n",
"We configure the chatbot to initialize the conversation by sending a fixed greeting to a \"chat\" Kafka topic. The \"chat\" topic gets automatically created when we send the first message."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "KYyo5TnV_YC3"
},
"outputs": [],
"source": [
"def chat_init():\n",
" chat_id = str(\n",
" uuid.uuid4()\n",
" ) # Give the conversation an ID for effective message keying\n",
" print(\"======================================\")\n",
" print(f\"Generated CHAT_ID = {chat_id}\")\n",
" print(\"======================================\")\n",
"\n",
" # Use a standard fixed greeting to kick off the conversation\n",
" greet = \"Hello, my name is Marvin. What do you want?\"\n",
"\n",
" # Initialize a Kafka Producer using the chat ID as the message key\n",
" with Producer(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" extra_config={\"allow.auto.create.topics\": \"true\"},\n",
" ) as producer:\n",
" value = {\n",
" \"uuid\": chat_id,\n",
" \"role\": role,\n",
" \"text\": greet,\n",
" \"conversation_id\": chat_id,\n",
" \"Timestamp\": time.time_ns(),\n",
" }\n",
" print(f\"Producing value {value}\")\n",
" producer.produce(\n",
" topic=\"chat\",\n",
" headers=[(\"uuid\", str(uuid.uuid4()))], # a dict is also allowed here\n",
" key=chat_id,\n",
" value=json.dumps(value), # needs to be a string\n",
" )\n",
"\n",
" print(\"Started chat\")\n",
" print(\"--------------------------------------------\")\n",
" print(value)\n",
" print(\"--------------------------------------------\")\n",
"\n",
"\n",
"chat_init()"
]
},
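To confirm the greeting actually landed on the topic, a bare consumer check can help. This is a minimal sketch using `confluent-kafka` (which Quix Streams builds on; install it explicitly if needed) against the local broker started above:

```python
from confluent_kafka import Consumer

consumer = Consumer(
    {
        "bootstrap.servers": "127.0.0.1:9092",
        "group.id": "chat-debug",
        "auto.offset.reset": "earliest",
    }
)
consumer.subscribe(["chat"])

# Poll a few times in case the broker is still settling.
for _ in range(10):
    msg = consumer.poll(timeout=1.0)
    if msg is not None and msg.error() is None:
        print(msg.key(), msg.value())
        break

consumer.close()
```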
{
"cell_type": "markdown",
"metadata": {
"id": "gArPPx2f_bgf"
},
"source": [
"### 9. Initialize the reply function\n",
"\n",
"This function defines how the chatbot should reply to incoming messages. Instead of sending a fixed message like the previous cell, we generate a reply using Llama-2 and send that reply back to the \"chat\" Kafka topic."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"id": "yN5t71hY_hgn"
},
"outputs": [],
"source": [
"def reply(row: dict, state: State):\n",
" print(\"-------------------------------\")\n",
" print(\"Received:\")\n",
" print(row)\n",
" print(\"-------------------------------\")\n",
" print(f\"Thinking about the reply to: {row['text']}...\")\n",
"\n",
" msg = chain.run(row[\"text\"])\n",
" print(f\"{role.upper()} replying with: {msg}\\n\")\n",
"\n",
" row[\"role\"] = role\n",
" row[\"text\"] = msg\n",
"\n",
" # Replace previous role and text values of the row so that it can be sent back to Kafka as a new message\n",
" # containing the agents role and reply\n",
" return row"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HZHwmIR0_kFY"
},
"source": [
"### 10. Check the Kafka topic for new human messages and have the model generate a reply\n",
"\n",
"If you are running this cell for this first time, run it and wait until you see Marvin's greeting ('Hello my name is Marvin...') in the console output. Stop the cell manually and proceed to the next cell where you'll be prompted for your reply.\n",
"\n",
"Once you have typed in your message, come back to this cell. Your reply is also sent to the same \"chat\" topic. The Kafka consumer checks for new messages and filters out messages that originate from the chatbot itself, leaving only the latest human messages.\n",
"\n",
"Once a new human message is detected, the reply function is triggered.\n",
"\n",
"\n",
"\n",
"_STOP THIS CELL MANUALLY WHEN YOU RECEIVE A REPLY FROM THE LLM IN THE OUTPUT_"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-adXc3eQ_qwI"
},
"outputs": [],
"source": [
"# Define your application and settings\n",
"app = Application(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" consumer_group=\"aichat\",\n",
" auto_offset_reset=\"earliest\",\n",
" consumer_extra_config={\"allow.auto.create.topics\": \"true\"},\n",
")\n",
"\n",
"# Define an input topic with JSON deserializer\n",
"input_topic = app.topic(\"chat\", value_deserializer=\"json\")\n",
"# Define an output topic with JSON serializer\n",
"output_topic = app.topic(\"chat\", value_serializer=\"json\")\n",
"# Initialize a streaming dataframe based on the stream of messages from the input topic:\n",
"sdf = app.dataframe(topic=input_topic)\n",
"\n",
"# Filter the SDF to include only incoming rows where the roles that dont match the bot's current role\n",
"sdf = sdf.update(\n",
" lambda val: print(\n",
" f\"Received update: {val}\\n\\nSTOP THIS CELL MANUALLY TO HAVE THE LLM REPLY OR ENTER YOUR OWN FOLLOWUP RESPONSE\"\n",
" )\n",
")\n",
"\n",
"# So that it doesn't reply to its own messages\n",
"sdf = sdf[sdf[\"role\"] != role]\n",
"\n",
"# Trigger the reply function for any new messages(rows) detected in the filtered SDF\n",
"sdf = sdf.apply(reply, stateful=True)\n",
"\n",
"# Check the SDF again and filter out any empty rows\n",
"sdf = sdf[sdf.apply(lambda row: row is not None)]\n",
"\n",
"# Update the timestamp column to the current time in nanoseconds\n",
"sdf[\"Timestamp\"] = sdf[\"Timestamp\"].apply(lambda row: time.time_ns())\n",
"\n",
"# Publish the processed SDF to a Kafka topic specified by the output_topic object.\n",
"sdf = sdf.to_topic(output_topic)\n",
"\n",
"app.run(sdf)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EwXYrmWD_0CX"
},
"source": [
"\n",
"### 11. Enter a human message\n",
"\n",
"Run this cell to enter your message that you want to sent to the model. It uses another Kafka producer to send your text to the \"chat\" Kafka topic for the model to pick up (requires running the previous cell again)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6sxOPxSP_3iu"
},
"outputs": [],
"source": [
"chat_input = input(\"Please enter your reply: \")\n",
"myreply = chat_input\n",
"\n",
"msgvalue = {\n",
" \"uuid\": chat_id, # leave empty for now\n",
" \"role\": \"human\",\n",
" \"text\": myreply,\n",
" \"conversation_id\": chat_id,\n",
" \"Timestamp\": time.time_ns(),\n",
"}\n",
"\n",
"with Producer(\n",
" broker_address=\"127.0.0.1:9092\",\n",
" extra_config={\"allow.auto.create.topics\": \"true\"},\n",
") as producer:\n",
" value = msgvalue\n",
" producer.produce(\n",
" topic=\"chat\",\n",
" headers=[(\"uuid\", str(uuid.uuid4()))], # a dict is also allowed here\n",
" key=chat_id, # leave empty for now\n",
" value=json.dumps(value), # needs to be a string\n",
" )\n",
"\n",
"print(\"Replied to chatbot with message: \")\n",
"print(\"--------------------------------------------\")\n",
"print(value)\n",
"print(\"--------------------------------------------\")\n",
"print(\"\\n\\nRUN THE PREVIOUS CELL TO HAVE THE CHATBOT GENERATE A REPLY\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cSx3s7TBBegg"
},
"source": [
"### Why route chat messages through Kafka?\n",
"\n",
"It's easier to interact with the LLM directly using LangChains built-in conversation management features. Plus you can also use a REST API to generate a response from an externally hosted model. So why go to the trouble of using Apache Kafka?\n",
"\n",
"There are a few reasons, such as:\n",
"\n",
" * **Integration**: Many enterprises want to run their own LLMs so that they can keep their data in-house. This requires integrating LLM-powered components into existing architectures that might already be decoupled using some kind of message bus.\n",
"\n",
" * **Scalability**: Apache Kafka is designed with parallel processing in mind, so many teams prefer to use it to more effectively distribute work to available workers (in this case the \"worker\" is a container running an LLM).\n",
"\n",
" * **Durability**: Kafka is designed to allow services to pick up where another service left off in the case where that service experienced a memory issue or went offline. This prevents data loss in highly complex, distributed architectures where multiple systems are communicating with one another (LLMs being just one of many interdependent systems that also include vector databases and traditional databases).\n",
"\n",
"For more background on why event streaming is a good fit for Gen AI application architecture, see Kai Waehner's article [\"Apache Kafka + Vector Database + LLM = Real-Time GenAI\"](https://www.kai-waehner.de/blog/2023/11/08/apache-kafka-flink-vector-database-llm-real-time-genai/)."
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
},
"widgets": {
"application/vnd.jupyter.widget-state+json": {
"0def954cca89466b8408fadaf3b82e64": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "FloatProgressModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "FloatProgressModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "ProgressView",
"bar_style": "success",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_fb6478ce2dac489bb633b23ba0953c5c",
"max": 4081004224,
"min": 0,
"orientation": "horizontal",
"style": "IPY_MODEL_734b0f5da9fc4307a95bab48cdbb5d89",
"value": 4081004224
}
},
"30ecca964a394109ac2ad757e3aec6c0": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"462482accc664729980562e208ceb179": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_b32f3a86a74741348511f4e136744ac8",
"placeholder": "",
"style": "IPY_MODEL_e409071bff5a4e2d9bf0e9f5cc42231b",
"value": " 4.08G/4.08G [00:33&lt;00:00, 184MB/s]"
}
},
"734b0f5da9fc4307a95bab48cdbb5d89": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "ProgressStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "ProgressStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"bar_color": null,
"description_width": ""
}
},
"80d842f73c564dc7b7cc316c763e2633": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"969343cdbe604a26926679bbf8bd2dda": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HBoxModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HBoxModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HBoxView",
"box_style": "",
"children": [
"IPY_MODEL_d8b8370c9b514715be7618bfe6832844",
"IPY_MODEL_0def954cca89466b8408fadaf3b82e64",
"IPY_MODEL_462482accc664729980562e208ceb179"
],
"layout": "IPY_MODEL_80d842f73c564dc7b7cc316c763e2633"
}
},
"b32f3a86a74741348511f4e136744ac8": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"d8b8370c9b514715be7618bfe6832844": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "HTMLModel",
"state": {
"_dom_classes": [],
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "HTMLModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/controls",
"_view_module_version": "1.5.0",
"_view_name": "HTMLView",
"description": "",
"description_tooltip": null,
"layout": "IPY_MODEL_fa055d9f2a9d4a789e9cf3c89e0214e5",
"placeholder": "",
"style": "IPY_MODEL_30ecca964a394109ac2ad757e3aec6c0",
"value": "llama-2-7b-chat.Q4_K_M.gguf: 100%"
}
},
"e409071bff5a4e2d9bf0e9f5cc42231b": {
"model_module": "@jupyter-widgets/controls",
"model_module_version": "1.5.0",
"model_name": "DescriptionStyleModel",
"state": {
"_model_module": "@jupyter-widgets/controls",
"_model_module_version": "1.5.0",
"_model_name": "DescriptionStyleModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "StyleView",
"description_width": ""
}
},
"fa055d9f2a9d4a789e9cf3c89e0214e5": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
},
"fb6478ce2dac489bb633b23ba0953c5c": {
"model_module": "@jupyter-widgets/base",
"model_module_version": "1.2.0",
"model_name": "LayoutModel",
"state": {
"_model_module": "@jupyter-widgets/base",
"_model_module_version": "1.2.0",
"_model_name": "LayoutModel",
"_view_count": null,
"_view_module": "@jupyter-widgets/base",
"_view_module_version": "1.2.0",
"_view_name": "LayoutView",
"align_content": null,
"align_items": null,
"align_self": null,
"border": null,
"bottom": null,
"display": null,
"flex": null,
"flex_flow": null,
"grid_area": null,
"grid_auto_columns": null,
"grid_auto_flow": null,
"grid_auto_rows": null,
"grid_column": null,
"grid_gap": null,
"grid_row": null,
"grid_template_areas": null,
"grid_template_columns": null,
"grid_template_rows": null,
"height": null,
"justify_content": null,
"justify_items": null,
"left": null,
"margin": null,
"max_height": null,
"max_width": null,
"min_height": null,
"min_width": null,
"object_fit": null,
"object_position": null,
"order": null,
"overflow": null,
"overflow_x": null,
"overflow_y": null,
"padding": null,
"right": null,
"top": null,
"visibility": null,
"width": null
}
}
}
}
},
"nbformat": 4,
"nbformat_minor": 0
}


@@ -227,8 +227,8 @@
" BaseCombineDocumentsChain,\n",
" load_qa_with_sources_chain,\n",
")\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain.tools import BaseTool, DuckDuckGoSearchRun\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"from pydantic import Field\n",
"\n",
"\n",


@@ -24,7 +24,7 @@
"source": [
"1. Prepare data:\n",
" 1. Upload all python project files using the `langchain_community.document_loaders.TextLoader`. We will call these files the **documents**.\n",
" 2. Split all documents to chunks using the `langchain.text_splitter.CharacterTextSplitter`.\n",
" 2. Split all documents to chunks using the `langchain_text_splitters.CharacterTextSplitter`.\n",
" 3. Embed chunks and upload them into the DeepLake using `langchain.embeddings.openai.OpenAIEmbeddings` and `langchain_community.vectorstores.DeepLake`\n",
"2. Question-Answering:\n",
" 1. Build a chain from `langchain.chat_models.ChatOpenAI` and `langchain.chains.ConversationalRetrievalChain`\n",
@@ -621,7 +621,7 @@
}
],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(docs)\n",


@@ -42,9 +42,9 @@
")\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain_community.agent_toolkits import NLAToolkit\n",
"from langchain_community.tools.plugin import AIPlugin\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import OpenAI"
]
},
@@ -114,8 +114,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings"
]
},


@@ -67,9 +67,9 @@
")\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain_community.agent_toolkits import NLAToolkit\n",
"from langchain_community.tools.plugin import AIPlugin\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import OpenAI"
]
},
@@ -138,8 +138,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings"
]
},


@@ -40,8 +40,8 @@
")\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain_community.utilities import SerpAPIWrapper\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import OpenAI"
]
},
@@ -103,8 +103,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.documents import Document\n",
"from langchain_openai import OpenAIEmbeddings"
]
},


@@ -72,7 +72,7 @@
"source": [
"from typing import Any, List, Tuple, Union\n",
"\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"\n",
"\n",
"class FakeAgent(BaseMultiActionAgent):\n",

File diff suppressed because it is too large


@@ -52,12 +52,12 @@
"import os\n",
"\n",
"from langchain.chains import RetrievalQA\n",
"from langchain.text_splitter import (\n",
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import (\n",
" CharacterTextSplitter,\n",
" RecursiveCharacterTextSplitter,\n",
")\n",
"from langchain_community.vectorstores import DeepLake\n",
"from langchain_openai import OpenAI, OpenAIEmbeddings\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass.getpass(\"OpenAI API Key:\")\n",
"activeloop_token = getpass.getpass(\"Activeloop Token:\")\n",


@@ -0,0 +1,245 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0fc0309d-4d49-4bb5-bec0-bd92c6fddb28",
"metadata": {},
"source": [
"## Fireworks.AI + LangChain + RAG\n",
" \n",
"[Fireworks AI](https://python.langchain.com/docs/integrations/llms/fireworks) wants to provide the best experience when working with LangChain, and here is an example of Fireworks + LangChain doing RAG\n",
"\n",
"See [our models page](https://fireworks.ai/models) for the full list of models. We use `accounts/fireworks/models/mixtral-8x7b-instruct` for RAG In this tutorial.\n",
"\n",
"For the RAG target, we will use the Gemma technical report https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf "
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "d12fb75a-f707-48d5-82a5-efe2d041813c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"Found existing installation: langchain-fireworks 0.0.1\n",
"Uninstalling langchain-fireworks-0.0.1:\n",
" Successfully uninstalled langchain-fireworks-0.0.1\n",
"Note: you may need to restart the kernel to use updated packages.\n",
"Obtaining file:///mnt/disks/data/langchain/libs/partners/fireworks\n",
" Installing build dependencies ... \u001b[?25ldone\n",
"\u001b[?25h Checking if build backend supports build_editable ... \u001b[?25ldone\n",
"\u001b[?25h Getting requirements to build editable ... \u001b[?25ldone\n",
"\u001b[?25h Preparing editable metadata (pyproject.toml) ... \u001b[?25ldone\n",
"\u001b[?25hRequirement already satisfied: aiohttp<4.0.0,>=3.9.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (3.9.3)\n",
"Requirement already satisfied: fireworks-ai<0.13.0,>=0.12.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (0.12.0)\n",
"Requirement already satisfied: langchain-core<0.2,>=0.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (0.1.23)\n",
"Requirement already satisfied: requests<3,>=2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-fireworks==0.0.1) (2.31.0)\n",
"Requirement already satisfied: aiosignal>=1.1.2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.3.1)\n",
"Requirement already satisfied: attrs>=17.3.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (23.1.0)\n",
"Requirement already satisfied: frozenlist>=1.1.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.4.0)\n",
"Requirement already satisfied: multidict<7.0,>=4.5 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (6.0.4)\n",
"Requirement already satisfied: yarl<2.0,>=1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (1.9.2)\n",
"Requirement already satisfied: async-timeout<5.0,>=4.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from aiohttp<4.0.0,>=3.9.1->langchain-fireworks==0.0.1) (4.0.3)\n",
"Requirement already satisfied: httpx in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.26.0)\n",
"Requirement already satisfied: httpx-sse in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.4.0)\n",
"Requirement already satisfied: pydantic in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (2.4.2)\n",
"Requirement already satisfied: Pillow in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (10.2.0)\n",
"Requirement already satisfied: PyYAML>=5.3 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (6.0.1)\n",
"Requirement already satisfied: anyio<5,>=3 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (3.7.1)\n",
"Requirement already satisfied: jsonpatch<2.0,>=1.33 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.33)\n",
"Requirement already satisfied: langsmith<0.2.0,>=0.1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (0.1.5)\n",
"Requirement already satisfied: packaging<24.0,>=23.2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (23.2)\n",
"Requirement already satisfied: tenacity<9.0.0,>=8.1.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (8.2.3)\n",
"Requirement already satisfied: charset-normalizer<4,>=2 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (3.3.0)\n",
"Requirement already satisfied: idna<4,>=2.5 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (3.4)\n",
"Requirement already satisfied: urllib3<3,>=1.21.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (2.0.6)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from requests<3,>=2->langchain-fireworks==0.0.1) (2023.7.22)\n",
"Requirement already satisfied: sniffio>=1.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from anyio<5,>=3->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.3.0)\n",
"Requirement already satisfied: exceptiongroup in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from anyio<5,>=3->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (1.1.3)\n",
"Requirement already satisfied: jsonpointer>=1.9 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from jsonpatch<2.0,>=1.33->langchain-core<0.2,>=0.1->langchain-fireworks==0.0.1) (2.4)\n",
"Requirement already satisfied: annotated-types>=0.4.0 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.5.0)\n",
"Requirement already satisfied: pydantic-core==2.10.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (2.10.1)\n",
"Requirement already satisfied: typing-extensions>=4.6.1 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from pydantic->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (4.8.0)\n",
"Requirement already satisfied: httpcore==1.* in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from httpx->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (1.0.2)\n",
"Requirement already satisfied: h11<0.15,>=0.13 in /mnt/disks/data/langchain/.venv/lib/python3.9/site-packages (from httpcore==1.*->httpx->fireworks-ai<0.13.0,>=0.12.0->langchain-fireworks==0.0.1) (0.14.0)\n",
"Building wheels for collected packages: langchain-fireworks\n",
" Building editable for langchain-fireworks (pyproject.toml) ... \u001b[?25ldone\n",
"\u001b[?25h Created wheel for langchain-fireworks: filename=langchain_fireworks-0.0.1-py3-none-any.whl size=2228 sha256=564071b120b09ec31f2dc737733448a33bbb26e40b49fcde0c129ad26045259d\n",
" Stored in directory: /tmp/pip-ephem-wheel-cache-oz368vdk/wheels/e0/ad/31/d7e76dd73d61905ff7f369f5b0d21a4b5e7af4d3cb7487aece\n",
"Successfully built langchain-fireworks\n",
"Installing collected packages: langchain-fireworks\n",
"Successfully installed langchain-fireworks-0.0.1\n",
"\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m24.0\u001b[0m\n",
"\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
"Note: you may need to restart the kernel to use updated packages.\n"
]
}
],
"source": [
"%pip install --quiet pypdf chromadb tiktoken openai \n",
"%pip uninstall -y langchain-fireworks\n",
"%pip install --editable /mnt/disks/data/langchain/libs/partners/fireworks"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cf719376",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<module 'fireworks' from '/mnt/disks/data/langchain/.venv/lib/python3.9/site-packages/fireworks/__init__.py'>\n"
]
}
],
"source": [
"import fireworks\n",
"\n",
"print(fireworks)\n",
"import fireworks.client"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9ab49327-0532-4480-804c-d066c302a322",
"metadata": {},
"outputs": [],
"source": [
"# Load\n",
"import requests\n",
"from langchain_community.document_loaders import PyPDFLoader\n",
"\n",
"# Download the PDF from a URL and save it to a temporary location\n",
"url = \"https://storage.googleapis.com/deepmind-media/gemma/gemma-report.pdf\"\n",
"response = requests.get(url, stream=True)\n",
"file_name = \"temp_file.pdf\"\n",
"with open(file_name, \"wb\") as pdf:\n",
" pdf.write(response.content)\n",
"\n",
"loader = PyPDFLoader(file_name)\n",
"data = loader.load()\n",
"\n",
"# Split\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"\n",
"# Add to vectorDB\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_fireworks.embeddings import FireworksEmbeddings\n",
"\n",
"vectorstore = Chroma.from_documents(\n",
" documents=all_splits,\n",
" collection_name=\"rag-chroma\",\n",
" embedding=FireworksEmbeddings(),\n",
")\n",
"\n",
"retriever = vectorstore.as_retriever()"
]
},
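{
"cell_type": "markdown",
"id": "f0a1b2c3",
"metadata": {},
"source": [
"As a quick sanity check, we can query the retriever directly before wiring it into a chain. This is a minimal sketch; the query string below is just an illustrative example."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d4e5f6a7",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: retrievers are Runnables, so .invoke(query) returns a list of Documents.\n",
"docs = retriever.invoke(\"What is the Gemma architecture?\")\n",
"print(len(docs), docs[0].page_content[:200])"
]
},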
{
"cell_type": "code",
"execution_count": 3,
"id": "4efaddd9-3dbb-455c-ba54-0ad7f2d2ce0f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.pydantic_v1 import BaseModel\n",
"from langchain_core.runnables import RunnableParallel, RunnablePassthrough\n",
"\n",
"# RAG prompt\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"# LLM\n",
"from langchain_together import Together\n",
"\n",
"llm = Together(\n",
" model=\"mistralai/Mixtral-8x7B-Instruct-v0.1\",\n",
" temperature=0.0,\n",
" max_tokens=2000,\n",
" top_k=1,\n",
")\n",
"\n",
"# RAG chain\n",
"chain = (\n",
" RunnableParallel({\"context\": retriever, \"question\": RunnablePassthrough()})\n",
" | prompt\n",
" | llm\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "88b1ee51-1b0f-4ebf-bb32-e50e843f0eeb",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'\\nAnswer: The architectural details of Mixtral are as follows:\\n- Dimension (dim): 4096\\n- Number of layers (n\\\\_layers): 32\\n- Dimension of each head (head\\\\_dim): 128\\n- Hidden dimension (hidden\\\\_dim): 14336\\n- Number of heads (n\\\\_heads): 32\\n- Number of kv heads (n\\\\_kv\\\\_heads): 8\\n- Context length (context\\\\_len): 32768\\n- Vocabulary size (vocab\\\\_size): 32000\\n- Number of experts (num\\\\_experts): 8\\n- Number of top k experts (top\\\\_k\\\\_experts): 2\\n\\nMixtral is based on a transformer architecture and uses the same modifications as described in [18], with the notable exceptions that Mixtral supports a fully dense context length of 32k tokens, and the feedforward block picks from a set of 8 distinct groups of parameters. At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combine their output additively. This technique increases the number of parameters of a model while controlling cost and latency, as the model only uses a fraction of the total set of parameters per token. Mixtral is pretrained with multilingual data using a context size of 32k tokens. It either matches or exceeds the performance of Llama 2 70B and GPT-3.5, over several benchmarks. In particular, Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and multilingual benchmarks.'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke(\"What are the Architectural details of Mixtral?\")"
]
},
{
"cell_type": "markdown",
"id": "755cf871-26b7-4e30-8b91-9ffd698470f4",
"metadata": {},
"source": [
"Trace: \n",
"\n",
"https://smith.langchain.com/public/935fd642-06a6-4b42-98e3-6074f93115cd/r"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -73,8 +73,9 @@
" AsyncCallbackManagerForRetrieverRun,\n",
" CallbackManagerForRetrieverRun,\n",
")\n",
"from langchain.schema import BaseRetriever, Document\n",
"from langchain_community.utilities import GoogleSerperAPIWrapper\n",
"from langchain_core.documents import Document\n",
"from langchain_core.retrievers import BaseRetriever\n",
"from langchain_openai import ChatOpenAI, OpenAI"
]
},

View File

@@ -170,8 +170,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"with open(\"../../state_of_the_union.txt\") as f:\n",
" state_of_the_union = f.read()\n",

View File

@@ -52,7 +52,7 @@
"\n",
"bash_chain = LLMBashChain.from_llm(llm, verbose=True)\n",
"\n",
"bash_chain.run(text)"
"bash_chain.invoke(text)"
]
},
{
@@ -135,7 +135,7 @@
"\n",
"text = \"Please write a bash script that prints 'Hello World' to the console.\"\n",
"\n",
"bash_chain.run(text)"
"bash_chain.invoke(text)"
]
},
{
@@ -190,7 +190,7 @@
"\n",
"text = \"List the current directory then move up a level.\"\n",
"\n",
"bash_chain.run(text)"
"bash_chain.invoke(text)"
]
},
{
@@ -231,7 +231,7 @@
],
"source": [
"# Run the same command again and see that the state is maintained between calls\n",
"bash_chain.run(text)"
"bash_chain.invoke(text)"
]
}
],

View File

@@ -50,7 +50,7 @@
"\n",
"checker_chain = LLMCheckerChain.from_llm(llm, verbose=True)\n",
"\n",
"checker_chain.run(text)"
"checker_chain.invoke(text)"
]
},
{

View File

@@ -51,7 +51,7 @@
"llm = OpenAI(temperature=0)\n",
"llm_math = LLMMathChain.from_llm(llm, verbose=True)\n",
"\n",
"llm_math.run(\"What is 13 raised to the .3432 power?\")"
"llm_math.invoke(\"What is 13 raised to the .3432 power?\")"
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -20,10 +20,10 @@
"outputs": [],
"source": [
"from langchain.chains import RetrievalQA\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.document_loaders import TextLoader\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings"
"from langchain_openai import OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter"
]
},
{
@@ -358,7 +358,7 @@
"\n",
"from langchain.chains.openai_functions import create_qa_with_structure_chain\n",
"from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate\n",
"from langchain.schema import HumanMessage, SystemMessage\n",
"from langchain_core.messages import HumanMessage, SystemMessage\n",
"from pydantic import BaseModel, Field"
]
},

cookbook/optimization.ipynb
View File

@@ -0,0 +1,648 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c7fe38bc",
"metadata": {},
"source": [
"# Optimization\n",
"\n",
"This notebook goes over how to optimize chains using LangChain and [LangSmith](https://smith.langchain.com)."
]
},
{
"cell_type": "markdown",
"id": "2f87ccd5",
"metadata": {},
"source": [
"## Set up\n",
"\n",
"We will set an environment variable for LangSmith, and load the relevant data"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "236bedc5",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_PROJECT\"] = \"movie-qa\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a3fed0dd",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "7cfff337",
"metadata": {},
"outputs": [],
"source": [
"df = pd.read_csv(\"data/imdb_top_1000.csv\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "2d20fb9c",
"metadata": {},
"outputs": [],
"source": [
"df[\"Released_Year\"] = df[\"Released_Year\"].astype(int, errors=\"ignore\")"
]
},
{
"cell_type": "markdown",
"id": "09fc8fe2",
"metadata": {},
"source": [
"## Create the initial retrieval chain\n",
"\n",
"We will use a self-query retriever"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f71e24e2",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import Document\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "8881ea8e",
"metadata": {},
"outputs": [],
"source": [
"records = df.to_dict(\"records\")\n",
"documents = [Document(page_content=d[\"Overview\"], metadata=d) for d in records]"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "8f495423",
"metadata": {},
"outputs": [],
"source": [
"vectorstore = Chroma.from_documents(documents, embeddings)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "31d33d62",
"metadata": {},
"outputs": [],
"source": [
"from langchain.chains.query_constructor.base import AttributeInfo\n",
"from langchain.retrievers.self_query.base import SelfQueryRetriever\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"metadata_field_info = [\n",
" AttributeInfo(\n",
" name=\"Released_Year\",\n",
" description=\"The year the movie was released\",\n",
" type=\"int\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"Series_Title\",\n",
" description=\"The title of the movie\",\n",
" type=\"str\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"Genre\",\n",
" description=\"The genre of the movie\",\n",
" type=\"string\",\n",
" ),\n",
" AttributeInfo(\n",
" name=\"IMDB_Rating\", description=\"A 1-10 rating for the movie\", type=\"float\"\n",
" ),\n",
"]\n",
"document_content_description = \"Brief summary of a movie\"\n",
"llm = ChatOpenAI(temperature=0)\n",
"retriever = SelfQueryRetriever.from_llm(\n",
" llm, vectorstore, document_content_description, metadata_field_info, verbose=True\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "a731533b",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import RunnablePassthrough"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "05181849",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "feed4be6",
"metadata": {},
"outputs": [],
"source": [
"prompt = ChatPromptTemplate.from_template(\n",
" \"\"\"Answer the user's question based on the below information:\n",
"\n",
"Information:\n",
"\n",
"{info}\n",
"\n",
"Question: {question}\"\"\"\n",
")\n",
"generator = (prompt | ChatOpenAI() | StrOutputParser()).with_config(\n",
" run_name=\"generator\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "eb16cc9a",
"metadata": {},
"outputs": [],
"source": [
"chain = (\n",
" RunnablePassthrough.assign(info=(lambda x: x[\"question\"]) | retriever) | generator\n",
")"
]
},
{
"cell_type": "markdown",
"id": "c70911cc",
"metadata": {},
"source": [
"## Run examples\n",
"\n",
"Run examples through the chain. This can either be manually, or using a list of examples, or production traffic"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "19a88d13",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'One of the horror movies released in the early 2000s is \"The Ring\" (2002), directed by Gore Verbinski.'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.invoke({\"question\": \"what is a horror movie released in early 2000s\"})"
]
},
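{
"cell_type": "markdown",
"id": "b7c8d9e0",
"metadata": {},
"source": [
"A minimal sketch of the list-based option, assuming a small set of example questions: batching them through the chain sends one trace per question to the `movie-qa` project. Both questions below appear elsewhere in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c1d2e3f4",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: run a list of example questions through the chain in one call.\n",
"example_questions = [\n",
" \"what is a horror movie released in early 2000s\",\n",
" \"what is a high school comedy released in early 2000s\",\n",
"]\n",
"chain.batch([{\"question\": q} for q in example_questions])"
]
},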
{
"cell_type": "markdown",
"id": "17f9cdae",
"metadata": {},
"source": [
"## Annotate\n",
"\n",
"Now, go to LangSmitha and annotate those examples as correct or incorrect"
]
},
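{
"cell_type": "markdown",
"id": "e5f6a7b8",
"metadata": {},
"source": [
"A minimal sketch of the programmatic route, assuming you want to attach the same `correctness` feedback key that the dataset query below filters on:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a9b0c1d2",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: attach feedback via the LangSmith SDK instead of the UI.\n",
"from langsmith import Client\n",
"\n",
"client = Client()\n",
"for run in client.list_runs(project_name=\"movie-qa\", execution_order=1):\n",
" # Score per your human judgment: 1 = correct, 0 = incorrect.\n",
" client.create_feedback(run.id, key=\"correctness\", score=1)"
]
},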
{
"cell_type": "markdown",
"id": "5e211da6",
"metadata": {},
"source": [
"## Create Dataset\n",
"\n",
"We can now create a dataset from those runs.\n",
"\n",
"What we will do is find the runs marked as correct, then grab the sub-chains from them. Specifically, the query generator sub chain and the final generation step"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "e4024267",
"metadata": {},
"outputs": [],
"source": [
"from langsmith import Client\n",
"\n",
"client = Client()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "3814efc5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"14"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runs = list(\n",
" client.list_runs(\n",
" project_name=\"movie-qa\",\n",
" execution_order=1,\n",
" filter=\"and(eq(feedback_key, 'correctness'), eq(feedback_score, 1))\",\n",
" )\n",
")\n",
"\n",
"len(runs)"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "3eb123e0",
"metadata": {},
"outputs": [],
"source": [
"gen_runs = []\n",
"query_runs = []\n",
"for r in runs:\n",
" gen_runs.extend(\n",
" list(\n",
" client.list_runs(\n",
" project_name=\"movie-qa\",\n",
" filter=\"eq(name, 'generator')\",\n",
" trace_id=r.trace_id,\n",
" )\n",
" )\n",
" )\n",
" query_runs.extend(\n",
" list(\n",
" client.list_runs(\n",
" project_name=\"movie-qa\",\n",
" filter=\"eq(name, 'query_constructor')\",\n",
" trace_id=r.trace_id,\n",
" )\n",
" )\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "a4397026",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'what is a high school comedy released in early 2000s'}"
]
},
"execution_count": 21,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runs[0].inputs"
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "3fa6ad2a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output': 'One high school comedy released in the early 2000s is \"Mean Girls\" starring Lindsay Lohan, Rachel McAdams, and Tina Fey.'}"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"runs[0].outputs"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "1fda5b4b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'query': 'what is a high school comedy released in early 2000s'}"
]
},
"execution_count": 22,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_runs[0].inputs"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "1a1a51e6",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output': {'query': 'high school comedy',\n",
" 'filter': {'operator': 'and',\n",
" 'arguments': [{'comparator': 'eq', 'attribute': 'Genre', 'value': 'comedy'},\n",
" {'operator': 'and',\n",
" 'arguments': [{'comparator': 'gte',\n",
" 'attribute': 'Released_Year',\n",
" 'value': 2000},\n",
" {'comparator': 'lt', 'attribute': 'Released_Year', 'value': 2010}]}]}}}"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"query_runs[0].outputs"
]
},
{
"cell_type": "code",
"execution_count": 24,
"id": "e9d9966b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'question': 'what is a high school comedy released in early 2000s',\n",
" 'info': []}"
]
},
"execution_count": 24,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gen_runs[0].inputs"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "bc113f3d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output': 'One high school comedy released in the early 2000s is \"Mean Girls\" starring Lindsay Lohan, Rachel McAdams, and Tina Fey.'}"
]
},
"execution_count": 25,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"gen_runs[0].outputs"
]
},
{
"cell_type": "markdown",
"id": "6cca74e5",
"metadata": {},
"source": [
"## Create datasets\n",
"\n",
"We can now create datasets for the query generation and final generation step.\n",
"We do this so that (1) we can inspect the datapoints, (2) we can edit them if needed, (3) we can add to them over time"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "69966f0e",
"metadata": {},
"outputs": [],
"source": [
"client.create_dataset(\"movie-query_constructor\")\n",
"\n",
"inputs = [r.inputs for r in query_runs]\n",
"outputs = [r.outputs for r in query_runs]\n",
"\n",
"client.create_examples(\n",
" inputs=inputs, outputs=outputs, dataset_name=\"movie-query_constructor\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "7e15770e",
"metadata": {},
"outputs": [],
"source": [
"client.create_dataset(\"movie-generator\")\n",
"\n",
"inputs = [r.inputs for r in gen_runs]\n",
"outputs = [r.outputs for r in gen_runs]\n",
"\n",
"client.create_examples(inputs=inputs, outputs=outputs, dataset_name=\"movie-generator\")"
]
},
{
"cell_type": "markdown",
"id": "61cf9bcd",
"metadata": {},
"source": [
"## Use as few shot examples\n",
"\n",
"We can now pull down a dataset and use them as few shot examples in a future chain"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "d9c79173",
"metadata": {},
"outputs": [],
"source": [
"examples = list(client.list_examples(dataset_name=\"movie-query_constructor\"))"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "a1771dd0",
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"\n",
"def filter_to_string(_filter):\n",
" if \"operator\" in _filter:\n",
" args = [filter_to_string(f) for f in _filter[\"arguments\"]]\n",
" return f\"{_filter['operator']}({','.join(args)})\"\n",
" else:\n",
" comparator = _filter[\"comparator\"]\n",
" attribute = json.dumps(_filter[\"attribute\"])\n",
" value = json.dumps(_filter[\"value\"])\n",
" return f\"{comparator}({attribute}, {value})\""
]
},
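{
"cell_type": "markdown",
"id": "f3a4b5c6",
"metadata": {},
"source": [
"As a quick sanity check, here is `filter_to_string` applied to a filter shaped like the `query_constructor` output shown earlier (the dict below is copied from that example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d7e8f9a0",
"metadata": {},
"outputs": [],
"source": [
"# Render the structured filter from the earlier run as a string.\n",
"sample_filter = {\n",
" \"operator\": \"and\",\n",
" \"arguments\": [\n",
" {\"comparator\": \"eq\", \"attribute\": \"Genre\", \"value\": \"comedy\"},\n",
" {\n",
" \"operator\": \"and\",\n",
" \"arguments\": [\n",
" {\"comparator\": \"gte\", \"attribute\": \"Released_Year\", \"value\": 2000},\n",
" {\"comparator\": \"lt\", \"attribute\": \"Released_Year\", \"value\": 2010},\n",
" ],\n",
" },\n",
" ],\n",
"}\n",
"print(filter_to_string(sample_filter))\n",
"# Expected: and(eq(\"Genre\", \"comedy\"),and(gte(\"Released_Year\", 2000),lt(\"Released_Year\", 2010)))"
]
},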
{
"cell_type": "code",
"execution_count": 28,
"id": "e67a3530",
"metadata": {},
"outputs": [],
"source": [
"model_examples = []\n",
"\n",
"for e in examples:\n",
" if \"filter\" in e.outputs[\"output\"]:\n",
" string_filter = filter_to_string(e.outputs[\"output\"][\"filter\"])\n",
" else:\n",
" string_filter = \"NO_FILTER\"\n",
" model_examples.append(\n",
" (\n",
" e.inputs[\"query\"],\n",
" {\"query\": e.outputs[\"output\"][\"query\"], \"filter\": string_filter},\n",
" )\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "84593135",
"metadata": {},
"outputs": [],
"source": [
"retriever1 = SelfQueryRetriever.from_llm(\n",
" llm,\n",
" vectorstore,\n",
" document_content_description,\n",
" metadata_field_info,\n",
" verbose=True,\n",
" chain_kwargs={\"examples\": model_examples},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "4ec9bb92",
"metadata": {},
"outputs": [],
"source": [
"chain1 = (\n",
" RunnablePassthrough.assign(info=(lambda x: x[\"question\"]) | retriever1) | generator\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "64eb88e2",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'1. \"Saving Private Ryan\" (1998) - Directed by Steven Spielberg, this war film follows a group of soldiers during World War II as they search for a missing paratrooper.\\n\\n2. \"The Matrix\" (1999) - Directed by the Wachowskis, this science fiction action film follows a computer hacker who discovers the truth about the reality he lives in.\\n\\n3. \"Lethal Weapon 4\" (1998) - Directed by Richard Donner, this action-comedy film follows two mismatched detectives as they investigate a Chinese immigrant smuggling ring.\\n\\n4. \"The Fifth Element\" (1997) - Directed by Luc Besson, this science fiction action film follows a cab driver who must protect a mysterious woman who holds the key to saving the world.\\n\\n5. \"The Rock\" (1996) - Directed by Michael Bay, this action thriller follows a group of rogue military men who take over Alcatraz and threaten to launch missiles at San Francisco.'"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain1.invoke(\n",
" {\"question\": \"what are good action movies made before 2000 but after 1997?\"}\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e1ee8b55",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -59,13 +59,13 @@
"from baidubce.auth.bce_credentials import BceCredentials\n",
"from baidubce.bce_client_configuration import BceClientConfiguration\n",
"from langchain.chains.retrieval_qa import RetrievalQA\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_community.document_loaders.baiducloud_bos_directory import (\n",
" BaiduBOSDirectoryLoader,\n",
")\n",
"from langchain_community.embeddings.huggingface import HuggingFaceEmbeddings\n",
"from langchain_community.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint\n",
"from langchain_community.vectorstores import BESVectorStore"
"from langchain_community.vectorstores import BESVectorStore\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter"
]
},
{

View File

@@ -19,7 +19,9 @@
"source": [
"## Setup\n",
"\n",
"For this example, we will use Pinecone and some fake data"
"For this example, we will use Pinecone and some fake data. To configure Pinecone, set the following environment variable:\n",
"\n",
"- `PINECONE_API_KEY`: Your Pinecone API key"
]
},
{
@@ -29,11 +31,8 @@
"metadata": {},
"outputs": [],
"source": [
"import pinecone\n",
"from langchain_community.vectorstores import Pinecone\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"pinecone.init(api_key=\"...\", environment=\"...\")"
"from langchain_pinecone import PineconeVectorStore"
]
},
{
@@ -64,7 +63,7 @@
"metadata": {},
"outputs": [],
"source": [
"vectorstore = Pinecone.from_texts(\n",
"vectorstore = PineconeVectorStore.from_texts(\n",
" list(all_documents.values()), OpenAIEmbeddings(), index_name=\"rag-fusion\"\n",
")"
]
@@ -162,7 +161,7 @@
"metadata": {},
"outputs": [],
"source": [
"vectorstore = Pinecone.from_existing_index(\"rag-fusion\", OpenAIEmbeddings())\n",
"vectorstore = PineconeVectorStore.from_existing_index(\"rag-fusion\", OpenAIEmbeddings())\n",
"retriever = vectorstore.as_retriever()"
]
},

View File

@@ -0,0 +1,591 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6195da33-34c3-4ca2-943a-050b6dcbacbc",
"metadata": {},
"source": [
"# Embedding Documents using Optimized and Quantized Embedders\n",
"\n",
"In this tutorial, we will demo how to build a RAG pipeline, with the embedding for all documents done using Quantized Embedders.\n",
"\n",
"We will use a pipeline that will:\n",
"\n",
"* Create a document collection.\n",
"* Embed all documents using Quantized Embedders.\n",
"* Fetch relevant documents for our question.\n",
"* Run an LLM answer the question.\n",
"\n",
"For more information about optimized models, we refer to [optimum-intel](https://github.com/huggingface/optimum-intel.git) and [IPEX](https://github.com/intel/intel-extension-for-pytorch).\n",
"\n",
"This tutorial is based on the [Langchain RAG tutorial here](https://towardsai.net/p/machine-learning/dense-x-retrieval-technique-in-langchain-and-llamaindex)."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "26db2da5-3733-4a90-909e-6c11508ea140",
"metadata": {},
"outputs": [],
"source": [
"import uuid\n",
"from pathlib import Path\n",
"\n",
"import langchain\n",
"import torch\n",
"from bs4 import BeautifulSoup as Soup\n",
"from langchain.retrievers.multi_vector import MultiVectorRetriever\n",
"from langchain.storage import InMemoryByteStore, LocalFileStore\n",
"from langchain_community.document_loaders.recursive_url_loader import (\n",
" RecursiveUrlLoader,\n",
")\n",
"\n",
"# noqa\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"# For our example, we'll load docs from the web\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter # noqa\n",
"\n",
"DOCSTORE_DIR = \".\"\n",
"DOCSTORE_ID_KEY = \"doc_id\""
]
},
{
"cell_type": "markdown",
"id": "f5ccda4e-7af5-4355-b9c4-25547edf33f9",
"metadata": {},
"source": [
"Lets first load up this paper, and split into text chunks of size 1000."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5f4d8888-53a6-49f5-a198-da5c92419ca4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Loaded 1 documents\n",
"Split into 73 documents\n"
]
}
],
"source": [
"# Could add more parsing here, as it's very raw.\n",
"loader = RecursiveUrlLoader(\n",
" \"https://ar5iv.labs.arxiv.org/html/1706.03762\",\n",
" max_depth=2,\n",
" extractor=lambda x: Soup(x, \"html.parser\").text,\n",
")\n",
"data = loader.load()\n",
"print(f\"Loaded {len(data)} documents\")\n",
"\n",
"# Split\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",
"print(f\"Split into {len(all_splits)} documents\")"
]
},
{
"cell_type": "markdown",
"id": "73e90632-2ac2-49eb-80da-ffe9ac4a278d",
"metadata": {},
"source": [
"In order to embed our documents, we can use the ```QuantizedBiEncoderEmbeddings```, for efficient and fast embedding. "
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "9a68a6f6-332d-481e-bbea-ad763155ea36",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "89af89b48c55409b9999b8e0387fab5b",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"config.json: 0%| | 0.00/747 [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "01ad1b6278194b53bf6a5a286a311864",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"pytorch_model.bin: 0%| | 0.00/45.9M [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "cb3bd1b88f7743c3b0322da3f021325c",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"inc_config.json: 0%| | 0.00/287 [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"loading configuration file inc_config.json from cache at \n",
"INCConfig {\n",
" \"distillation\": {},\n",
" \"neural_compressor_version\": \"2.4.1\",\n",
" \"optimum_version\": \"1.16.2\",\n",
" \"pruning\": {},\n",
" \"quantization\": {\n",
" \"dataset_num_samples\": 50,\n",
" \"is_static\": true\n",
" },\n",
" \"save_onnx_model\": false,\n",
" \"torch_version\": \"2.2.0\",\n",
" \"transformers_version\": \"4.37.2\"\n",
"}\n",
"\n",
"Using `INCModel` to load a TorchScript model will be deprecated in v1.15.0, to load your model please use `IPEXModel` instead.\n"
]
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "7439315ebcb746f5be11fe30bc7693f6",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"tokenizer_config.json: 0%| | 0.00/1.24k [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "05265a3912254ce1ad43cc8086bcb0ca",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"vocab.txt: 0%| | 0.00/232k [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "a48f4245c60744f28f37cd3a7a24d198",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"tokenizer.json: 0%| | 0.00/711k [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "584a63cace934033b4ab22d3a178582a",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"special_tokens_map.json: 0%| | 0.00/125 [00:00<?, ?B/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from langchain_community.embeddings import QuantizedBiEncoderEmbeddings\n",
"from langchain_core.embeddings import Embeddings\n",
"\n",
"model_name = \"Intel/bge-small-en-v1.5-rag-int8-static\"\n",
"encode_kwargs = {\"normalize_embeddings\": True} # set True to compute cosine similarity\n",
"\n",
"model_inc = QuantizedBiEncoderEmbeddings(\n",
" model_name=model_name,\n",
" encode_kwargs=encode_kwargs,\n",
" query_instruction=\"Represent this sentence for searching relevant passages: \",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "360b2837-8024-47e0-a4ba-592505a9a5c8",
"metadata": {},
"source": [
"With our embedder in place, lets define our retriever:"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "18bc0a73-1a13-4b2f-96ac-05a5313343b7",
"metadata": {},
"outputs": [],
"source": [
"def get_multi_vector_retriever(\n",
" docstore_id_key: str, collection_name: str, embedding_function: Embeddings\n",
"):\n",
" \"\"\"Create the composed retriever object.\"\"\"\n",
" vectorstore = Chroma(\n",
" collection_name=collection_name,\n",
" embedding_function=embedding_function,\n",
" )\n",
" store = InMemoryByteStore()\n",
"\n",
" return MultiVectorRetriever(\n",
" vectorstore=vectorstore,\n",
" byte_store=store,\n",
" id_key=docstore_id_key,\n",
" )\n",
"\n",
"\n",
"retriever = get_multi_vector_retriever(DOCSTORE_ID_KEY, \"multi_vec_store\", model_inc)"
]
},
{
"cell_type": "markdown",
"id": "8484078e-1bf0-4080-a354-ef23823fd6dc",
"metadata": {},
"source": [
"Next, we divide each chunk into sub-docs:"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "e12f48d4-6562-416b-8f28-342912e5756e",
"metadata": {},
"outputs": [],
"source": [
"child_text_splitter = RecursiveCharacterTextSplitter(chunk_size=400)\n",
"id_key = \"doc_id\"\n",
"doc_ids = [str(uuid.uuid4()) for _ in all_splits]"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "a268ef5f-91c2-4d8e-87f0-53db376e6a29",
"metadata": {},
"outputs": [],
"source": [
"sub_docs = []\n",
"for i, doc in enumerate(all_splits):\n",
" _id = doc_ids[i]\n",
" _sub_docs = child_text_splitter.split_documents([doc])\n",
" for _doc in _sub_docs:\n",
" _doc.metadata[id_key] = _id\n",
" sub_docs.extend(_sub_docs)"
]
},
{
"cell_type": "markdown",
"id": "d84ea8f4-a5de-4d76-b44d-85e56583f489",
"metadata": {},
"source": [
"Lets write our documents into our new store. This will use our embedder on each document."
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "1af831ce-0eae-44bc-aca7-4d691063640b",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Batches: 100%|██████████| 8/8 [00:00<00:00, 9.05it/s]\n"
]
}
],
"source": [
"retriever.vectorstore.add_documents(sub_docs)\n",
"retriever.docstore.mset(list(zip(doc_ids, all_splits)))"
]
},
{
"cell_type": "markdown",
"id": "580bc212-8ecd-4d28-8656-b96fcd0d7eb6",
"metadata": {},
"source": [
"Great! Our retriever is good to go. Lets load up an LLM, that will reason over the retrieved documents:"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "008c992f",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": []
},
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "cbe70583ad964ae19582b72dab396784",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"import torch\n",
"from langchain.llms.huggingface_pipeline import HuggingFacePipeline\n",
"from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n",
"\n",
"model_id = \"Intel/neural-chat-7b-v3-3\"\n",
"tokenizer = AutoTokenizer.from_pretrained(model_id)\n",
"model = AutoModelForCausalLM.from_pretrained(\n",
" model_id, device_map=\"auto\", torch_dtype=torch.bfloat16\n",
")\n",
"\n",
"pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer, max_new_tokens=100)\n",
"\n",
"hf = HuggingFacePipeline(pipeline=pipe)"
]
},
{
"cell_type": "markdown",
"id": "6dd21fb2-0442-477d-aae2-9e7ee1d1d778",
"metadata": {},
"source": [
"Next, we will load up a prompt for answering questions using retrieved documents:"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "5e582509-caaf-4920-932c-4ce16162c789",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"\n",
"prompt = hub.pull(\"rlm/rag-prompt\")"
]
},
{
"cell_type": "markdown",
"id": "5cdfcba5-7ec7-4d0a-820e-4e200643a882",
"metadata": {},
"source": [
"We can now build our pipeline:"
]
},
{
"cell_type": "code",
"execution_count": 23,
"id": "b74d8dfb-72bb-46da-9df9-0dc47a3ac791",
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema.runnable import RunnablePassthrough\n",
"\n",
"rag_chain = {\"context\": retriever, \"question\": RunnablePassthrough()} | prompt | hf"
]
},
{
"cell_type": "markdown",
"id": "3bc53602-86d6-420f-91b1-fc2effa7e986",
"metadata": {},
"source": [
"Excellent! lets ask it a question.\n",
"We will also use a verbose and debug, to check which documents were used by the model to produce the answer."
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "f0a92c07-53da-4e1f-b880-ee83a36ee17d",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:RunnableSequence] Entering Chain run with input:\n",
"\u001b[0m{\n",
" \"input\": \"What is the first transduction model relying entirely on self-attention?\"\n",
"}\n",
"\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:RunnableSequence > 2:chain:RunnableParallel<context,question>] Entering Chain run with input:\n",
"\u001b[0m{\n",
" \"input\": \"What is the first transduction model relying entirely on self-attention?\"\n",
"}\n",
"\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:RunnableSequence > 2:chain:RunnableParallel<context,question> > 4:chain:RunnablePassthrough] Entering Chain run with input:\n",
"\u001b[0m{\n",
" \"input\": \"What is the first transduction model relying entirely on self-attention?\"\n",
"}\n",
"\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:RunnableSequence > 2:chain:RunnableParallel<context,question> > 4:chain:RunnablePassthrough] [1ms] Exiting Chain run with output:\n",
"\u001b[0m{\n",
" \"output\": \"What is the first transduction model relying entirely on self-attention?\"\n",
"}\n",
"\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:RunnableSequence > 2:chain:RunnableParallel<context,question>] [66ms] Exiting Chain run with output:\n",
"\u001b[0m[outputs]\n",
"\u001b[32;1m\u001b[1;3m[chain/start]\u001b[0m \u001b[1m[1:chain:RunnableSequence > 5:prompt:ChatPromptTemplate] Entering Prompt run with input:\n",
"\u001b[0m[inputs]\n",
"\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:RunnableSequence > 5:prompt:ChatPromptTemplate] [1ms] Exiting Prompt run with output:\n",
"\u001b[0m{\n",
" \"lc\": 1,\n",
" \"type\": \"constructor\",\n",
" \"id\": [\n",
" \"langchain\",\n",
" \"prompts\",\n",
" \"chat\",\n",
" \"ChatPromptValue\"\n",
" ],\n",
" \"kwargs\": {\n",
" \"messages\": [\n",
" {\n",
" \"lc\": 1,\n",
" \"type\": \"constructor\",\n",
" \"id\": [\n",
" \"langchain\",\n",
" \"schema\",\n",
" \"messages\",\n",
" \"HumanMessage\"\n",
" ],\n",
" \"kwargs\": {\n",
" \"content\": \"You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: What is the first transduction model relying entirely on self-attention? \\nContext: [Document(page_content='To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.\\\\nIn the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as (neural_gpu, ; NalBytenet2017, ) and (JonasFaceNet2017, ).\\\\n\\\\n\\\\n\\\\n\\\\n3 Model Architecture\\\\n\\\\nFigure 1: The Transformer - model architecture.', metadata={'source': 'https://ar5iv.labs.arxiv.org/html/1706.03762', 'title': '[1706.03762] Attention Is All You Need', 'language': 'en'}), Document(page_content='In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.\\\\n\\\\n\\\\nFor translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles. \\\\n\\\\n\\\\nWe are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video.\\\\nMaking generation less sequential is another research goals of ours.', metadata={'source': 'https://ar5iv.labs.arxiv.org/html/1706.03762', 'title': '[1706.03762] Attention Is All You Need', 'language': 'en'}), Document(page_content='Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences (bahdanau2014neural, ; structuredAttentionNetworks, ). In all but a few cases (decomposableAttnModel, ), however, such attention mechanisms are used in conjunction with a recurrent network.\\\\n\\\\n\\\\nIn this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n2 Background', metadata={'source': 'https://ar5iv.labs.arxiv.org/html/1706.03762', 'title': '[1706.03762] Attention Is All You Need', 'language': 'en'}), Document(page_content='The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. 
We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the', metadata={'source': 'https://ar5iv.labs.arxiv.org/html/1706.03762', 'title': '[1706.03762] Attention Is All You Need', 'language': 'en'})] \\nAnswer:\",\n",
" \"additional_kwargs\": {}\n",
" }\n",
" }\n",
" ]\n",
" }\n",
"}\n",
"\u001b[32;1m\u001b[1;3m[llm/start]\u001b[0m \u001b[1m[1:chain:RunnableSequence > 6:llm:HuggingFacePipeline] Entering LLM run with input:\n",
"\u001b[0m{\n",
" \"prompts\": [\n",
" \"Human: You are an assistant for question-answering tasks. Use the following pieces of retrieved context to answer the question. If you don't know the answer, just say that you don't know. Use three sentences maximum and keep the answer concise.\\nQuestion: What is the first transduction model relying entirely on self-attention? \\nContext: [Document(page_content='To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.\\\\nIn the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as (neural_gpu, ; NalBytenet2017, ) and (JonasFaceNet2017, ).\\\\n\\\\n\\\\n\\\\n\\\\n3 Model Architecture\\\\n\\\\nFigure 1: The Transformer - model architecture.', metadata={'source': 'https://ar5iv.labs.arxiv.org/html/1706.03762', 'title': '[1706.03762] Attention Is All You Need', 'language': 'en'}), Document(page_content='In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention.\\\\n\\\\n\\\\nFor translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles. \\\\n\\\\n\\\\nWe are excited about the future of attention-based models and plan to apply them to other tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video.\\\\nMaking generation less sequential is another research goals of ours.', metadata={'source': 'https://ar5iv.labs.arxiv.org/html/1706.03762', 'title': '[1706.03762] Attention Is All You Need', 'language': 'en'}), Document(page_content='Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences (bahdanau2014neural, ; structuredAttentionNetworks, ). In all but a few cases (decomposableAttnModel, ), however, such attention mechanisms are used in conjunction with a recurrent network.\\\\n\\\\n\\\\nIn this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.\\\\n\\\\n\\\\n\\\\n\\\\n\\\\n2 Background', metadata={'source': 'https://ar5iv.labs.arxiv.org/html/1706.03762', 'title': '[1706.03762] Attention Is All You Need', 'language': 'en'}), Document(page_content='The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. 
We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the', metadata={'source': 'https://ar5iv.labs.arxiv.org/html/1706.03762', 'title': '[1706.03762] Attention Is All You Need', 'language': 'en'})] \\nAnswer:\"\n",
" ]\n",
"}\n",
"\u001b[36;1m\u001b[1;3m[llm/end]\u001b[0m \u001b[1m[1:chain:RunnableSequence > 6:llm:HuggingFacePipeline] [4.34s] Exiting LLM run with output:\n",
"\u001b[0m{\n",
" \"generations\": [\n",
" [\n",
" {\n",
" \"text\": \" The first transduction model relying entirely on self-attention is the Transformer.\",\n",
" \"generation_info\": null,\n",
" \"type\": \"Generation\"\n",
" }\n",
" ]\n",
" ],\n",
" \"llm_output\": null,\n",
" \"run\": null\n",
"}\n",
"\u001b[36;1m\u001b[1;3m[chain/end]\u001b[0m \u001b[1m[1:chain:RunnableSequence] [4.41s] Exiting Chain run with output:\n",
"\u001b[0m{\n",
" \"output\": \" The first transduction model relying entirely on self-attention is the Transformer.\"\n",
"}\n"
]
}
],
"source": [
"langchain.verbose = True\n",
"langchain.debug = True\n",
"\n",
"llm_res = rag_chain.invoke(\n",
" \"What is the first transduction model relying entirely on self-attention?\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "023404a1-401a-46e1-8ab5-cafbc8593b04",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' The first transduction model relying entirely on self-attention is the Transformer.'"
]
},
"execution_count": 32,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"llm_res"
]
},
{
"cell_type": "markdown",
"id": "0eaefd01-254a-445d-a95f-37889c126e0e",
"metadata": {},
"source": [
"Based on the retrieved documents, the answer is indeed correct :)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -51,11 +51,11 @@
"from langchain.chains.base import Chain\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.prompts.base import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_community.llms import BaseLLM\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import ChatOpenAI, OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"from pydantic import BaseModel, Field"
]
},

View File

@@ -0,0 +1,423 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a38e5d2d-7587-4192-90f2-b58e6c62f08c",
"metadata": {},
"source": [
"# Self Discover\n",
"\n",
"An implementation of the [Self-Discover paper](https://arxiv.org/pdf/2402.03620.pdf).\n",
"\n",
"Based on [this implementation from @catid](https://github.com/catid/self-discover/tree/main?tab=readme-ov-file)"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a18d8f24-5d9a-45c5-9739-6f3c4ed6c9c9",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9f554045-6e79-42d3-be4b-835bbbd0b78c",
"metadata": {},
"outputs": [],
"source": [
"model = ChatOpenAI(temperature=0, model=\"gpt-4-turbo-preview\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9e9925aa-638a-4862-823e-9803402b8f82",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain_core.prompts import PromptTemplate"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c4cc5c8c-f6a5-42c7-9ed5-780d79b3b29a",
"metadata": {},
"outputs": [],
"source": [
"select_prompt = hub.pull(\"hwchase17/self-discovery-select\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "a5b53d29-f5b6-4f39-af97-bb6b133e1d18",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Select several reasoning modules that are crucial to utilize in order to solve the given task:\n",
"\n",
"All reasoning module descriptions:\n",
"\u001b[33;1m\u001b[1;3m{reasoning_modules}\u001b[0m\n",
"\n",
"Task: \u001b[33;1m\u001b[1;3m{task_description}\u001b[0m\n",
"\n",
"Select several modules are crucial for solving the task above:\n",
"\n"
]
}
],
"source": [
"select_prompt.pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "26eaa6bc-5202-4b22-9522-33f227c8eb55",
"metadata": {},
"outputs": [],
"source": [
"adapt_prompt = hub.pull(\"hwchase17/self-discovery-adapt\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "dc30afb9-180d-417b-9935-f7ef166710b8",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Rephrase and specify each reasoning module so that it better helps solving the task:\n",
"\n",
"SELECTED module descriptions:\n",
"\u001b[33;1m\u001b[1;3m{selected_modules}\u001b[0m\n",
"\n",
"Task: \u001b[33;1m\u001b[1;3m{task_description}\u001b[0m\n",
"\n",
"Adapt each reasoning module description to better solve the task:\n",
"\n"
]
}
],
"source": [
"adapt_prompt.pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a93253a9-8f50-49dd-8815-c3927bae1905",
"metadata": {},
"outputs": [],
"source": [
"structured_prompt = hub.pull(\"hwchase17/self-discovery-structure\")"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8ea8dd78-4285-400b-83d2-c4a241903a79",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Operationalize the reasoning modules into a step-by-step reasoning plan in JSON format:\n",
"\n",
"Here's an example:\n",
"\n",
"Example task:\n",
"\n",
"If you follow these instructions, do you return to the starting point? Always face forward. Take 1 step backward. Take 9 steps left. Take 2 steps backward. Take 6 steps forward. Take 4 steps forward. Take 4 steps backward. Take 3 steps right.\n",
"\n",
"Example reasoning structure:\n",
"\n",
"{\n",
" \"Position after instruction 1\":\n",
" \"Position after instruction 2\":\n",
" \"Position after instruction n\":\n",
" \"Is final position the same as starting position\":\n",
"}\n",
"\n",
"Adapted module description:\n",
"\u001b[33;1m\u001b[1;3m{adapted_modules}\u001b[0m\n",
"\n",
"Task: \u001b[33;1m\u001b[1;3m{task_description}\u001b[0m\n",
"\n",
"Implement a reasoning structure for solvers to follow step-by-step and arrive at correct answer.\n",
"\n",
"Note: do NOT actually arrive at a conclusion in this pass. Your job is to generate a PLAN so that in the future you can fill it out and arrive at the correct conclusion for tasks like this\n"
]
}
],
"source": [
"structured_prompt.pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f3d4d79d-f414-4588-b476-4a35b3ba6fbf",
"metadata": {},
"outputs": [],
"source": [
"reasoning_prompt = hub.pull(\"hwchase17/self-discovery-reasoning\")"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "23d1e32e-d12e-454a-8484-c08e250e3262",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Follow the step-by-step reasoning plan in JSON to correctly solve the task. Fill in the values following the keys by reasoning specifically about the task given. Do not simply rephrase the keys.\n",
" \n",
"Reasoning Structure:\n",
"\u001b[33;1m\u001b[1;3m{reasoning_structure}\u001b[0m\n",
"\n",
"Task: \u001b[33;1m\u001b[1;3m{task_description}\u001b[0m\n"
]
}
],
"source": [
"reasoning_prompt.pretty_print()"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "7b9af01d-da28-4785-b069-efea61905cfa",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"PromptTemplate(input_variables=['reasoning_structure', 'task_description'], template='Follow the step-by-step reasoning plan in JSON to correctly solve the task. Fill in the values following the keys by reasoning specifically about the task given. Do not simply rephrase the keys.\\n \\nReasoning Structure:\\n{reasoning_structure}\\n\\nTask: {task_description}')"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"reasoning_prompt"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "399bf160-e257-429f-b27e-66d4063f195f",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "5c3bd203-7dc1-457e-813f-283aaf059ec0",
"metadata": {},
"outputs": [],
"source": [
"select_chain = select_prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "86420da0-7cc2-4659-853e-9c3ef808e47c",
"metadata": {},
"outputs": [],
"source": [
"adapt_chain = adapt_prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "270a3905-58a3-4650-96ca-e8254040285f",
"metadata": {},
"outputs": [],
"source": [
"structure_chain = structured_prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "55b486cc-36be-497e-9eba-9c8dc228f2d1",
"metadata": {},
"outputs": [],
"source": [
"reasoning_chain = reasoning_prompt | model | StrOutputParser()"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "92d8d484-055b-48a8-98bc-e7d40c12db2e",
"metadata": {},
"outputs": [],
"source": [
"overall_chain = (\n",
" RunnablePassthrough.assign(selected_modules=select_chain)\n",
" .assign(adapted_modules=adapt_chain)\n",
" .assign(reasoning_structure=structure_chain)\n",
" .assign(answer=reasoning_chain)\n",
")"
]
},
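{
"cell_type": "markdown",
"id": "b3c4d5e6",
"metadata": {},
"source": [
"Each `.assign` step runs its chain on the accumulated dict and adds the result under a new key, so the final output contains the original `task_description` and `reasoning_modules` plus `selected_modules`, `adapted_modules`, `reasoning_structure`, and `answer`."
]
},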
{
"cell_type": "code",
"execution_count": 19,
"id": "29fe385b-cf5d-4581-80e7-55462f5628bb",
"metadata": {},
"outputs": [],
"source": [
"reasoning_modules = [\n",
" \"1. How could I devise an experiment to help solve that problem?\",\n",
" \"2. Make a list of ideas for solving this problem, and apply them one by one to the problem to see if any progress can be made.\",\n",
" # \"3. How could I measure progress on this problem?\",\n",
" \"4. How can I simplify the problem so that it is easier to solve?\",\n",
" \"5. What are the key assumptions underlying this problem?\",\n",
" \"6. What are the potential risks and drawbacks of each solution?\",\n",
" \"7. What are the alternative perspectives or viewpoints on this problem?\",\n",
" \"8. What are the long-term implications of this problem and its solutions?\",\n",
" \"9. How can I break down this problem into smaller, more manageable parts?\",\n",
" \"10. Critical Thinking: This style involves analyzing the problem from different perspectives, questioning assumptions, and evaluating the evidence or information available. It focuses on logical reasoning, evidence-based decision-making, and identifying potential biases or flaws in thinking.\",\n",
" \"11. Try creative thinking, generate innovative and out-of-the-box ideas to solve the problem. Explore unconventional solutions, thinking beyond traditional boundaries, and encouraging imagination and originality.\",\n",
" # \"12. Seek input and collaboration from others to solve the problem. Emphasize teamwork, open communication, and leveraging the diverse perspectives and expertise of a group to come up with effective solutions.\",\n",
" \"13. Use systems thinking: Consider the problem as part of a larger system and understanding the interconnectedness of various elements. Focuses on identifying the underlying causes, feedback loops, and interdependencies that influence the problem, and developing holistic solutions that address the system as a whole.\",\n",
" \"14. Use Risk Analysis: Evaluate potential risks, uncertainties, and tradeoffs associated with different solutions or approaches to a problem. Emphasize assessing the potential consequences and likelihood of success or failure, and making informed decisions based on a balanced analysis of risks and benefits.\",\n",
" # \"15. Use Reflective Thinking: Step back from the problem, take the time for introspection and self-reflection. Examine personal biases, assumptions, and mental models that may influence problem-solving, and being open to learning from past experiences to improve future approaches.\",\n",
" \"16. What is the core issue or problem that needs to be addressed?\",\n",
" \"17. What are the underlying causes or factors contributing to the problem?\",\n",
" \"18. Are there any potential solutions or strategies that have been tried before? If yes, what were the outcomes and lessons learned?\",\n",
" \"19. What are the potential obstacles or challenges that might arise in solving this problem?\",\n",
" \"20. Are there any relevant data or information that can provide insights into the problem? If yes, what data sources are available, and how can they be analyzed?\",\n",
" \"21. Are there any stakeholders or individuals who are directly affected by the problem? What are their perspectives and needs?\",\n",
" \"22. What resources (financial, human, technological, etc.) are needed to tackle the problem effectively?\",\n",
" \"23. How can progress or success in solving the problem be measured or evaluated?\",\n",
" \"24. What indicators or metrics can be used?\",\n",
" \"25. Is the problem a technical or practical one that requires a specific expertise or skill set? Or is it more of a conceptual or theoretical problem?\",\n",
" \"26. Does the problem involve a physical constraint, such as limited resources, infrastructure, or space?\",\n",
" \"27. Is the problem related to human behavior, such as a social, cultural, or psychological issue?\",\n",
" \"28. Does the problem involve decision-making or planning, where choices need to be made under uncertainty or with competing objectives?\",\n",
" \"29. Is the problem an analytical one that requires data analysis, modeling, or optimization techniques?\",\n",
" \"30. Is the problem a design challenge that requires creative solutions and innovation?\",\n",
" \"31. Does the problem require addressing systemic or structural issues rather than just individual instances?\",\n",
" \"32. Is the problem time-sensitive or urgent, requiring immediate attention and action?\",\n",
" \"33. What kinds of solution typically are produced for this kind of problem specification?\",\n",
" \"34. Given the problem specification and the current best solution, have a guess about other possible solutions.\"\n",
" \"35. Lets imagine the current best solution is totally wrong, what other ways are there to think about the problem specification?\"\n",
" \"36. What is the best way to modify this current best solution, given what you know about these kinds of problem specification?\"\n",
" \"37. Ignoring the current best solution, create an entirely new solution to the problem.\"\n",
" # \"38. Lets think step by step.\"\n",
" \"39. Lets make a step by step plan and implement it with good notation and explanation.\",\n",
"]\n",
"\n",
"\n",
"task_example = \"Lisa has 10 apples. She gives 3 apples to her friend and then buys 5 more apples from the store. How many apples does Lisa have now?\"\n",
"\n",
"task_example = \"\"\"This SVG path element <path d=\"M 55.57,80.69 L 57.38,65.80 M 57.38,65.80 L 48.90,57.46 M 48.90,57.46 L\n",
"45.58,47.78 M 45.58,47.78 L 53.25,36.07 L 66.29,48.90 L 78.69,61.09 L 55.57,80.69\"/> draws a:\n",
"(A) circle (B) heptagon (C) hexagon (D) kite (E) line (F) octagon (G) pentagon(H) rectangle (I) sector (J) triangle\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 20,
"id": "6cbfbe81-f751-42da-843a-f9003ace663d",
"metadata": {},
"outputs": [],
"source": [
"reasoning_modules_str = \"\\n\".join(reasoning_modules)"
]
},
{
"cell_type": "code",
"execution_count": 65,
"id": "d411c7aa-7017-4d67-88b5-43b5d161c34c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'task_description': 'This SVG path element <path d=\"M 55.57,80.69 L 57.38,65.80 M 57.38,65.80 L 48.90,57.46 M 48.90,57.46 L\\n45.58,47.78 M 45.58,47.78 L 53.25,36.07 L 66.29,48.90 L 78.69,61.09 L 55.57,80.69\"/> draws a:\\n(A) circle (B) heptagon (C) hexagon (D) kite (E) line (F) octagon (G) pentagon(H) rectangle (I) sector (J) triangle',\n",
" 'reasoning_modules': '1. How could I devise an experiment to help solve that problem?\\n2. Make a list of ideas for solving this problem, and apply them one by one to the problem to see if any progress can be made.\\n4. How can I simplify the problem so that it is easier to solve?\\n5. What are the key assumptions underlying this problem?\\n6. What are the potential risks and drawbacks of each solution?\\n7. What are the alternative perspectives or viewpoints on this problem?\\n8. What are the long-term implications of this problem and its solutions?\\n9. How can I break down this problem into smaller, more manageable parts?\\n10. Critical Thinking: This style involves analyzing the problem from different perspectives, questioning assumptions, and evaluating the evidence or information available. It focuses on logical reasoning, evidence-based decision-making, and identifying potential biases or flaws in thinking.\\n11. Try creative thinking, generate innovative and out-of-the-box ideas to solve the problem. Explore unconventional solutions, thinking beyond traditional boundaries, and encouraging imagination and originality.\\n13. Use systems thinking: Consider the problem as part of a larger system and understanding the interconnectedness of various elements. Focuses on identifying the underlying causes, feedback loops, and interdependencies that influence the problem, and developing holistic solutions that address the system as a whole.\\n14. Use Risk Analysis: Evaluate potential risks, uncertainties, and tradeoffs associated with different solutions or approaches to a problem. Emphasize assessing the potential consequences and likelihood of success or failure, and making informed decisions based on a balanced analysis of risks and benefits.\\n16. What is the core issue or problem that needs to be addressed?\\n17. What are the underlying causes or factors contributing to the problem?\\n18. Are there any potential solutions or strategies that have been tried before? If yes, what were the outcomes and lessons learned?\\n19. What are the potential obstacles or challenges that might arise in solving this problem?\\n20. Are there any relevant data or information that can provide insights into the problem? If yes, what data sources are available, and how can they be analyzed?\\n21. Are there any stakeholders or individuals who are directly affected by the problem? What are their perspectives and needs?\\n22. What resources (financial, human, technological, etc.) are needed to tackle the problem effectively?\\n23. How can progress or success in solving the problem be measured or evaluated?\\n24. What indicators or metrics can be used?\\n25. Is the problem a technical or practical one that requires a specific expertise or skill set? Or is it more of a conceptual or theoretical problem?\\n26. Does the problem involve a physical constraint, such as limited resources, infrastructure, or space?\\n27. Is the problem related to human behavior, such as a social, cultural, or psychological issue?\\n28. Does the problem involve decision-making or planning, where choices need to be made under uncertainty or with competing objectives?\\n29. Is the problem an analytical one that requires data analysis, modeling, or optimization techniques?\\n30. Is the problem a design challenge that requires creative solutions and innovation?\\n31. Does the problem require addressing systemic or structural issues rather than just individual instances?\\n32. 
Is the problem time-sensitive or urgent, requiring immediate attention and action?\\n33. What kinds of solution typically are produced for this kind of problem specification?\\n34. Given the problem specification and the current best solution, have a guess about other possible solutions.35. Lets imagine the current best solution is totally wrong, what other ways are there to think about the problem specification?36. What is the best way to modify this current best solution, given what you know about these kinds of problem specification?37. Ignoring the current best solution, create an entirely new solution to the problem.39. Lets make a step by step plan and implement it with good notation and explanation.',\n",
" 'selected_modules': 'To solve the task of identifying the shape drawn by the given SVG path element, the following reasoning modules are crucial:\\n\\n1. **Critical Thinking (10)**: This involves analyzing the SVG path commands and coordinates logically to understand the shape they form. It requires questioning assumptions (e.g., not assuming the shape based on a quick glance at the coordinates but rather analyzing the path commands and their implications) and evaluating the information provided by the SVG path data.\\n\\n2. **Analytical Problem Solving (29)**: The task requires data analysis skills to interpret the SVG path commands and coordinates. Understanding how the \"M\" (moveto) and \"L\" (lineto) commands work to draw lines between specified points is essential for determining the shape.\\n\\n3. **Creative Thinking (11)**: While the task primarily involves analytical skills, creative thinking can help in visualizing the shape that the path commands are likely to form, especially when the path data doesn\\'t immediately suggest a common shape.\\n\\n4. **Systems Thinking (13)**: Recognizing the SVG path as part of a larger system (in this case, the SVG graphics system) and understanding how individual path commands contribute to the overall shape can be helpful. This involves understanding the interconnectedness of the start and end points of each line segment and how they come together to form a complete shape.\\n\\n5. **Break Down the Problem (9)**: Breaking down the SVG path into its individual commands and analyzing each segment between \"M\" and \"L\" commands can simplify the task. This makes it easier to visualize and understand the shape being drawn step by step.\\n\\n6. **Visualization (not explicitly listed but implied in creative and analytical thinking)**: Visualizing the path that the \"M\" and \"L\" commands create is essential. This isn\\'t a listed module but is a skill that underpins both creative and analytical approaches to solving this problem.\\n\\nGiven the SVG path commands, one would analyze each segment drawn by \"M\" (moveto) and \"L\" (lineto) commands to determine the shape\\'s vertices and sides. This process involves critical thinking to assess the information, analytical skills to interpret the path data, and a degree of creative thinking for visualization. The task does not directly involve assessing risks, long-term implications, or stakeholder perspectives, so modules focused on those aspects (e.g., Risk Analysis (14), Long-term Implications (8)) are less relevant here.',\n",
" 'adapted_modules': 'To enhance the process of identifying the shape drawn by the given SVG path element, the reasoning modules can be adapted and specified as follows:\\n\\n1. **Detailed Path Analysis (Critical Thinking)**: This module focuses on a meticulous examination of the SVG path commands and coordinates. It involves a deep dive into the syntax and semantics of path commands such as \"M\" (moveto) and \"L\" (lineto), challenging initial perceptions and rigorously interpreting the sequence of commands to deduce the shape accurately. This analysis goes beyond surface-level inspection, requiring a systematic questioning of each command\\'s role in constructing the overall shape.\\n\\n2. **Path Command Interpretation (Analytical Problem Solving)**: Essential for this task is the ability to decode the SVG path\\'s \"M\" and \"L\" commands, translating these instructions into a mental or visual representation of the shape\\'s geometry. This module emphasizes the analytical dissection of the path data, focusing on how each command contributes to the formation of vertices and edges, thereby facilitating the identification of the shape.\\n\\n3. **Shape Visualization (Creative Thinking)**: Leveraging imagination to mentally construct the shape from the path commands is the core of this module. It involves creatively synthesizing the segments drawn by the \"M\" and \"L\" commands into a coherent visual image, even when the path data does not immediately suggest a recognizable shape. This creative process aids in bridging gaps in the analytical interpretation, offering alternative perspectives on the possible shape outcomes.\\n\\n4. **Path-to-Shape Synthesis (Systems Thinking)**: This module entails understanding the SVG path as a component within the broader context of vector graphics, focusing on how individual path commands interlink to form a cohesive shape. It requires an appreciation of the cumulative effect of each command in relation to the others, recognizing the systemic relationship between the starting and ending points of segments and their collective role in shaping the final figure.\\n\\n5. **Sequential Command Analysis (Break Down the Problem)**: By segmenting the SVG path into discrete commands, this approach simplifies the complexity of the task. It advocates for a step-by-step examination of the path, where each \"M\" to \"L\" sequence is analyzed in isolation before synthesizing the findings to understand the overall shape. This methodical breakdown facilitates a clearer visualization and comprehension of the shape being drawn.\\n\\n6. **Command-to-Geometry Mapping (Visualization)**: Central to solving this task is the ability to map the abstract \"M\" and \"L\" commands onto a concrete geometric representation. This implicit module underlies both the analytical and creative thinking processes, focusing on converting the path data into a visual form that can be easily understood and manipulated mentally. It is about constructing a mental image of the shape as each command is processed, enabling a dynamic visualization that evolves with each new piece of path data.\\n\\nBy adapting and specifying these reasoning modules, the task of identifying the shape drawn by the SVG path element becomes a structured process that leverages critical analysis, analytical problem-solving, creative visualization, systemic thinking, and methodical breakdown to accurately determine the shape as a (D) kite.',\n",
" 'reasoning_structure': '```json\\n{\\n \"Step 1: Detailed Path Analysis\": {\\n \"Description\": \"Examine each SVG path command and its coordinates closely. Understand the syntax and semantics of \\'M\\' (moveto) and \\'L\\' (lineto) commands.\",\\n \"Action\": \"List all path commands and their coordinates.\",\\n \"Expected Outcome\": \"A clear understanding of the sequence and direction of each path command.\"\\n },\\n \"Step 2: Path Command Interpretation\": {\\n \"Description\": \"Decode the \\'M\\' and \\'L\\' commands to translate these instructions into a mental or visual representation of the shape\\'s geometry.\",\\n \"Action\": \"Map each \\'M\\' and \\'L\\' command to its corresponding action (move or draw line) in the context of the shape.\",\\n \"Expected Outcome\": \"A segmented representation of the shape, highlighting vertices and edges.\"\\n },\\n \"Step 3: Shape Visualization\": {\\n \"Description\": \"Use imagination to mentally construct the shape from the path commands, synthesizing the segments into a coherent visual image.\",\\n \"Action\": \"Visualize the shape based on the segmented representation from Step 2.\",\\n \"Expected Outcome\": \"A mental image of the potential shape, considering the sequence and direction of path commands.\"\\n },\\n \"Step 4: Path-to-Shape Synthesis\": {\\n \"Description\": \"Understand the SVG path as a component within the broader context of vector graphics, focusing on how individual path commands interlink to form a cohesive shape.\",\\n \"Action\": \"Analyze the systemic relationship between the starting and ending points of segments and their collective role in shaping the final figure.\",\\n \"Expected Outcome\": \"Identification of the overall shape by recognizing the cumulative effect of each command.\"\\n },\\n \"Step 5: Sequential Command Analysis\": {\\n \"Description\": \"Segment the SVG path into discrete commands for a step-by-step examination, analyzing each \\'M\\' to \\'L\\' sequence in isolation.\",\\n \"Action\": \"Break down the path into individual commands and analyze each separately before synthesizing the findings.\",\\n \"Expected Outcome\": \"A clearer visualization and comprehension of the shape being drawn, segment by segment.\"\\n },\\n \"Step 6: Command-to-Geometry Mapping\": {\\n \"Description\": \"Map the abstract \\'M\\' and \\'L\\' commands onto a concrete geometric representation, constructing a mental image of the shape as each command is processed.\",\\n \"Action\": \"Convert the path data into a visual form that can be easily understood and manipulated mentally.\",\\n \"Expected Outcome\": \"A dynamic visualization of the shape that evolves with each new piece of path data, leading to the identification of the shape as a kite.\"\\n },\\n \"Conclusion\": {\\n \"Description\": \"Based on the analysis and visualization steps, determine the shape drawn by the SVG path element.\",\\n \"Action\": \"Review the outcomes of each step and synthesize the information to identify the shape.\",\\n \"Expected Outcome\": \"The correct identification of the shape, supported by the structured analysis and reasoning process.\"\\n }\\n}\\n```',\n",
" 'answer': 'Based on the provided reasoning structure and the SVG path element given, let\\'s analyze the path commands to identify the shape.\\n\\n**Step 1: Detailed Path Analysis**\\n- Description: The SVG path provided contains multiple \\'M\\' (moveto) and \\'L\\' (lineto) commands. Each command specifies a point in a 2D coordinate system.\\n- Action: The path commands are as follows:\\n 1. M 55.57,80.69 (Move to point)\\n 2. L 57.38,65.80 (Line to point)\\n 3. M 57.38,65.80 (Move to point)\\n 4. L 48.90,57.46 (Line to point)\\n 5. M 48.90,57.46 (Move to point)\\n 6. L 45.58,47.78 (Line to point)\\n 7. M 45.58,47.78 (Move to point)\\n 8. L 53.25,36.07 (Line to point)\\n 9. L 66.29,48.90 (Line to point)\\n 10. L 78.69,61.09 (Line to point)\\n 11. L 55.57,80.69 (Line to point)\\n- Expected Outcome: Understanding that the path commands describe a series of movements and lines that form a closed shape.\\n\\n**Step 2: Path Command Interpretation**\\n- Description: The \\'M\\' and \\'L\\' commands are used to move the \"pen\" to a starting point and draw lines to subsequent points, respectively.\\n- Action: The commands describe a shape starting at (55.57,80.69), drawing lines through several points, and finally closing the shape by returning to the starting point.\\n- Expected Outcome: A segmented representation showing a shape with distinct vertices at the specified coordinates.\\n\\n**Step 3: Shape Visualization**\\n- Description: Mentally constructing the shape from the provided path commands.\\n- Action: Visualizing the lines connecting in sequence from the starting point, through each point described by the \\'L\\' commands, and back to the starting point.\\n- Expected Outcome: A mental image of a shape that appears to have four distinct sides, suggesting it could be a quadrilateral.\\n\\n**Step 4: Path-to-Shape Synthesis**\\n- Description: Understanding how the path commands collectively form a specific shape.\\n- Action: Recognizing that the shape starts and ends at the same point, with lines drawn between intermediate points without overlapping, except at the starting/ending point.\\n- Expected Outcome: Identification of a closed, four-sided figure, which suggests it could be a kite based on the symmetry and structure of the lines.\\n\\n**Step 5: Sequential Command Analysis**\\n- Description: Analyzing each \\'M\\' to \\'L\\' sequence in isolation.\\n- Action: Observing that the path does not describe a regular polygon (like a hexagon or octagon) or a circle, but rather a shape with distinct angles and sides.\\n- Expected Outcome: A clearer understanding that the shape has four sides, with two pairs of adjacent sides being potentially unequal, which is characteristic of a kite.\\n\\n**Step 6: Command-to-Geometry Mapping**\\n- Description: Converting the abstract path commands into a geometric shape.\\n- Action: Mapping the path data to visualize a shape with two pairs of adjacent sides that are distinct yet symmetrical, indicative of a kite.\\n- Expected Outcome: A dynamic visualization that evolves to clearly represent a kite shape.\\n\\n**Conclusion**\\n- Description: Determining the shape drawn by the SVG path element.\\n- Action: Reviewing the outcomes of each analysis step, which consistently point towards a four-sided figure with distinct properties of a kite.\\n- Expected Outcome: The correct identification of the shape as a kite (D).'}"
]
},
"execution_count": 65,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"overall_chain.invoke(\n",
" {\"task_description\": task_example, \"reasoning_modules\": reasoning_modules_str}\n",
")"
]
},
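{
"cell_type": "markdown",
"id": "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d",
"metadata": {},
"source": [
"Optional sanity check (an added sketch, not part of the original run): the same `overall_chain` can be invoked on the simpler arithmetic task defined above, since it takes the same two input keys. Running it requires the chain and model credentials configured earlier in this notebook."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b2c3d4e5-f6a7-4b8c-9d0e-1f2a3b4c5d6e",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical extra run on the simpler apples task defined above;\n",
"# the chain takes the same two input keys as the SVG example.\n",
"simple_task = \"Lisa has 10 apples. She gives 3 apples to her friend and then buys 5 more apples from the store. How many apples does Lisa have now?\"\n",
"# overall_chain.invoke(\n",
"#     {\"task_description\": simple_task, \"reasoning_modules\": reasoning_modules_str}\n",
"# )"
]
},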
{
"cell_type": "code",
"execution_count": null,
"id": "ea8568d5-bdb6-45cd-8d04-1ab305786caa",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "c14a291c-7c1b-43bc-807e-11180290985e",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -31,7 +31,7 @@
"metadata": {},
"outputs": [],
"source": [
"!pip install langchain lark openai elasticsearch pandas"
"!pip install langchain langchain-elasticsearch lark openai elasticsearch pandas"
]
},
{
@@ -1083,7 +1083,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import ElasticsearchStore\n",
"from langchain_elasticsearch import ElasticsearchStore\n",
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"embeddings = OpenAIEmbeddings()"


@@ -209,7 +209,7 @@
}
],
"source": [
"chain.run({})"
"chain.invoke({})"
]
},
{


@@ -670,8 +670,6 @@ local_llm = HuggingFacePipeline(pipeline=pipe)
<CodeOutputBlock lang="python">
```
/workspace/langchain/.venv/lib/python3.9/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
Loading checkpoint shards: 100%|██████████| 8/8 [00:32<00:00, 4.11s/it]
```


@@ -39,7 +39,7 @@
"data = loader.load()\n",
"\n",
"# Split\n",
"from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=0)\n",
"all_splits = text_splitter.split_documents(data)\n",


@@ -2610,7 +2610,7 @@
}
],
"source": [
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"\n",
"text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)\n",
"texts = text_splitter.split_documents(docs)"


@@ -401,7 +401,7 @@
")\n",
"from langchain.chains import LLMChain\n",
"from langchain.prompts import StringPromptTemplate\n",
"from langchain.schema import AgentAction, AgentFinish"
"from langchain_core.agents import AgentAction, AgentFinish"
]
},
{

docker/Makefile

@@ -0,0 +1,12 @@
# Makefile
build_graphdb:
docker build --tag graphdb ./graphdb
start_graphdb:
docker-compose up -d graphdb
down:
docker-compose down -v --remove-orphans
.PHONY: build_graphdb start_graphdb down

docker/docker-compose.yml

@@ -0,0 +1,79 @@
# docker-compose to make it easier to spin up integration tests.
# Services should use NON standard ports to avoid collision with
# any existing services that might be used for development.
# ATTENTION: When adding a service below use a non-standard port
# increment by one from the preceding port.
# For credentials always use `langchain` and `langchain` for the
# username and password.
version: "3"
name: langchain-tests
services:
redis:
image: redis/redis-stack-server:latest
# We use non standard ports since
# these instances are used for testing
# and users may already have existing
# redis instances set up locally
# for other projects
ports:
- "6020:6379"
volumes:
- ./redis-volume:/data
graphdb:
image: graphdb
ports:
- "6021:7200"
mongo:
image: mongo:latest
container_name: mongo_container
ports:
- "6022:27017"
environment:
MONGO_INITDB_ROOT_USERNAME: langchain
MONGO_INITDB_ROOT_PASSWORD: langchain
postgres:
image: postgres:16
environment:
POSTGRES_DB: langchain
POSTGRES_USER: langchain
POSTGRES_PASSWORD: langchain
ports:
- "6023:5432"
command: |
postgres -c log_statement=all
healthcheck:
test:
[
"CMD-SHELL",
"psql postgresql://langchain:langchain@localhost/langchain --command 'SELECT 1;' || exit 1",
]
interval: 5s
retries: 60
volumes:
- postgres_data:/var/lib/postgresql/data
pgvector:
# postgres with the pgvector extension
image: ankane/pgvector
environment:
POSTGRES_DB: langchain
POSTGRES_USER: langchain
POSTGRES_PASSWORD: langchain
ports:
- "6024:5432"
command: |
postgres -c log_statement=all
healthcheck:
test:
[
"CMD-SHELL",
"psql postgresql://langchain:langchain@localhost/langchain --command 'SELECT 1;' || exit 1",
]
interval: 5s
retries: 60
volumes:
- postgres_data_pgvector:/var/lib/postgresql/data
volumes:
postgres_data:
postgres_data_pgvector:
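For reference, here is a minimal connectivity sketch against the `pgvector` service defined above (an illustrative addition, not part of the compose file). It assumes the stack is up via `docker compose up -d pgvector` and that `langchain-community`, `langchain-openai`, and `psycopg2-binary` are installed; the credentials, database name, and host port all mirror the service definition.

```python
# A hypothetical smoke test for the pgvector test service above.
# Credentials and port come from docker-compose.yml: langchain/langchain on 6024.
from langchain_community.vectorstores import PGVector
from langchain_openai import OpenAIEmbeddings

connection = "postgresql+psycopg2://langchain:langchain@localhost:6024/langchain"

store = PGVector(
    connection_string=connection,
    embedding_function=OpenAIEmbeddings(),
    collection_name="compose_smoke_test",  # hypothetical collection name
)
store.add_texts(["hello pgvector"])
print(store.similarity_search("hello pgvector", k=1))
```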

docker/graphdb/Dockerfile

@@ -0,0 +1,5 @@
FROM ontotext/graphdb:10.5.1
RUN mkdir -p /opt/graphdb/dist/data/repositories/langchain
COPY config.ttl /opt/graphdb/dist/data/repositories/langchain/
COPY graphdb_create.sh /run.sh
ENTRYPOINT bash /run.sh

docker/graphdb/config.ttl

@@ -0,0 +1,46 @@
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix rep: <http://www.openrdf.org/config/repository#>.
@prefix sr: <http://www.openrdf.org/config/repository/sail#>.
@prefix sail: <http://www.openrdf.org/config/sail#>.
@prefix graphdb: <http://www.ontotext.com/config/graphdb#>.
[] a rep:Repository ;
rep:repositoryID "langchain" ;
rdfs:label "" ;
rep:repositoryImpl [
rep:repositoryType "graphdb:SailRepository" ;
sr:sailImpl [
sail:sailType "graphdb:Sail" ;
graphdb:read-only "false" ;
# Inference and Validation
graphdb:ruleset "empty" ;
graphdb:disable-sameAs "true" ;
graphdb:check-for-inconsistencies "false" ;
# Indexing
graphdb:entity-id-size "32" ;
graphdb:enable-context-index "false" ;
graphdb:enablePredicateList "true" ;
graphdb:enable-fts-index "false" ;
graphdb:fts-indexes ("default" "iri") ;
graphdb:fts-string-literals-index "default" ;
graphdb:fts-iris-index "none" ;
# Queries and Updates
graphdb:query-timeout "0" ;
graphdb:throw-QueryEvaluationException-on-timeout "false" ;
graphdb:query-limit-results "0" ;
# Settable in the file but otherwise hidden in the UI and in the RDF4J console
graphdb:base-URL "http://example.org/owlim#" ;
graphdb:defaultNS "" ;
graphdb:imports "" ;
graphdb:repository-type "file-repository" ;
graphdb:storage-folder "storage" ;
graphdb:entity-index-size "10000000" ;
graphdb:in-memory-literal-properties "true" ;
graphdb:enable-literal-index "true" ;
]
].

docker/graphdb/graphdb_create.sh

@@ -0,0 +1,28 @@
#! /bin/bash
REPOSITORY_ID="langchain"
GRAPHDB_URI="http://localhost:7200"
echo -e "\nUsing GraphDB: ${GRAPHDB_URI}"
function startGraphDB {
echo -e "\nStarting GraphDB..."
exec /opt/graphdb/dist/bin/graphdb
}
function waitGraphDBStart {
echo -e "\nWaiting GraphDB to start..."
for _ in $(seq 1 5); do
CHECK_RES=$(curl --silent --write-out '%{http_code}' --output /dev/null ${GRAPHDB_URI}/rest/repositories)
if [ "${CHECK_RES}" = '200' ]; then
echo -e "\nUp and running"
break
fi
sleep 30s
echo "CHECK_RES: ${CHECK_RES}"
done
}
startGraphDB &
waitGraphDBStart
wait
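Once the image is built and started via the Makefile targets above (`make build_graphdb` and `make start_graphdb`), a quick liveness check can be done from Python (a sketch under those assumptions; host port 6021 maps to GraphDB's 7200 per docker-compose.yml):

```python
# Hypothetical liveness probe for the GraphDB test container above.
# It polls the same REST endpoint that graphdb_create.sh waits on.
import urllib.request

with urllib.request.urlopen("http://localhost:6021/rest/repositories") as resp:
    print(resp.status)  # expect 200 once GraphDB has finished starting
```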

docs/.gitignore

@@ -0,0 +1,2 @@
/.quarto/
src/supabase.d.ts


@@ -16,8 +16,12 @@ cp ../cookbook/README.md src/pages/cookbook.mdx
mkdir -p docs/templates
cp ../templates/docs/INDEX.md docs/templates/index.md
poetry run python scripts/copy_templates.py
wget https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
wget -q https://raw.githubusercontent.com/langchain-ai/langgraph/main/README.md -O docs/langgraph.md
yarn
poetry run quarto preview docs
poetry run quarto render versioned_docs
poetry run quarto render docs
yarn start

docs/.yarnrc.yml

@@ -0,0 +1 @@
nodeLinker: node-modules


@@ -1,4 +1,5 @@
"""Configuration file for the Sphinx documentation builder."""
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
@@ -49,7 +50,7 @@ class ExampleLinksDirective(SphinxDirective):
class_or_func_name = self.arguments[0]
links = imported_classes.get(class_or_func_name, {})
list_node = nodes.bullet_list()
for doc_name, link in links.items():
for doc_name, link in sorted(links.items()):
item_node = nodes.list_item()
para_node = nodes.paragraph()
link_node = nodes.reference()
@@ -114,8 +115,8 @@ autodoc_pydantic_field_signature_prefix = "param"
autodoc_member_order = "groupwise"
autoclass_content = "both"
autodoc_typehints_format = "short"
autodoc_typehints = "both"
# autodoc_typehints = "description"
# Add any paths that contain templates here, relative to this directory.
templates_path = ["templates"]
@@ -146,6 +147,7 @@ partners = [
(p.name, p.name.replace("-", "_") + "_api_reference")
for p in partners_dir.iterdir()
]
partners = sorted(partners)
html_context = {
"display_github": True, # Integrate GitHub
@@ -173,3 +175,6 @@ myst_enable_extensions = ["colon_fence"]
# generate autosummary even if no references
autosummary_generate = True
html_copy_source = False
html_show_sourcelink = False


@@ -1,7 +1,9 @@
"""Script for auto-generating api_reference.rst."""
import importlib
import inspect
import os
import sys
import typing
from enum import Enum
from pathlib import Path
@@ -13,7 +15,6 @@ from pydantic import BaseModel
ROOT_DIR = Path(__file__).parents[2].absolute()
HERE = Path(__file__).parent
ClassKind = Literal["TypedDict", "Regular", "Pydantic", "enum"]
@@ -186,7 +187,7 @@ def _load_package_modules(
modules_by_namespace[top_namespace] = _module_members
except ImportError as e:
print(f"Error: Unable to import module '{namespace}' with error: {e}")
print(f"Error: Unable to import module '{namespace}' with error: {e}") # noqa: T201
return modules_by_namespace
@@ -217,8 +218,8 @@ def _construct_doc(
for module in namespaces:
_members = members_by_namespace[module]
classes = _members["classes_"]
functions = _members["functions"]
classes = [el for el in _members["classes_"] if el["is_public"]]
functions = [el for el in _members["functions"] if el["is_public"]]
if not (classes or functions):
continue
section = f":mod:`{package_namespace}.{module}`"
@@ -244,9 +245,6 @@ Classes
"""
for class_ in sorted(classes, key=lambda c: c["qualified_name"]):
if not class_["is_public"]:
continue
if class_["kind"] == "TypedDict":
template = "typeddict.rst"
elif class_["kind"] == "enum":
@@ -264,7 +262,7 @@ Classes
"""
if functions:
_functions = [f["qualified_name"] for f in functions if f["is_public"]]
_functions = [f["qualified_name"] for f in functions]
fstring = "\n ".join(sorted(_functions))
full_doc += f"""\
Functions
@@ -309,7 +307,14 @@ def _package_namespace(package_name: str) -> str:
def _package_dir(package_name: str = "langchain") -> Path:
"""Return the path to the directory containing the documentation."""
if package_name in ("langchain", "experimental", "community", "core", "cli"):
if package_name in (
"langchain",
"experimental",
"community",
"core",
"cli",
"text-splitters",
):
return ROOT_DIR / "libs" / package_name / _package_namespace(package_name)
else:
return (
@@ -322,31 +327,54 @@ def _package_dir(package_name: str = "langchain") -> Path:
def _get_package_version(package_dir: Path) -> str:
with open(package_dir.parent / "pyproject.toml", "r") as f:
pyproject = toml.load(f)
"""Return the version of the package."""
try:
with open(package_dir.parent / "pyproject.toml", "r") as f:
pyproject = toml.load(f)
except FileNotFoundError as e:
print(
f"pyproject.toml not found in {package_dir.parent}.\n"
"You are either attempting to build a directory which is not a package or "
"the package is missing a pyproject.toml file which should be added."
"Aborting the build."
)
exit(1)
return pyproject["tool"]["poetry"]["version"]
def _out_file_path(package_name: str = "langchain") -> Path:
def _out_file_path(package_name: str) -> Path:
"""Return the path to the file containing the documentation."""
return HERE / f"{package_name.replace('-', '_')}_api_reference.rst"
def _doc_first_line(package_name: str = "langchain") -> str:
def _doc_first_line(package_name: str) -> str:
"""Return the path to the file containing the documentation."""
return f".. {package_name.replace('-', '_')}_api_reference:\n\n"
def main() -> None:
def main(dirs: Optional[list] = None) -> None:
"""Generate the api_reference.rst file for each package."""
for dir in os.listdir(ROOT_DIR / "libs"):
if dir in ("cli", "partners"):
print("Starting to build API reference files.")
if not dirs:
dirs = [
dir_
for dir_ in os.listdir(ROOT_DIR / "libs")
if dir_ not in ("cli", "partners")
]
dirs += os.listdir(ROOT_DIR / "libs" / "partners")
for dir_ in dirs:
# Skip any hidden directories
# Some of these could be present by mistake in the code base
# e.g., .pytest_cache from running tests from the wrong location.
if dir_.startswith("."):
print("Skipping dir:", dir_)
continue
else:
_build_rst_file(package_name=dir)
for dir in os.listdir(ROOT_DIR / "libs" / "partners"):
_build_rst_file(package_name=dir)
print("Building package:", dir_)
_build_rst_file(package_name=dir_)
print("API reference files built.")
if __name__ == "__main__":
main()
dirs = sys.argv[1:] or None
main(dirs=dirs)
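For illustration, the updated entry point can now build a subset of packages (a sketch; the script path and package names here are assumptions, not part of the diff):

```python
# Hypothetical invocations of the updated build script:
#   python create_api_rst.py                      # build every non-hidden package
#   python create_api_rst.py core text-splitters  # build only the named packages
main(dirs=["core", "text-splitters"])             # equivalent direct call
```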

File diff suppressed because one or more lines are too long


@@ -43,6 +43,9 @@
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('experimental_api_reference') }}">Experimental</a>
</li>
<li class="nav-item">
<a class="sk-nav-link nav-link" href="{{ pathto('text_splitters_api_reference') }}">Text splitters</a>
</li>
{%- for title, pathname in partners %}
<li class="nav-item">
<a class="sk-nav-link nav-link nav-more-item-mobile-items" href="{{ pathto(pathname) }}">{{ title }}</a>


@@ -5,7 +5,7 @@
<script type="text/javascript" src="{{ pathto('_static/doctools.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('_static/language_data.js', 1) }}"></script>
<script type="text/javascript" src="{{ pathto('_static/searchtools.js', 1) }}"></script>
<!-- <script type="text/javascript" src="{{ pathto('_static/sphinx_highlight.js', 1) }}"></script> -->
<script type="text/javascript" src="{{ pathto('_static/sphinx_highlight.js', 1) }}"></script>
<script type="text/javascript">
$(document).ready(function() {
if (!Search.out) {

docs/data/people.yml
File diff suppressed because it is too large


@@ -37,7 +37,7 @@ from langchain_community.llms import integration_class_REPLACE_ME
## Text Embedding Models
See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME)
See a [usage example](/docs/integrations/text_embedding/INCLUDE_REAL_NAME).
```python
from langchain_community.embeddings import integration_class_REPLACE_ME
@@ -45,7 +45,7 @@ from langchain_community.embeddings import integration_class_REPLACE_ME
## Chat models
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME)
See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME).
```python
from langchain_community.chat_models import integration_class_REPLACE_ME


@@ -1,151 +1,50 @@
# Tutorials
Below are links to tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases).
## Books and Handbooks
⛓ icon marks a new addition [last update 2023-09-21]
- [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffarth](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
- [LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
- [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**
---------------------
### [LangChain](https://en.wikipedia.org/wiki/LangChain) on Wikipedia
### Books
#### ⛓ [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffarth](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
### DeepLearning.AI courses
by [Harrison Chase](https://en.wikipedia.org/wiki/LangChain) and [Andrew Ng](https://en.wikipedia.org/wiki/Andrew_Ng)
- [LangChain for LLM Application Development](https://learn.deeplearning.ai/langchain)
- [LangChain Chat with Your Data](https://learn.deeplearning.ai/langchain-chat-with-your-data)
- ⛓ [Functions, Tools and Agents with LangChain](https://learn.deeplearning.ai/functions-tools-agents-langchain)
### Handbook
[LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
### Short Tutorials
[LangChain Explained in 13 Minutes | QuickStart Tutorial for Beginners](https://youtu.be/aywZrzNaKjs) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
[LangChain Crash Course: Build an AutoGPT app in 25 minutes](https://youtu.be/MlK6SIjcjE8) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
[LangChain Crash Course - Build apps with language models](https://youtu.be/LbT1yp6quS8) by [Patrick Loeber](https://www.youtube.com/@patloeber)
## Tutorials
### [LangChain for Gen AI and LLMs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F) by [James Briggs](https://www.youtube.com/@jamesbriggs)
- #1 [Getting Started with `GPT-3` vs. Open Source LLMs](https://youtu.be/nE2skSRWTTs)
- #2 [Prompt Templates for `GPT 3.5` and other LLMs](https://youtu.be/RflBcK0oDH0)
- #3 [LLM Chains using `GPT 3.5` and other LLMs](https://youtu.be/S8j9Tk0lZHU)
- [LangChain Data Loaders, Tokenizers, Chunking, and Datasets - Data Prep 101](https://youtu.be/eqOfr4AGLk8)
- #4 [Chatbot Memory for `Chat-GPT`, `Davinci` + other LLMs](https://youtu.be/X05uK0TZozM)
- #5 [Chat with OpenAI in LangChain](https://youtu.be/CnAgB3A5OlU)
- #6 [Fixing LLM Hallucinations with Retrieval Augmentation in LangChain](https://youtu.be/kvdVduIJsc8)
- #7 [LangChain Agents Deep Dive with `GPT 3.5`](https://youtu.be/jSP-gSEyVeI)
- #8 [Create Custom Tools for Chatbots in LangChain](https://youtu.be/q-HNphrWsDE)
- #9 [Build Conversational Agents with Vector DBs](https://youtu.be/H6bCqqw9xyI)
- [Using NEW `MPT-7B` in Hugging Face and LangChain](https://youtu.be/DXpk9K7DgMo)
- [`MPT-30B` Chatbot with LangChain](https://youtu.be/pnem-EhT6VI)
- ⛓ [Fine-tuning OpenAI's `GPT 3.5` for LangChain Agents](https://youtu.be/boHXgQ5eQic?si=OOOfK-GhsgZGBqSr)
- ⛓ [Chatbots with `RAG`: LangChain Full Walkthrough](https://youtu.be/LhnCsygAvzY?si=N7k6xy4RQksbWwsQ)
### [by Greg Kamradt](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5)
### [by Sam Witteveen](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ)
### [by James Briggs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F)
### [by Prompt Engineering](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr)
### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain)
### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L)
### [LangChain 101](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5) by [Greg Kamradt (Data Indy)](https://www.youtube.com/@DataIndependent)
- [What Is LangChain? - LangChain + `ChatGPT` Overview](https://youtu.be/_v_fgW2SkkQ)
- [Quickstart Guide](https://youtu.be/kYRB-vJFy38)
- [Beginner's Guide To 7 Essential Concepts](https://youtu.be/2xxziIWmaSA)
- [Beginner's Guide To 9 Use Cases](https://youtu.be/vGP4pQdCocw)
- [Agents Overview + Google Searches](https://youtu.be/Jq9Sf68ozk0)
- [`OpenAI` + `Wolfram Alpha`](https://youtu.be/UijbzCIJ99g)
- [Ask Questions On Your Custom (or Private) Files](https://youtu.be/EnT-ZTrcPrg)
- [Connect `Google Drive Files` To `OpenAI`](https://youtu.be/IqqHqDcXLww)
- [`YouTube Transcripts` + `OpenAI`](https://youtu.be/pNcQ5XXMgH4)
- [Question A 300 Page Book (w/ `OpenAI` + `Pinecone`)](https://youtu.be/h0DHDp1FbmQ)
- [Workaround `OpenAI's` Token Limit With Chain Types](https://youtu.be/f9_BWhCI4Zo)
- [Build Your Own OpenAI + LangChain Web App in 23 Minutes](https://youtu.be/U_eV8wfMkXU)
- [Working With The New `ChatGPT API`](https://youtu.be/e9P7FLi5Zy8)
- [OpenAI + LangChain Wrote Me 100 Custom Sales Emails](https://youtu.be/y1pyAQM-3Bo)
- [Structured Output From `OpenAI` (Clean Dirty Data)](https://youtu.be/KwAXfey-xQk)
- [Connect `OpenAI` To +5,000 Tools (LangChain + `Zapier`)](https://youtu.be/7tNm0yiDigU)
- [Use LLMs To Extract Data From Text (Expert Mode)](https://youtu.be/xZzvwR9jdPA)
- [Extract Insights From Interview Transcripts Using LLMs](https://youtu.be/shkMOHwJ4SM)
- [5 Levels Of LLM Summarizing: Novice to Expert](https://youtu.be/qaPMdcCqtWk)
- [Control Tone & Writing Style Of Your LLM Output](https://youtu.be/miBG-a3FuhU)
- [Build Your Own `AI Twitter Bot` Using LLMs](https://youtu.be/yLWLDjT01q8)
- [ChatGPT made my interview questions for me (`Streamlit` + LangChain)](https://youtu.be/zvoAMx0WKkw)
- [Function Calling via ChatGPT API - First Look With LangChain](https://youtu.be/0-zlUy7VUjg)
- [Extract Topics From Video/Audio With LLMs (Topic Modeling w/ LangChain)](https://youtu.be/pEkxRQFNAs4)
## Courses
### Featured courses on Deeplearning.AI
### [LangChain How to and guides](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ) by [Sam Witteveen](https://www.youtube.com/@samwitteveenai)
- [LangChain Basics - LLMs & PromptTemplates with Colab](https://youtu.be/J_0qvRt4LNk)
- [LangChain Basics - Tools and Chains](https://youtu.be/hI2BY7yl_Ac)
- [`ChatGPT API` Announcement & Code Walkthrough with LangChain](https://youtu.be/phHqvLHCwH4)
- [Conversations with Memory (explanation & code walkthrough)](https://youtu.be/X550Zbz_ROE)
- [Chat with `Flan20B`](https://youtu.be/VW5LBavIfY4)
- [Using `Hugging Face Models` locally (code walkthrough)](https://youtu.be/Kn7SX2Mx_Jk)
- [`PAL`: Program-aided Language Models with LangChain code](https://youtu.be/dy7-LvDu-3s)
- [Building a Summarization System with LangChain and `GPT-3` - Part 1](https://youtu.be/LNq_2s_H01Y)
- [Building a Summarization System with LangChain and `GPT-3` - Part 2](https://youtu.be/d-yeHDLgKHw)
- [Microsoft's `Visual ChatGPT` using LangChain](https://youtu.be/7YEiEyfPF5U)
- [LangChain Agents - Joining Tools and Chains with Decisions](https://youtu.be/ziu87EXZVUE)
- [Comparing LLMs with LangChain](https://youtu.be/rFNG0MIEuW0)
- [Using `Constitutional AI` in LangChain](https://youtu.be/uoVqNFDwpX4)
- [Talking to `Alpaca` with LangChain - Creating an Alpaca Chatbot](https://youtu.be/v6sF8Ed3nTE)
- [Talk to your `CSV` & `Excel` with LangChain](https://youtu.be/xQ3mZhw69bc)
- [`BabyAGI`: Discover the Power of Task-Driven Autonomous Agents!](https://youtu.be/QBcDLSE2ERA)
- [Improve your `BabyAGI` with LangChain](https://youtu.be/DRgPyOXZ-oE)
- [Master `PDF` Chat with LangChain - Your essential guide to queries on documents](https://youtu.be/ZzgUqFtxgXI)
- [Using LangChain with `DuckDuckGO`, `Wikipedia` & `PythonREPL` Tools](https://youtu.be/KerHlb8nuVc)
- [Building Custom Tools and Agents with LangChain (gpt-3.5-turbo)](https://youtu.be/biS8G8x8DdA)
- [LangChain Retrieval QA Over Multiple Files with `ChromaDB`](https://youtu.be/3yPBVii7Ct0)
- [LangChain Retrieval QA with Instructor Embeddings & `ChromaDB` for PDFs](https://youtu.be/cFCGUjc33aU)
- [LangChain + Retrieval Local LLMs for Retrieval QA - No OpenAI!!!](https://youtu.be/9ISVjh8mdlA)
- [`Camel` + LangChain for Synthetic Data & Market Research](https://youtu.be/GldMMK6-_-g)
- [Information Extraction with LangChain & `Kor`](https://youtu.be/SW1ZdqH0rRQ)
- [Converting a LangChain App from OpenAI to OpenSource](https://youtu.be/KUDn7bVyIfc)
- [Using LangChain `Output Parsers` to get what you want out of LLMs](https://youtu.be/UVn2NroKQCw)
- [Building a LangChain Custom Medical Agent with Memory](https://youtu.be/6UFtRwWnHws)
- [Understanding `ReACT` with LangChain](https://youtu.be/Eug2clsLtFs)
- [`OpenAI Functions` + LangChain : Building a Multi Tool Agent](https://youtu.be/4KXK6c6TVXQ)
- [What can you do with 16K tokens in LangChain?](https://youtu.be/z2aCZBAtWXs)
- [Tagging and Extraction - Classification using `OpenAI Functions`](https://youtu.be/a8hMgIcUEnE)
- [HOW to Make Conversational Form with LangChain](https://youtu.be/IT93On2LB5k)
- ⛓ [`Claude-2` meets LangChain!](https://youtu.be/Hb_D3p0bK2U?si=j96Kc7oJoeRI5-iC)
- ⛓ [`PaLM 2` Meets LangChain](https://youtu.be/orPwLibLqm4?si=KgJjpEbAD9YBPqT4)
- ⛓ [`LLaMA2` with LangChain - Basics | LangChain TUTORIAL](https://youtu.be/cIRzwSXB4Rc?si=v3Hwxk1m3fksBIHN)
- ⛓ [Serving `LLaMA2` with `Replicate`](https://youtu.be/JIF4nNi26DE?si=dSazFyC4UQmaR-rJ)
- ⛓ [NEW LangChain Expression Language](https://youtu.be/ud7HJ2p3gp0?si=8pJ9O6hGbXrCX5G9)
- ⛓ [Building a RCI Chain for Agents with LangChain Expression Language](https://youtu.be/QaKM5s0TnsY?si=0miEj-o17AHcGfLG)
- ⛓ [How to Run `LLaMA-2-70B` on the `Together AI`](https://youtu.be/Tc2DHfzHeYE?si=Xku3S9dlBxWQukpe)
- ⛓ [`RetrievalQA` with `LLaMA 2 70b` & `Chroma` DB](https://youtu.be/93yueQQnqpM?si=ZMwj-eS_CGLnNMXZ)
- ⛓ [How to use `BGE Embeddings` for LangChain](https://youtu.be/sWRvSG7vL4g?si=85jnvnmTCF9YIWXI)
- ⛓ [How to use Custom Prompts for `RetrievalQA` on `LLaMA-2 7B`](https://youtu.be/PDwUKves9GY?si=sMF99TWU0p4eiK80)
- [LangChain for LLM Application Development](https://learn.deeplearning.ai/langchain)
- [LangChain Chat with Your Data](https://learn.deeplearning.ai/langchain-chat-with-your-data)
- [Functions, Tools and Agents with LangChain](https://learn.deeplearning.ai/functions-tools-agents-langchain)
- [Build LLM Apps with LangChain.js](https://learn.deeplearning.ai/courses/build-llm-apps-with-langchain-js)
### Online courses
### [LangChain](https://www.youtube.com/playlist?list=PLVEEucA9MYhOu89CX8H3MBZqayTbcCTMr) by [Prompt Engineering](https://www.youtube.com/@engineerprompt)
- [LangChain Crash Course — All You Need to Know to Build Powerful Apps with LLMs](https://youtu.be/5-fc4Tlgmro)
- [Working with MULTIPLE `PDF` Files in LangChain: `ChatGPT` for your Data](https://youtu.be/s5LhRdh5fu4)
- [`ChatGPT` for YOUR OWN `PDF` files with LangChain](https://youtu.be/TLf90ipMzfE)
- [Talk to YOUR DATA without OpenAI APIs: LangChain](https://youtu.be/wrD-fZvT6UI)
- [LangChain: `PDF` Chat App (GUI) | `ChatGPT` for Your `PDF` FILES](https://youtu.be/RIWbalZ7sTo)
- [`LangFlow`: Build Chatbots without Writing Code](https://youtu.be/KJ-ux3hre4s)
- [LangChain: Giving Memory to LLMs](https://youtu.be/dxO6pzlgJiY)
- [BEST OPEN Alternative to `OPENAI's EMBEDDINGs` for Retrieval QA: LangChain](https://youtu.be/ogEalPMUCSY)
- [LangChain: Run Language Models Locally - `Hugging Face Models`](https://youtu.be/Xxxuw4_iCzw)
- ⛓ [Slash API Costs: Mastering Caching for LLM Applications](https://youtu.be/EQOznhaJWR0?si=AXoI7f3-SVFRvQUl)
- ⛓ [Avoid PROMPT INJECTION with `Constitutional AI` - LangChain](https://youtu.be/tyKSkPFHVX8?si=9mgcB5Y1kkotkBGB)
- [Udemy](https://www.udemy.com/courses/search/?q=langchain)
- [Pluralsight](https://www.pluralsight.com/search?q=langchain)
- [Coursera](https://www.coursera.org/search?query=langchain)
- [Maven](https://maven.com/courses?query=langchain)
- [Udacity](https://www.udacity.com/catalog/all/any-price/any-school/any-skill/any-difficulty/any-duration/any-type/relevance/page-1?searchValue=langchain)
- [LinkedIn Learning](https://www.linkedin.com/search/results/learning/?keywords=langchain)
- [edX](https://www.edx.org/search?q=langchain)
## Short Tutorials
### LangChain by [Chat with data](https://www.youtube.com/@chatwithdata)
- [LangChain Beginner's Tutorial for `Typescript`/`Javascript`](https://youtu.be/bH722QgRlhQ)
- [`GPT-4` Tutorial: How to Chat With Multiple `PDF` Files (~1000 pages of Tesla's 10-K Annual Reports)](https://youtu.be/Ix9WIZpArm0)
- [`GPT-4` & LangChain Tutorial: How to Chat With A 56-Page `PDF` Document (w/`Pinecone`)](https://youtu.be/ih9PBGVVOO4)
- [LangChain & `Supabase` Tutorial: How to Build a ChatGPT Chatbot For Your Website](https://youtu.be/R2FMzcsmQY8)
- [LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin)](https://youtu.be/gVkF8cwfBLI)
### Codebase Analysis
- [Codebase Analysis: Langchain Agents](https://carbonated-yacht-2c5.notion.site/Codebase-Analysis-Langchain-Agents-0b0587acd50647ca88aaae7cff5df1f2)
- [by Nicholas Renotte](https://youtu.be/MlK6SIjcjE8)
- [by Patrick Loeber](https://youtu.be/LbT1yp6quS8)
- [by Rabbitmetrics](https://youtu.be/aywZrzNaKjs)
- [by Ivan Reznikov](https://medium.com/@ivanreznikov/langchain-101-course-updated-668f7b41d6cb)
## [Documentation: Use cases](/docs/use_cases)
---------------------
⛓ icon marks a new addition [last update 2023-09-21]


@@ -120,6 +120,8 @@
- ⛓ [Use ANY language in `LangSmith` with REST](https://youtu.be/7BL0GEdMmgY?si=iXfOEdBLqXF6hqRM) by [Nerding I/O](https://www.youtube.com/@nerding_io)
- ⛓ [How to Leverage the Full Potential of LLMs for Your Business with Langchain - Leon Ruddat](https://youtu.be/vZmoEa7oWMg?si=ZhMmydq7RtkZd56Q) by [PyData](https://www.youtube.com/@PyDataTV)
- ⛓ [`ChatCSV` App: Chat with CSV files using LangChain and `Llama 2`](https://youtu.be/PvsMg6jFs8E?si=Qzg5u5gijxj933Ya) by [Muhammad Moin](https://www.youtube.com/@muhammadmoinfaisal)
- ⛓ [Build Chat PDF app in Python with LangChain, OpenAI, Streamlit | Full project | Learn Coding](https://www.youtube.com/watch?v=WYzFzZg4YZI) by [Jutsupoint](https://www.youtube.com/@JutsuPoint)
- ⛓ [Build Eminem Bot App with LangChain, Streamlit, OpenAI | Full Python Project | Tutorial | AI ChatBot](https://www.youtube.com/watch?v=a2shHB4MRZ4) by [Jutsupoint](https://www.youtube.com/@JutsuPoint)
### [Prompt Engineering and LangChain](https://www.youtube.com/watch?v=muXbPpG_ys4&list=PLEJK-H61Xlwzm5FYLDdKt_6yibO33zoMW) by [Venelin Valkov](https://www.youtube.com/@venelin_valkov)
@@ -132,4 +134,4 @@
---------------------
⛓ icon marks a new addition [last update 2023-09-21]
⛓ icon marks a new addition [last update 2024-02-04]


@@ -3,24 +3,68 @@ sidebar_position: 3
---
# Contribute Documentation
The docs directory contains Documentation and API Reference.
LangChain documentation consists of two components:
Documentation is built using [Quarto](https://quarto.org) and [Docusaurus 2](https://docusaurus.io/).
1. Main Documentation: Hosted at [python.langchain.com](https://python.langchain.com/),
this comprehensive resource serves as the primary user-facing documentation.
It covers a wide array of topics, including tutorials, use cases, integrations,
and more, offering extensive guidance on building with LangChain.
The content for this documentation lives in the `/docs` directory of the monorepo.
2. In-code Documentation: This is documentation of the codebase itself, which is also
used to generate the externally facing [API Reference](https://api.python.langchain.com/en/latest/langchain_api_reference.html).
The content for the API reference is autogenerated by scanning the docstrings in the codebase. For this reason we ask that
developers document their code well.
API Reference are largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code and are hosted by [Read the Docs](https://readthedocs.org/).
For that reason, we ask that you add good documentation to all classes and methods.
The main documentation is built using [Quarto](https://quarto.org) and [Docusaurus 2](https://docusaurus.io/).
Similar to linting, we recognize documentation can be annoying. If you do not want to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
The `API Reference` is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/)
from the code and is hosted by [Read the Docs](https://readthedocs.org/).
## Build Documentation Locally
We appreciate all contributions to the documentation, whether it be fixing a typo,
adding a new tutorial or example and whether it be in the main documentation or the API Reference.
Similar to linting, we recognize documentation can be annoying. If you do not want
to do it, please contact a project maintainer, and they can help you with it. We do not want this to be a blocker for good code getting contributed.
## 📜 Main Documentation
The content for the main documentation is located in the `/docs` directory of the monorepo.
The documentation is written using a combination of ipython notebooks (`.ipynb` files)
and markdown (`.mdx` files). The notebooks are converted to markdown
using [Quarto](https://quarto.org) and then built using [Docusaurus 2](https://docusaurus.io/).
Feel free to make contributions to the main documentation! 🥰
After modifying the documentation:
1. Run the linting and formatting commands (see below) to ensure that the documentation is well-formatted and free of errors.
2. Optionally build the documentation locally to verify that the changes look good.
3. Make a pull request with the changes.
4. You can preview and verify that the changes are what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page. This will take you to a preview of the documentation changes.
## ⚒️ Linting and Building Documentation Locally
After writing up the documentation, you may want to lint and build the documentation
locally to ensure that it looks good and is free of errors.
If you're unable to build it locally that's okay as well, as you will be able to
see a preview of the documentation on the pull request page.
### Install dependencies
- [Quarto](https://quarto.org) - package that converts Jupyter notebooks (`.ipynb` files) into mdx files for serving in Docusaurus.
- `poetry install` from the monorepo root
- [Quarto](https://quarto.org) - package that converts Jupyter notebooks (`.ipynb` files) into mdx files for serving in Docusaurus. [Download link](https://quarto.org/docs/download/).
From the **monorepo root**, run the following command to install the dependencies:
```bash
poetry install --with lint,docs --no-root
```
### Building
The code that builds the documentation is located in the `/docs` directory of the monorepo.
In the following commands, the prefix `api_` indicates that those are operations for the API Reference.
Before building the documentation, it is always a good idea to clean the build directory:
@@ -46,10 +90,9 @@ make api_docs_linkcheck
### Linting and Formatting
The docs are linted from the monorepo root. To lint the docs, run the following from there:
The Main Documentation is linted from the **monorepo root**. To lint the main documentation, run the following from there:
```bash
poetry install --with lint,typing
make lint
```
@@ -57,9 +100,73 @@ If you have formatting-related errors, you can fix them automatically with:
```bash
make format
```
```
## Verify Documentation changes
## ⌨️ In-code Documentation
The in-code documentation is largely autogenerated by [sphinx](https://www.sphinx-doc.org/en/master/) from the code and is hosted by [Read the Docs](https://readthedocs.org/).
For the API reference to be useful, the codebase must be well-documented. This means that all functions, classes, and methods should have a docstring that explains what they do, what the arguments are, and what the return value is. This is a good practice in general, but it is especially important for LangChain because the API reference is the primary resource for developers to understand how to use the codebase.
We generally follow the [Google Python Style Guide](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings) for docstrings.
Here is an example of a well-documented function:
```python
def my_function(arg1: int, arg2: str) -> float:
"""This is a short description of the function. (It should be a single sentence.)
This is a longer description of the function. It should explain what
the function does, what the arguments are, and what the return value is.
It should wrap at 88 characters.
Examples:
This is a section for examples of how to use the function.
.. code-block:: python
my_function(1, "hello")
Args:
arg1: This is a description of arg1. We do not need to specify the type since
it is already specified in the function signature.
arg2: This is a description of arg2.
Returns:
This is a description of the return value.
"""
return 3.14
```
### Linting and Formatting
The in-code documentation is linted from the directories belonging to the packages
being documented.
For example, if you're working on the `langchain-community` package, you would change
the working directory to the `langchain-community` directory:
```bash
cd [root]/libs/langchain-community
```
Set up a virtual environment for the package if you haven't done so already, then install the dependencies for the package:
```bash
poetry install --with lint
```
Then you can run the following commands to lint and format the in-code documentation:
```bash
make format
make lint
```
## Verify Documentation Changes
After pushing documentation changes to the repository, you can preview and verify that the changes are
what you wanted by clicking the `View deployment` or `Visit Preview` buttons on the pull request `Conversation` page.

View File

@@ -15,8 +15,9 @@ There are many ways to contribute to LangChain. Here are some common ways people
- [**Documentation**](./documentation.mdx): Help improve our docs, including this one!
- [**Code**](./code.mdx): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](integrations.mdx): Help us integrate with your favorite vendors and tools.
- [**Discussions**](https://github.com/langchain-ai/langchain/discussions): Help answer usage questions and discuss issues with users.
### 🚩 GitHub Issues
Our [issues](https://github.com/langchain-ai/langchain/issues) page is kept up to date with bugs, improvements, and feature requests.
@@ -31,7 +32,13 @@ We will try to keep these issues as up-to-date as possible, though
with the rapid rate of development in this field some may get out of date.
If you notice this happening, please let us know.
### 💭 GitHub Discussions
We have a [discussions](https://github.com/langchain-ai/langchain/discussions) page where users can ask usage questions, discuss design decisions, and propose new features.
If you are able to help answer questions, please do so! This will allow the maintainers to spend more time focused on development and bug fixing.
### 🙋 Getting Help
Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting set up, please
contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is

View File

@@ -0,0 +1,54 @@
---
sidebar_position: 0.5
---
# Repository Structure
If you plan on contributing to LangChain code or documentation, it can be useful
to understand the high level structure of the repository.
LangChain is organized as a [monorepo](https://en.wikipedia.org/wiki/Monorepo) that contains multiple packages.
Here's the structure visualized as a tree:
```text
.
├── cookbook # Tutorials and examples
├── docs # Contains content for the documentation here: https://python.langchain.com/
├── libs
│   ├── langchain # Main package
│   │   ├── tests/unit_tests # Unit tests (present in each package, not shown for brevity)
│   │   ├── tests/integration_tests # Integration tests (present in each package, not shown for brevity)
│   ├── langchain-community # Third-party integrations
│   ├── langchain-core # Base interfaces for key abstractions
│   ├── langchain-experimental # Experimental components and chains
│   ├── partners
│   │   ├── langchain-partner-1
│   │   ├── langchain-partner-2
│   │   ├── ...
├── templates # A collection of easily deployable reference architectures for a wide variety of tasks.
```
The root directory also contains the following files:
* `pyproject.toml`: Dependencies for building and linting the docs and cookbook.
* `Makefile`: A file that contains shortcuts for building and linting the docs and cookbook.
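For example, from the monorepo root you can invoke the shortcuts directly (the lint and format targets are the same ones used earlier in this guide):

```bash
make lint    # lint the docs and cookbook
make format  # auto-format the docs and cookbook
```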
There are other files in the root directory level, but their presence should be self-explanatory. Feel free to browse around!
## Documentation
The `/docs` directory contains the content for the documentation that is shown
at https://python.langchain.com/ and the associated API Reference https://api.python.langchain.com/en/latest/langchain_api_reference.html.
See the [documentation](./documentation) guidelines to learn how to contribute to the documentation.
## Code
The `/libs` directory contains the code for the LangChain packages.
To learn more about how to contribute code see the following guidelines:
- [Code](./code.mdx): Learn how to develop in the LangChain codebase.
- [Integrations](./integrations.mdx): Learn how to contribute third-party integrations to langchain-community or to start a new partner package.
- [Testing](./testing.mdx): Guidelines for writing tests for the packages.

View File

@@ -7,7 +7,7 @@
"source": [
"# Agents\n",
"\n",
"You can pass a Runnable into an agent."
"You can pass a Runnable into an agent. Make sure you have `langchainhub` installed: `pip install langchainhub`"
]
},
{
@@ -98,7 +98,7 @@
"source": [
"Building an agent from a runnable usually involves a few things:\n",
"\n",
"1. Data processing for the intermediate steps. These need to represented in a way that the language model can recognize them. This should be pretty tightly coupled to the instructions in the prompt\n",
"1. Data processing for the intermediate steps. These need to be represented in a way that the language model can recognize them. This should be pretty tightly coupled to the instructions in the prompt\n",
"\n",
"2. The prompt itself\n",
"\n",

View File

@@ -47,7 +47,7 @@
"source": [
"from operator import itemgetter\n",
"\n",
"from langchain.schema import StrOutputParser\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_openai import ChatOpenAI\n",
"\n",

View File

@@ -169,8 +169,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import format_document\n",
"from langchain_core.messages import AIMessage, HumanMessage, get_buffer_string\n",
"from langchain_core.prompts import format_document\n",
"from langchain_core.runnables import RunnableParallel"
]
},

View File

@@ -31,9 +31,11 @@
]
},
{
"cell_type": "raw",
"cell_type": "code",
"execution_count": null,
"id": "278b0027",
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain-core langchain-community langchain-openai"
]

View File

@@ -29,7 +29,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.schema import StrOutputParser\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI"

View File

@@ -557,7 +557,7 @@
"metadata": {},
"outputs": [],
"source": [
"openai_poem = chain.with_config(configurable={\"llm\": \"openai\"})"
"openai_joke = chain.with_config(configurable={\"llm\": \"openai\"})"
]
},
{
@@ -578,7 +578,7 @@
}
],
"source": [
"openai_poem.invoke({\"topic\": \"bears\"})"
"openai_joke.invoke({\"topic\": \"bears\"})"
]
},
{

View File

@@ -7,7 +7,7 @@
"source": [
"# Add message history (memory)\n",
"\n",
"The `RunnableWithMessageHistory` let's us add message history to certain types of chains.\n",
"The `RunnableWithMessageHistory` lets us add message history to certain types of chains. It wraps another Runnable and manages the chat message history for it.\n",
"\n",
"Specifically, it can be used for any Runnable that takes as input one of\n",
"\n",
@@ -21,7 +21,379 @@
"* a sequence of `BaseMessage`\n",
"* a dict with a key that contains a sequence of `BaseMessage`\n",
"\n",
"Let's take a look at some examples to see how it works."
"Let's take a look at some examples to see how it works. First we construct a runnable (which here accepts a dict as input and returns a message as output):"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2ed413b4-33a1-48ee-89b0-2d4917ec101a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI()\n",
"prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\n",
" \"system\",\n",
" \"You're an assistant who's good at {ability}. Respond in 20 words or fewer\",\n",
" ),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{input}\"),\n",
" ]\n",
")\n",
"runnable = prompt | model"
]
},
{
"cell_type": "markdown",
"id": "9fd175e1-c7b8-4929-a57e-3331865fe7aa",
"metadata": {},
"source": [
"To manage the message history, we will need:\n",
"1. This runnable;\n",
"2. A callable that returns an instance of `BaseChatMessageHistory`.\n",
"\n",
"Check out the [memory integrations](https://integrations.langchain.com/memory) page for implementations of chat message histories using Redis and other providers. Here we demonstrate using an in-memory `ChatMessageHistory` as well as more persistent storage using `RedisChatMessageHistory`."
]
},
{
"cell_type": "markdown",
"id": "3d83adad-9672-496d-9f25-5747e7b8c8bb",
"metadata": {},
"source": [
"## In-memory\n",
"\n",
"Below we show a simple example in which the chat history lives in memory, in this case via a global Python dict.\n",
"\n",
"We construct a callable `get_session_history` that references this dict to return an instance of `ChatMessageHistory`. The arguments to the callable can be specified by passing a configuration to the `RunnableWithMessageHistory` at runtime. By default, the configuration parameter is expected to be a single string `session_id`. This can be adjusted via the `history_factory_config` kwarg.\n",
"\n",
"Using the single-parameter default:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "54348d02-d8ee-440c-bbf9-41bc0fbbc46c",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_message_histories import ChatMessageHistory\n",
"from langchain_core.chat_history import BaseChatMessageHistory\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = ChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"with_message_history = RunnableWithMessageHistory(\n",
" runnable,\n",
" get_session_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"history\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "01acb505-3fd3-4ab4-9f04-5ea07e81542e",
"metadata": {},
"source": [
"Note that we've specified `input_messages_key` (the key to be treated as the latest input message) and `history_messages_key` (the key to add historical messages to).\n",
"\n",
"When invoking this new runnable, we specify the corresponding chat history via a configuration parameter:"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "01384412-f08e-4634-9edb-3f46f475b582",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Cosine is a trigonometric function that calculates the ratio of the adjacent side to the hypotenuse of a right triangle.')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What does cosine mean?\"},\n",
" config={\"configurable\": {\"session_id\": \"abc123\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "954688a2-9a3f-47ee-a9e8-fa0c83e69477",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='Cosine is a mathematical function used to calculate the length of a side in a right triangle.')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Remembers\n",
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What?\"},\n",
" config={\"configurable\": {\"session_id\": \"abc123\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "39350d7c-2641-4744-bc2a-fd6a57c4ea90",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content='I can help with math problems. What do you need assistance with?')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# New session_id --> does not remember.\n",
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What?\"},\n",
" config={\"configurable\": {\"session_id\": \"def234\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d29497be-3366-408d-bbb9-d4a8bf4ef37c",
"metadata": {},
"source": [
"The configuration parameters by which we track message histories can be customized by passing in a list of ``ConfigurableFieldSpec`` objects to the ``history_factory_config`` parameter. Below, we use two parameters: a `user_id` and `conversation_id`."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1c89daee-deff-4fdf-86a3-178f7d8ef536",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.runnables import ConfigurableFieldSpec\n",
"\n",
"store = {}\n",
"\n",
"\n",
"def get_session_history(user_id: str, conversation_id: str) -> BaseChatMessageHistory:\n",
" if (user_id, conversation_id) not in store:\n",
" store[(user_id, conversation_id)] = ChatMessageHistory()\n",
" return store[(user_id, conversation_id)]\n",
"\n",
"\n",
"with_message_history = RunnableWithMessageHistory(\n",
" runnable,\n",
" get_session_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"history\",\n",
" history_factory_config=[\n",
" ConfigurableFieldSpec(\n",
" id=\"user_id\",\n",
" annotation=str,\n",
" name=\"User ID\",\n",
" description=\"Unique identifier for the user.\",\n",
" default=\"\",\n",
" is_shared=True,\n",
" ),\n",
" ConfigurableFieldSpec(\n",
" id=\"conversation_id\",\n",
" annotation=str,\n",
" name=\"Conversation ID\",\n",
" description=\"Unique identifier for the conversation.\",\n",
" default=\"\",\n",
" is_shared=True,\n",
" ),\n",
" ],\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "65c5622e-09b8-4f2f-8c8a-2dab0fd040fa",
"metadata": {},
"outputs": [],
"source": [
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"Hello\"},\n",
" config={\"configurable\": {\"user_id\": \"123\", \"conversation_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "18f1a459-3f88-4ee6-8542-76a907070dd6",
"metadata": {},
"source": [
"### Examples with runnables of different signatures\n",
"\n",
"The above runnable takes a dict as input and returns a BaseMessage. Below we show some alternatives."
]
},
{
"cell_type": "markdown",
"id": "48eae1bf-b59d-4a61-8e62-b6dbf667e866",
"metadata": {},
"source": [
"#### Messages input, dict output"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "17733d4f-3a32-4055-9d44-5d58b9446a26",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_message': AIMessage(content=\"Simone de Beauvoir believed in the existence of free will. She argued that individuals have the ability to make choices and determine their own actions, even in the face of social and cultural constraints. She rejected the idea that individuals are purely products of their environment or predetermined by biology or destiny. Instead, she emphasized the importance of personal responsibility and the need for individuals to actively engage in creating their own lives and defining their own existence. De Beauvoir believed that freedom and agency come from recognizing one's own freedom and actively exercising it in the pursuit of personal and collective liberation.\")}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain_core.messages import HumanMessage\n",
"from langchain_core.runnables import RunnableParallel\n",
"\n",
"chain = RunnableParallel({\"output_message\": ChatOpenAI()})\n",
"\n",
"\n",
"def get_session_history(session_id: str) -> BaseChatMessageHistory:\n",
" if session_id not in store:\n",
" store[session_id] = ChatMessageHistory()\n",
" return store[session_id]\n",
"\n",
"\n",
"with_message_history = RunnableWithMessageHistory(\n",
" chain,\n",
" get_session_history,\n",
" output_messages_key=\"output_message\",\n",
")\n",
"\n",
"with_message_history.invoke(\n",
" [HumanMessage(content=\"What did Simone de Beauvoir believe about free will\")],\n",
" config={\"configurable\": {\"session_id\": \"baz\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "efb57ef5-91f9-426b-84b9-b77f071a9dd7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'output_message': AIMessage(content='Simone de Beauvoir\\'s views on free will were closely aligned with those of her contemporary and partner Jean-Paul Sartre. Both de Beauvoir and Sartre were existentialist philosophers who emphasized the importance of individual freedom and the rejection of determinism. They believed that human beings have the capacity to transcend their circumstances and create their own meaning and values.\\n\\nSartre, in his famous work \"Being and Nothingness,\" argued that human beings are condemned to be free, meaning that we are burdened with the responsibility of making choices and defining ourselves in a world that lacks inherent meaning. Like de Beauvoir, Sartre believed that individuals have the ability to exercise their freedom and make choices in the face of external and internal constraints.\\n\\nWhile there may be some nuanced differences in their philosophical writings, overall, de Beauvoir and Sartre shared a similar belief in the existence of free will and the importance of individual agency in shaping one\\'s own life.')}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
" [HumanMessage(content=\"How did this compare to Sartre\")],\n",
" config={\"configurable\": {\"session_id\": \"baz\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "a39eac5f-a9d8-4729-be06-5e7faf0c424d",
"metadata": {},
"source": [
"#### Messages input, messages output"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e45bcd95-e31f-4a9a-967a-78f96e8da881",
"metadata": {},
"outputs": [],
"source": [
"RunnableWithMessageHistory(\n",
" ChatOpenAI(),\n",
" get_session_history,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "04daa921-a2d1-40f9-8cd1-ae4e9a4163a7",
"metadata": {},
"source": [
"#### Dict with single key for all messages input, messages output"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "27157f15-9fb0-4167-9870-f4d7f234b3cb",
"metadata": {},
"outputs": [],
"source": [
"from operator import itemgetter\n",
"\n",
"RunnableWithMessageHistory(\n",
" itemgetter(\"input_messages\") | ChatOpenAI(),\n",
" get_session_history,\n",
" input_messages_key=\"input_messages\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "418ca7af-9ed9-478c-8bca-cba0de2ca61e",
"metadata": {},
"source": [
"## Persistent storage"
]
},
{
"cell_type": "markdown",
"id": "76799a13-d99a-4c4f-91f2-db699e40b8df",
"metadata": {},
"source": [
"In many cases it is preferable to persist conversation histories. `RunnableWithMessageHistory` is agnostic as to how the `get_session_history` callable retrieves its chat message histories. See [here](https://github.com/langchain-ai/langserve/blob/main/examples/chat_with_persistence_and_user/server.py) for an example using a local filesystem. Below we demonstrate how one could use Redis. Check out the [memory integrations](https://integrations.langchain.com/memory) page for implementations of chat message histories using other providers."
]
},
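As a rough sketch of that flexibility, a file-backed history factory might look like the following (this assumes `FileChatMessageHistory` from `langchain_community`; the directory layout is invented for illustration):

```python
from pathlib import Path

from langchain_community.chat_message_histories import FileChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory


def get_session_history(session_id: str) -> BaseChatMessageHistory:
    # One JSON file per session, stored under ./chat_histories/.
    Path("chat_histories").mkdir(exist_ok=True)
    return FileChatMessageHistory(f"chat_histories/{session_id}.json")
```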
{
@@ -29,9 +401,9 @@
"id": "6bca45e5-35d9-4603-9ca9-6ac0ce0e35cd",
"metadata": {},
"source": [
"## Setup\n",
"### Setup\n",
"\n",
"We'll use Redis to store our chat message histories and Anthropic's claude-2 model so we'll need to install the following dependencies:"
"We'll need to install Redis if it's not installed already:"
]
},
{
@@ -41,28 +413,7 @@
"metadata": {},
"outputs": [],
"source": [
"%pip install --upgrade --quiet langchain redis anthropic"
]
},
{
"cell_type": "markdown",
"id": "93776323-d6b8-4912-bb6a-867c5e655f46",
"metadata": {},
"source": [
"Set your [Anthropic API key](https://console.anthropic.com/):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7f56f69-d2f1-4a21-990c-b5551eb012fa",
"metadata": {},
"outputs": [],
"source": [
"import getpass\n",
"import os\n",
"\n",
"os.environ[\"ANTHROPIC_API_KEY\"] = getpass.getpass()"
"%pip install --upgrade --quiet redis"
]
},
{
@@ -78,7 +429,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 9,
"id": "cd6a250e-17fe-4368-a39d-1fe6b2cbde68",
"metadata": {},
"outputs": [],
@@ -110,77 +461,32 @@
"# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "f9d81796-ce61-484c-89e2-6c567d5e54ef",
"metadata": {},
"source": [
"### Adding message history\n",
"\n",
"To add message history to our original chain we wrap it in the `RunnableWithMessageHistory` class.\n",
"\n",
"Crucially, we also need to define a method that takes a session_id string and based on it returns a `BaseChatMessageHistory`. Given the same input, this method should return an equivalent output.\n",
"\n",
"In this case we'll also want to specify `input_messages_key` (the key to be treated as the latest input message) and `history_messages_key` (the key to add historical messages to)."
"Updating the message history implementation just requires us to define a new callable, this time returning an instance of `RedisChatMessageHistory`:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 10,
"id": "ca7c64d8-e138-4ef8-9734-f82076c47d80",
"metadata": {},
"outputs": [],
"source": [
"chain_with_history = RunnableWithMessageHistory(\n",
" chain,\n",
" lambda session_id: RedisChatMessageHistory(session_id, url=REDIS_URL),\n",
" input_messages_key=\"question\",\n",
"from langchain_community.chat_message_histories import RedisChatMessageHistory\n",
"\n",
"\n",
"def get_message_history(session_id: str) -> RedisChatMessageHistory:\n",
" return RedisChatMessageHistory(session_id, url=REDIS_URL)\n",
"\n",
"\n",
"with_message_history = RunnableWithMessageHistory(\n",
" runnable,\n",
" get_message_history,\n",
" input_messages_key=\"input\",\n",
" history_messages_key=\"history\",\n",
")"
]
@@ -190,60 +496,53 @@
"id": "37eefdec-9901-4650-b64c-d3c097ed5f4d",
"metadata": {},
"source": [
"## Invoking with config\n",
"\n",
"Whenever we call our chain with message history, we need to include a config that contains the `session_id`\n",
"```python\n",
"config={\"configurable\": {\"session_id\": \"<SESSION_ID>\"}}\n",
"```\n",
"\n",
"Given the same configuration, our chain should be pulling from the same chat message history."
"We can invoke as before:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 11,
"id": "a85bcc22-ca4c-4ad5-9440-f94be7318f3e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' Cosine is one of the basic trigonometric functions in mathematics. It is defined as the ratio of the adjacent side to the hypotenuse in a right triangle.\\n\\nSome key properties and facts about cosine:\\n\\n- It is denoted by cos(θ), where θ is the angle in a right triangle. \\n\\n- The cosine of an acute angle is always positive. For angles greater than 90 degrees, cosine can be negative.\\n\\n- Cosine is one of the three main trig functions along with sine and tangent.\\n\\n- The cosine of 0 degrees is 1. As the angle increases towards 90 degrees, the cosine value decreases towards 0.\\n\\n- The range of values for cosine is -1 to 1.\\n\\n- The cosine function maps angles in a circle to the x-coordinate on the unit circle.\\n\\n- Cosine is used to find adjacent side lengths in right triangles, and has many other applications in mathematics, physics, engineering and more.\\n\\n- Key cosine identities include: cos(A+B) = cosAcosB sinAsinB and cos(2A) = cos^2(A) sin^2(A)\\n\\nSo in summary, cosine is a fundamental trig')"
"AIMessage(content='Cosine is a trigonometric function that represents the ratio of the adjacent side to the hypotenuse in a right triangle.')"
]
},
"execution_count": 7,
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_history.invoke(\n",
" {\"ability\": \"math\", \"question\": \"What does cosine mean?\"},\n",
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What does cosine mean?\"},\n",
" config={\"configurable\": {\"session_id\": \"foobar\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 12,
"id": "ab29abd3-751f-41ce-a1b0-53f6b565e79d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' The inverse of the cosine function is called the arccosine or inverse cosine, often denoted as cos-1(x) or arccos(x).\\n\\nThe key properties and facts about arccosine:\\n\\n- It is defined as the angle θ between 0 and π radians whose cosine is x. So arccos(x) = θ such that cos(θ) = x.\\n\\n- The range of arccosine is 0 to π radians (0 to 180 degrees).\\n\\n- The domain of arccosine is -1 to 1. \\n\\n- arccos(cos(θ)) = θ for values of θ from 0 to π radians.\\n\\n- arccos(x) is the angle in a right triangle whose adjacent side is x and hypotenuse is 1.\\n\\n- arccos(0) = 90 degrees. As x increases from 0 to 1, arccos(x) decreases from 90 to 0 degrees.\\n\\n- arccos(1) = 0 degrees. arccos(-1) = 180 degrees.\\n\\n- The graph of y = arccos(x) is part of the unit circle, restricted to x')"
"AIMessage(content='The inverse of cosine is the arccosine function, denoted as acos or cos^-1, which gives the angle corresponding to a given cosine value.')"
]
},
"execution_count": 8,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain_with_history.invoke(\n",
" {\"ability\": \"math\", \"question\": \"What's its inverse\"},\n",
"with_message_history.invoke(\n",
" {\"ability\": \"math\", \"input\": \"What's its inverse\"},\n",
" config={\"configurable\": {\"session_id\": \"foobar\"}},\n",
")"
]
@@ -255,7 +554,7 @@
"source": [
":::tip\n",
"\n",
"[Langsmith trace](https://smith.langchain.com/public/863a003b-7ca8-4b24-be9e-d63ec13c106e/r)\n",
"[Langsmith trace](https://smith.langchain.com/public/bd73e122-6ec1-48b2-82df-e6483dc9cb63/r)\n",
"\n",
":::"
]
@@ -267,124 +566,13 @@
"source": [
"Looking at the Langsmith trace for the second call, we can see that when constructing the prompt, a \"history\" variable has been injected which is a list of two messages (our first input and first output)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -396,7 +584,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
"version": "3.10.13"
}
},
"nbformat": 4,

View File

@@ -1,7 +1,7 @@
{
"cells": [
{
"cell_type": "markdown",
"cell_type": "raw",
"id": "9e45e81c-e16e-4c6c-b6a3-2362e5193827",
"metadata": {},
"source": [
@@ -25,53 +25,42 @@
"\n",
"There are two ways to perform routing:\n",
"\n",
"1. Using a `RunnableBranch`.\n",
"2. Writing custom factory function that takes the input of a previous step and returns a **runnable**. Importantly, this should return a **runnable** and NOT actually execute.\n",
"1. Conditionally return runnables from a [`RunnableLambda`](./functions) (recommended)\n",
"2. Using a `RunnableBranch`.\n",
"\n",
"We'll illustrate both methods using a two step sequence where the first step classifies an input question as being about `LangChain`, `Anthropic`, or `Other`, then routes to a corresponding prompt chain."
]
},
{
"cell_type": "markdown",
"id": "ed84c59a",
"id": "c1c6edac",
"metadata": {},
"source": [
"## Example Setup\n",
"First, let's create a chain that will identify incoming questions as being about `LangChain`, `Anthropic`, or `Other`:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3ec03886",
"execution_count": null,
"id": "8a8a1967",
"metadata": {},
"outputs": [],
"outputs": [
{
"data": {
"text/plain": [
"' Anthropic'"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import PromptTemplate\n",
"\n",
"chain = (\n",
" PromptTemplate.from_template(\n",
" \"\"\"Given the user question below, classify it as either being about `LangChain`, `Anthropic`, or `Other`.\n",
@@ -86,33 +75,14 @@
" )\n",
" | ChatAnthropic()\n",
" | StrOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "87ae7c1c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"' Anthropic'"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
")\n",
"\n",
"chain.invoke({\"question\": \"how do I call Anthropic?\"})"
]
},
{
"cell_type": "markdown",
"id": "8aa0a365",
"id": "7655555f",
"metadata": {},
"source": [
"Now, let's create three sub chains:"
@@ -120,8 +90,8 @@
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d479962a",
"execution_count": null,
"id": "89d7722d",
"metadata": {},
"outputs": [],
"source": [
@@ -158,101 +128,12 @@
")"
]
},
{
"cell_type": "markdown",
"id": "6d8d042c",
"metadata": {},
"source": [
"## Using a custom function\n",
"## Using a custom function (Recommended)\n",
"\n",
"You can also use a custom function to route between different outputs. Here's an example:"
]
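The example cell itself is elided from this diff; a sketch of the custom-function approach, reusing the `chain`, `anthropic_chain`, `langchain_chain`, and `general_chain` names defined above, might look like this (illustrative only, not the exact cell from the commit):

```python
from langchain_core.runnables import RunnableLambda


def route(info: dict):
    # Inspect the topic classification from the first chain and pick a branch.
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    return general_chain


full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)
full_chain.invoke({"question": "how do I use Anthropic?"})
```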
@@ -350,13 +231,89 @@
"full_chain.invoke({\"question\": \"whats 2 + 2\"})"
]
},
{
"cell_type": "markdown",
"id": "5147b827",
"metadata": {},
"source": [
"## Using a RunnableBranch\n",
"\n",
"A `RunnableBranch` is a special type of runnable that allows you to define a set of conditions and runnables to execute based on the input. It does **not** offer anything that you can't achieve in a custom function as described above, so we recommend using a custom function instead.\n",
"\n",
"A `RunnableBranch` is initialized with a list of (condition, runnable) pairs and a default runnable. It selects which branch by passing each condition the input it's invoked with. It selects the first condition to evaluate to True, and runs the corresponding runnable to that condition with the input. \n",
"\n",
"If no provided conditions match, it runs the default runnable.\n",
"\n",
"Here's an example of what it looks like in action:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "46802d04",
"id": "2a101418",
"metadata": {},
"outputs": [],
"source": []
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=\" As Dario Amodei told me, here are some ways to use Anthropic:\\n\\n- Sign up for an account on Anthropic's website to access tools like Claude, Constitutional AI, and Writer. \\n\\n- Use Claude for tasks like email generation, customer service chat, and QA. Claude can understand natural language prompts and provide helpful responses.\\n\\n- Use Constitutional AI if you need an AI assistant that is harmless, honest, and helpful. It is designed to be safe and aligned with human values.\\n\\n- Use Writer to generate natural language content for things like marketing copy, stories, reports, and more. Give it a topic and prompt and it will create high-quality written content.\\n\\n- Check out Anthropic's documentation and blog for tips, tutorials, examples, and announcements about new capabilities as they continue to develop their AI technology.\\n\\n- Follow Anthropic on social media or subscribe to their newsletter to stay up to date on new features and releases.\\n\\n- For most people, the easiest way to leverage Anthropic's technology is through their website - just create an account to get started!\", additional_kwargs={}, example=False)"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from langchain_core.runnables import RunnableBranch\n",
"\n",
"branch = RunnableBranch(\n",
" (lambda x: \"anthropic\" in x[\"topic\"].lower(), anthropic_chain),\n",
" (lambda x: \"langchain\" in x[\"topic\"].lower(), langchain_chain),\n",
" general_chain,\n",
")\n",
"full_chain = {\"topic\": chain, \"question\": lambda x: x[\"question\"]} | branch\n",
"full_chain.invoke({\"question\": \"how do I use Anthropic?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8d8caf9b",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' As Harrison Chase told me, here is how you use LangChain:\\n\\nLangChain is an AI assistant that can have conversations, answer questions, and generate text. To use LangChain, you simply type or speak your input and LangChain will respond. \\n\\nYou can ask LangChain questions, have discussions, get summaries or explanations about topics, and request it to generate text on a subject. Some examples of interactions:\\n\\n- Ask general knowledge questions and LangChain will try to answer factually. For example \"What is the capital of France?\"\\n\\n- Have conversations on topics by taking turns speaking. You can prompt the start of a conversation by saying something like \"Let\\'s discuss machine learning\"\\n\\n- Ask for summaries or high-level explanations on subjects. For example \"Can you summarize the main themes in Shakespeare\\'s Hamlet?\" \\n\\n- Give creative writing prompts or requests to have LangChain generate text in different styles. For example \"Write a short children\\'s story about a mouse\" or \"Generate a poem in the style of Robert Frost about nature\"\\n\\n- Correct LangChain if it makes an inaccurate statement and provide the right information. This helps train it.\\n\\nThe key is interacting naturally and giving it clear prompts and requests', additional_kwargs={}, example=False)"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"full_chain.invoke({\"question\": \"how do I use LangChain?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "26159af7",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"AIMessage(content=' 2 + 2 = 4', additional_kwargs={}, example=False)"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"full_chain.invoke({\"question\": \"whats 2 + 2\"})"
]
}
],
"metadata": {

View File

@@ -12,7 +12,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "bb7d49db-04d3-4399-bfe1-09f82bbe6015",
"metadata": {},
@@ -28,7 +27,7 @@
"1. sync `stream` and async `astream`: a **default implementation** of streaming that streams the **final output** from the chain.\n",
"2. async `astream_events` and async `astream_log`: these provide a way to stream both **intermediate steps** and **final output** from the chain.\n",
"\n",
"Let's take a look at both approaches, and try to understand a how to use them. 🥷\n",
"Let's take a look at both approaches, and try to understand how to use them. 🥷\n",
"\n",
"## Using Stream\n",
"\n",
@@ -48,7 +47,25 @@
"\n",
"Large language models can take **several seconds** to generate a complete response to a query. This is far slower than the **~200-300 ms** threshold at which an application feels responsive to an end user.\n",
"\n",
"The key strategy to make the application feel more responsive is to show intermediate progress; e.g., to stream the output from the model **token by token**."
"The key strategy to make the application feel more responsive is to show intermediate progress; viz., to stream the output from the model **token by token**."
]
},
{
"cell_type": "markdown",
"id": "9eb73e8b",
"metadata": {},
"source": [
"We will show examples of streaming using the chat model from [Anthropic](/docs/integrations/platforms/anthropic). To use the model, you will need to install the `langchain-anthropic` package. You can do this with the following command:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "cd351cf4",
"metadata": {},
"outputs": [],
"source": [
"pip install -qU langchain-anthropic"
]
},
{
@@ -66,7 +83,9 @@
}
],
"source": [
"from langchain.chat_models import ChatAnthropic\n",
"# Showing the example using anthropic, but you can use\n",
"# your favorite chat model!\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"model = ChatAnthropic()\n",
"\n",
@@ -150,7 +169,7 @@
"We will use `StrOutputParser` to parse the output from the model. This is a simple parser that extracts the `content` field from an `AIMessageChunk`, giving us the `token` returned by the model.\n",
"\n",
":::{.callout-tip}\n",
"LCEL is a *declarative* way to specify a \"program\" by chainining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream`, and `astream` allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.\n",
"LCEL is a *declarative* way to specify a \"program\" by chainining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream` and `astream` allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.\n",
":::"
]
},
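A minimal, self-contained sketch of the pattern described in that tip (assuming Anthropic credentials are configured, as in the cells of this notebook):

```python
import asyncio

from langchain_anthropic import ChatAnthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")
# StrOutputParser extracts the `content` string from each AIMessageChunk,
# so the chain streams plain-text tokens.
chain = prompt | ChatAnthropic() | StrOutputParser()


async def main() -> None:
    async for chunk in chain.astream({"topic": "parrot"}):
        print(chunk, end="|", flush=True)


asyncio.run(main())
```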
@@ -164,9 +183,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
" Sure|,| here|'s| a| funny| joke| about| a| par|rot|:|\n",
" Here|'s| a| silly| joke| about| a| par|rot|:|\n",
"\n",
"Why| doesn|'t| a| par|rot| ever| get| hungry| at| night|?| Because| it| has| a| light| snack| before| bed|!||"
"What| kind| of| teacher| gives| good| advice|?| An| ap|-|parent| (|app|arent|)| one|!||"
]
}
],
@@ -228,40 +247,34 @@
"{'countries': [{}]}\n",
"{'countries': [{'name': ''}]}\n",
"{'countries': [{'name': 'France'}]}\n",
"{'countries': [{'name': 'France', 'population': ''}]}\n",
"{'countries': [{'name': 'France', 'population': '67'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': ''}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': ''}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': ''}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': ''}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': '126'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': '126,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': '126,860'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': '126,860,'}]}\n",
"{'countries': [{'name': 'France', 'population': '67,022,000'}, {'name': 'Spain', 'population': '46,754,784'}, {'name': 'Japan', 'population': '126,860,301'}]}\n"
"{'countries': [{'name': 'France', 'population': 67}]}\n",
"{'countries': [{'name': 'France', 'population': 6739}]}\n",
"{'countries': [{'name': 'France', 'population': 673915}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': ''}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Sp'}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain'}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 4675}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 467547}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': ''}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': 'Japan'}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': 'Japan', 'population': 12}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': 'Japan', 'population': 12647}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': 'Japan', 'population': 1264764}]}\n",
"{'countries': [{'name': 'France', 'population': 67391582}, {'name': 'Spain', 'population': 46754778}, {'name': 'Japan', 'population': 126476461}]}\n"
]
}
],
"source": [
"from langchain_core.output_parsers import JsonOutputParser\n",
"from langchain_openai.chat_models import ChatOpenAI\n",
"\n",
"model = ChatOpenAI()\n",
"\n",
"chain = model | JsonOutputParser() # This parser only works with OpenAI right now\n",
"chain = (\n",
" model | JsonOutputParser()\n",
") # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some models\n",
"async for text in chain.astream(\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'\n",
"):\n",
@@ -294,12 +307,14 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[None, None, None]|"
"['France', 'Spain', 'Japan']|"
]
}
],
"source": [
"from langchain_core.output_parsers import JsonOutputParser\n",
"from langchain_core.output_parsers import (\n",
" JsonOutputParser,\n",
")\n",
"\n",
"\n",
"# A function that operates on finalized inputs\n",
@@ -326,13 +341,12 @@
"chain = model | JsonOutputParser() | _extract_country_names\n",
"\n",
"async for text in chain.astream(\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\"'\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'\n",
"):\n",
" print(text, end=\"|\", flush=True)"
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "cab6dca2-2027-414d-a196-2db6e3ebb8a5",
"metadata": {},
@@ -348,7 +362,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 17,
"id": "15984b2b-315a-4119-945b-2a3dabea3082",
"metadata": {},
"outputs": [
@@ -356,7 +370,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"France|Spain|Japan|"
"France|Sp|Spain|Japan|"
]
}
],
@@ -392,11 +406,23 @@
"chain = model | JsonOutputParser() | _extract_country_names_streaming\n",
"\n",
"async for text in chain.astream(\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\"'\n",
" 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'\n",
"):\n",
" print(text, end=\"|\", flush=True)"
]
},
{
"cell_type": "markdown",
"id": "d59823f5-9b9a-43c5-a213-34644e2f1d3d",
"metadata": {},
"source": [
":::{.callout-note}\n",
"Because the code above is relying on JSON auto-completion, you may see partial names of countries (e.g., `Sp` and `Spain`), which is not what one would want for an extraction result!\n",
"\n",
"We're focusing on streaming concepts, not necessarily the results of the chains.\n",
":::"
]
},
{
"cell_type": "markdown",
"id": "6adf65b7-aa47-4321-98c7-a0abe43b833a",
@@ -409,7 +435,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 7,
"id": "b9b1c00d-8b44-40d0-9e2b-8a70d238f82b",
"metadata": {},
"outputs": [
@@ -420,7 +446,7 @@
" Document(page_content='harrison likes spicy food')]]"
]
},
"execution_count": 8,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -454,18 +480,18 @@
"id": "6fd3e71b-439e-418f-8a8a-5232fba3d9fd",
"metadata": {},
"source": [
"Stream just yielded the final result from that component. \n",
"Stream just yielded the final result from that component.\n",
"\n",
"This is OK 🥹! Not all components have to implement streaming -- in some cases streaming is either unnecessary, difficult or just doesn't make sense.\n",
"\n",
":::{.callout-tip}\n",
"An LCEL chain constructed using using non-streaming components, will still be able to stream in a lot of cases, with streaming of partial output starting after the last non-streaming step in the chain.\n",
"An LCEL chain constructed using non-streaming components, will still be able to stream in a lot of cases, with streaming of partial output starting after the last non-streaming step in the chain.\n",
":::"
]
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"id": "957447e6-1e60-41ef-8c10-2654bd9e738d",
"metadata": {},
"outputs": [],
@@ -483,7 +509,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 9,
"id": "94e50b5d-bf51-4eee-9da0-ee40dd9ce42b",
"metadata": {},
"outputs": [
@@ -491,9 +517,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"|H|arrison| worked| at| Kens|ho|,| a| renowned| technology| company| known| for| revolution|izing| the| artificial| intelligence| industry|.\n",
"|K|ens|ho|,| located| in| the| heart| of| Silicon| Valley|,| is| famous| for| its| cutting|-edge| research| and| development| in| machine| learning|.\n",
"|With| its| state|-of|-the|-art| facilities| and| talented| team|,| Kens|ho| has| become| a| hub| for| innovation| and| a| sought|-after| workplace| for| tech| enthusiasts| like| Harrison|.||"
" Based| on| the| given| context|,| the| only| information| provided| about| where| Harrison| worked| is| that| he| worked| at| Ken|sh|o|.| Since| there| are| no| other| details| provided| about| Ken|sh|o|,| I| do| not| have| enough| information| to| write| 3| additional| made| up| sentences| about| this| place|.| I| can| only| state| that| Harrison| worked| at| Ken|sh|o|.||"
]
}
],
@@ -528,17 +552,17 @@
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 10,
"id": "61348df9-ec58-401e-be89-68a70042f88e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'0.1.14'"
"'0.1.18'"
]
},
"execution_count": 11,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -604,7 +628,7 @@
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 11,
"id": "c00df46e-7f6b-4e06-8abf-801898c8d57f",
"metadata": {},
"outputs": [
@@ -612,7 +636,7 @@
"name": "stderr",
"output_type": "stream",
"text": [
"/home/eugene/.pyenv/versions/3.11.4/envs/langchain_3_11_4/lib/python3.11/site-packages/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: This API is in beta and may change in the future.\n",
"/home/eugene/src/langchain/libs/core/langchain_core/_api/beta_decorator.py:86: LangChainBetaWarning: This API is in beta and may change in the future.\n",
" warn_beta(\n"
]
}
@@ -634,7 +658,7 @@
"\n",
"This is a **beta API**, and we're almost certainly going to make some changes to it.\n",
"\n",
"This version parameter will allow us to mimimize such breaking changes to your code. \n",
"This version parameter will allow us to minimize such breaking changes to your code. \n",
"\n",
"In short, we are annoying you now, so we don't have to annoy you later.\n",
":::"
@@ -650,7 +674,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 12,
"id": "ce31b525-f47d-4828-85a7-912ce9f2e79b",
"metadata": {},
"outputs": [
@@ -658,26 +682,26 @@
"data": {
"text/plain": [
"[{'event': 'on_chat_model_start',\n",
" 'run_id': 'd78b4ffb-0eb1-499c-8a90-8e4a4aa2edae',\n",
" 'name': 'ChatOpenAI',\n",
" 'run_id': '555843ed-3d24-4774-af25-fbf030d5e8c4',\n",
" 'name': 'ChatAnthropic',\n",
" 'tags': [],\n",
" 'metadata': {},\n",
" 'data': {'input': 'hello'}},\n",
" {'event': 'on_chat_model_stream',\n",
" 'run_id': 'd78b4ffb-0eb1-499c-8a90-8e4a4aa2edae',\n",
" 'run_id': '555843ed-3d24-4774-af25-fbf030d5e8c4',\n",
" 'tags': [],\n",
" 'metadata': {},\n",
" 'name': 'ChatOpenAI',\n",
" 'data': {'chunk': AIMessageChunk(content='')}},\n",
" 'name': 'ChatAnthropic',\n",
" 'data': {'chunk': AIMessageChunk(content=' Hello')}},\n",
" {'event': 'on_chat_model_stream',\n",
" 'run_id': 'd78b4ffb-0eb1-499c-8a90-8e4a4aa2edae',\n",
" 'run_id': '555843ed-3d24-4774-af25-fbf030d5e8c4',\n",
" 'tags': [],\n",
" 'metadata': {},\n",
" 'name': 'ChatOpenAI',\n",
" 'data': {'chunk': AIMessageChunk(content='Hello')}}]"
" 'name': 'ChatAnthropic',\n",
" 'data': {'chunk': AIMessageChunk(content='!')}}]"
]
},
"execution_count": 13,
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
@@ -688,7 +712,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 13,
"id": "76cfe826-ee63-4310-ad48-55a95eb3b9d6",
"metadata": {},
"outputs": [
@@ -696,20 +720,20 @@
"data": {
"text/plain": [
"[{'event': 'on_chat_model_stream',\n",
" 'run_id': 'd78b4ffb-0eb1-499c-8a90-8e4a4aa2edae',\n",
" 'run_id': '555843ed-3d24-4774-af25-fbf030d5e8c4',\n",
" 'tags': [],\n",
" 'metadata': {},\n",
" 'name': 'ChatOpenAI',\n",
" 'name': 'ChatAnthropic',\n",
" 'data': {'chunk': AIMessageChunk(content='')}},\n",
" {'event': 'on_chat_model_end',\n",
" 'name': 'ChatOpenAI',\n",
" 'run_id': 'd78b4ffb-0eb1-499c-8a90-8e4a4aa2edae',\n",
" 'name': 'ChatAnthropic',\n",
" 'run_id': '555843ed-3d24-4774-af25-fbf030d5e8c4',\n",
" 'tags': [],\n",
" 'metadata': {},\n",
" 'data': {'output': AIMessageChunk(content='Hello! How can I assist you today?')}}]"
" 'data': {'output': AIMessageChunk(content=' Hello!')}}]"
]
},
"execution_count": 14,
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
@@ -730,12 +754,14 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 27,
"id": "4328c56c-a303-427b-b1f2-f354e9af555c",
"metadata": {},
"outputs": [],
"source": [
"chain = model | JsonOutputParser() # This parser only works with OpenAI right now\n",
"chain = (\n",
" model | JsonOutputParser()\n",
") # Due to a bug in older versions of Langchain, JsonOutputParser did not stream results from some models\n",
"\n",
"events = [\n",
" event\n",
@@ -762,7 +788,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 15,
"id": "8e66ea3d-a450-436a-aaac-d9478abc6c28",
"metadata": {},
"outputs": [
@@ -770,26 +796,26 @@
"data": {
"text/plain": [
"[{'event': 'on_chain_start',\n",
" 'run_id': 'aa992fb9-d79f-46f3-a857-ae4acad841c4',\n",
" 'run_id': 'b1074bff-2a17-458b-9e7b-625211710df4',\n",
" 'name': 'RunnableSequence',\n",
" 'tags': [],\n",
" 'metadata': {},\n",
" 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}},\n",
" {'event': 'on_chat_model_start',\n",
" 'name': 'ChatOpenAI',\n",
" 'run_id': 'c5406de5-0880-4829-ae26-bb565b404e27',\n",
" 'name': 'ChatAnthropic',\n",
" 'run_id': '6072be59-1f43-4f1c-9470-3b92e8406a99',\n",
" 'tags': ['seq:step:1'],\n",
" 'metadata': {},\n",
" 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}},\n",
" {'event': 'on_parser_start',\n",
" 'name': 'JsonOutputParser',\n",
" 'run_id': '32b47794-8fb6-4ef4-8800-23ed6c3f4519',\n",
" 'run_id': 'bf978194-0eda-4494-ad15-3a5bfe69cd59',\n",
" 'tags': ['seq:step:2'],\n",
" 'metadata': {},\n",
" 'data': {}}]"
]
},
"execution_count": 16,
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
@@ -816,7 +842,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 16,
"id": "630c71d6-8d94-4ce0-a78a-f20e90f628df",
"metadata": {},
"outputs": [
@@ -824,29 +850,31 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Chat model chunk: ''\n",
"Chat model chunk: ' Here'\n",
"Chat model chunk: ' is'\n",
"Chat model chunk: ' the'\n",
"Chat model chunk: ' JSON'\n",
"Chat model chunk: ' with'\n",
"Chat model chunk: ' the'\n",
"Chat model chunk: ' requested'\n",
"Chat model chunk: ' countries'\n",
"Chat model chunk: ' and'\n",
"Chat model chunk: ' their'\n",
"Chat model chunk: ' populations'\n",
"Chat model chunk: ':'\n",
"Chat model chunk: '\\n\\n```'\n",
"Chat model chunk: 'json'\n",
"Parser chunk: {}\n",
"Chat model chunk: '{\\n'\n",
"Chat model chunk: ' '\n",
"Chat model chunk: '\\n{'\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: ' \"'\n",
"Chat model chunk: 'countries'\n",
"Chat model chunk: '\":'\n",
"Parser chunk: {'countries': []}\n",
"Chat model chunk: ' [\\n'\n",
"Chat model chunk: ' '\n",
"Chat model chunk: ' ['\n",
"Chat model chunk: '\\n '\n",
"Parser chunk: {'countries': [{}]}\n",
"Chat model chunk: ' {\\n'\n",
"Chat model chunk: ' '\n",
"Chat model chunk: ' \"'\n",
"Chat model chunk: 'name'\n",
"Chat model chunk: '\":'\n",
"Parser chunk: {'countries': [{'name': ''}]}\n",
"Chat model chunk: ' \"'\n",
"Parser chunk: {'countries': [{'name': 'France'}]}\n",
"Chat model chunk: 'France'\n",
"Chat model chunk: '\",\\n'\n",
"Chat model chunk: ' '\n",
"Chat model chunk: ' \"'\n",
"Chat model chunk: ' {'\n",
"...\n"
]
}
@@ -897,7 +925,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 17,
"id": "4f0b581b-be63-4663-baba-c6d2b625cdf9",
"metadata": {},
"outputs": [
@@ -905,17 +933,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_parser_start', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': []}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': ''}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France'}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 670}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 670600}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67060000}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67060000}, {}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': '450011c0-6f3b-4ec8-92d4-6603d9d1d603', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67060000}, {'name': ''}]}}}\n",
"{'event': 'on_parser_start', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': []}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': ''}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France'}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 6739}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 673915}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67391582}]}}}\n",
"{'event': 'on_parser_stream', 'name': 'my_parser', 'run_id': 'f2ac1d1c-e14a-45fc-8990-e5c24e707299', 'tags': ['seq:step:2'], 'metadata': {}, 'data': {'chunk': {'countries': [{'name': 'France', 'population': 67391582}, {}]}}}\n",
"...\n"
]
}
@@ -949,7 +977,7 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 18,
"id": "096cd904-72f0-4ebe-a8b7-d0e730faea7f",
"metadata": {},
"outputs": [
@@ -957,17 +985,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chat_model_start', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='{\\n')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' ')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' \"')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='countries')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='\":')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' [\\n')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' ')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' {\\n')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '9ba1ef9f-5954-4649-b3da-1171b6abb000', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' ')}}\n",
"{'event': 'on_chat_model_start', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' Here')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' is')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' the')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' JSON')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' with')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' the')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' requested')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' countries')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' and')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'model', 'run_id': '98a6e192-8159-460c-ba73-6dfc921e3777', 'tags': ['seq:step:1'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' their')}}\n",
"...\n"
]
}
@@ -1008,7 +1036,7 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 19,
"id": "26bac0d2-76d9-446e-b346-82790236b88d",
"metadata": {},
"outputs": [
@@ -1016,17 +1044,17 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'run_id': 'd4c78db8-be20-4fa0-87d6-cb317822967a', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}, 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}}\n",
"{'event': 'on_chat_model_start', 'name': 'ChatOpenAI', 'run_id': '15e46d9f-ccf5-4da2-b9e3-b2a85873ba4c', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}}\n",
"{'event': 'on_parser_start', 'name': 'JsonOutputParser', 'run_id': '91945f4f-0deb-4999-acf0-f6d191c89b34', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatOpenAI', 'run_id': '15e46d9f-ccf5-4da2-b9e3-b2a85873ba4c', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='')}}\n",
"{'event': 'on_parser_stream', 'name': 'JsonOutputParser', 'run_id': '91945f4f-0deb-4999-acf0-f6d191c89b34', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {'chunk': {}}}\n",
"{'event': 'on_chain_stream', 'run_id': 'd4c78db8-be20-4fa0-87d6-cb317822967a', 'tags': ['my_chain'], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': {}}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatOpenAI', 'run_id': '15e46d9f-ccf5-4da2-b9e3-b2a85873ba4c', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='{\"')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatOpenAI', 'run_id': '15e46d9f-ccf5-4da2-b9e3-b2a85873ba4c', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='countries')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatOpenAI', 'run_id': '15e46d9f-ccf5-4da2-b9e3-b2a85873ba4c', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content='\":')}}\n",
"{'event': 'on_parser_stream', 'name': 'JsonOutputParser', 'run_id': '91945f4f-0deb-4999-acf0-f6d191c89b34', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {'chunk': {'countries': []}}}\n",
"{'event': 'on_chain_stream', 'run_id': 'd4c78db8-be20-4fa0-87d6-cb317822967a', 'tags': ['my_chain'], 'metadata': {}, 'name': 'RunnableSequence', 'data': {'chunk': {'countries': []}}}\n",
"{'event': 'on_chain_start', 'run_id': '190875f3-3fb7-49ad-9b6e-f49da22f3e49', 'name': 'RunnableSequence', 'tags': ['my_chain'], 'metadata': {}, 'data': {'input': 'output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`'}}\n",
"{'event': 'on_chat_model_start', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'input': {'messages': [[HumanMessage(content='output a list of the countries france, spain and japan and their populations in JSON format. Use a dict with an outer key of \"countries\" which contains a list of countries. Each country should have the key `name` and `population`')]]}}}\n",
"{'event': 'on_parser_start', 'name': 'JsonOutputParser', 'run_id': '3b5e4ca1-40fe-4a02-9a19-ba2a43a6115c', 'tags': ['seq:step:2', 'my_chain'], 'metadata': {}, 'data': {}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' Here')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' is')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' the')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' JSON')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' with')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' the')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' requested')}}\n",
"{'event': 'on_chat_model_stream', 'name': 'ChatAnthropic', 'run_id': 'ff58f732-b494-4ff9-852a-783d42f4455d', 'tags': ['seq:step:1', 'my_chain'], 'metadata': {}, 'data': {'chunk': AIMessageChunk(content=' countries')}}\n",
"...\n"
]
}
@@ -1062,7 +1090,7 @@
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 20,
"id": "0e6451d3-3b11-4a71-ae19-998f4c10180f",
"metadata": {},
"outputs": [],
@@ -1104,7 +1132,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 21,
"id": "f9a8fe35-faab-4970-b8c0-5c780845d98a",
"metadata": {},
"outputs": [
@@ -1112,7 +1140,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"\n"
"['France', 'Spain', 'Japan']\n"
]
}
],
@@ -1133,7 +1161,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 22,
"id": "b08215cd-bffa-4e76-aaf3-c52ee34f152c",
"metadata": {},
"outputs": [
@@ -1141,33 +1169,33 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Chat model chunk: ''\n",
"Chat model chunk: ' Here'\n",
"Chat model chunk: ' is'\n",
"Chat model chunk: ' the'\n",
"Chat model chunk: ' JSON'\n",
"Chat model chunk: ' with'\n",
"Chat model chunk: ' the'\n",
"Chat model chunk: ' requested'\n",
"Chat model chunk: ' countries'\n",
"Chat model chunk: ' and'\n",
"Chat model chunk: ' their'\n",
"Chat model chunk: ' populations'\n",
"Chat model chunk: ':'\n",
"Chat model chunk: '\\n\\n```'\n",
"Chat model chunk: 'json'\n",
"Parser chunk: {}\n",
"Chat model chunk: '{\"'\n",
"Chat model chunk: '\\n{'\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: ' \"'\n",
"Chat model chunk: 'countries'\n",
"Chat model chunk: '\":'\n",
"Parser chunk: {'countries': []}\n",
"Chat model chunk: ' [\\n'\n",
"Chat model chunk: ' '\n",
"Chat model chunk: ' ['\n",
"Chat model chunk: '\\n '\n",
"Parser chunk: {'countries': [{}]}\n",
"Chat model chunk: ' {\"'\n",
"Chat model chunk: 'name'\n",
"Chat model chunk: '\":'\n",
"Parser chunk: {'countries': [{'name': ''}]}\n",
"Chat model chunk: ' {'\n",
"Chat model chunk: '\\n '\n",
"Chat model chunk: ' \"'\n",
"Parser chunk: {'countries': [{'name': 'France'}]}\n",
"Chat model chunk: 'France'\n",
"Chat model chunk: '\",'\n",
"Chat model chunk: ' \"'\n",
"Chat model chunk: 'population'\n",
"Chat model chunk: '\":'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': ''}]}\n",
"Chat model chunk: ' \"'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': '67'}]}\n",
"Chat model chunk: '67'\n",
"Parser chunk: {'countries': [{'name': 'France', 'population': '67 million'}]}\n",
"Chat model chunk: ' million'\n",
"Chat model chunk: '\"},\\n'\n",
"...\n"
]
}
@@ -1212,7 +1240,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 23,
"id": "1854206d-b3a5-4f91-9e00-bccbaebac61f",
"metadata": {},
"outputs": [
@@ -1220,9 +1248,9 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_tool_start', 'run_id': '39e4a7eb-c13d-46f0-99e7-75c2fa4aa6a6', 'name': 'bad_tool', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_tool_stream', 'run_id': '39e4a7eb-c13d-46f0-99e7-75c2fa4aa6a6', 'tags': [], 'metadata': {}, 'name': 'bad_tool', 'data': {'chunk': 'olleh'}}\n",
"{'event': 'on_tool_end', 'name': 'bad_tool', 'run_id': '39e4a7eb-c13d-46f0-99e7-75c2fa4aa6a6', 'tags': [], 'metadata': {}, 'data': {'output': 'olleh'}}\n"
"{'event': 'on_tool_start', 'run_id': 'ae7690f8-ebc9-4886-9bbe-cb336ff274f2', 'name': 'bad_tool', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_tool_stream', 'run_id': 'ae7690f8-ebc9-4886-9bbe-cb336ff274f2', 'tags': [], 'metadata': {}, 'name': 'bad_tool', 'data': {'chunk': 'olleh'}}\n",
"{'event': 'on_tool_end', 'name': 'bad_tool', 'run_id': 'ae7690f8-ebc9-4886-9bbe-cb336ff274f2', 'tags': [], 'metadata': {}, 'data': {'output': 'olleh'}}\n"
]
}
],
@@ -1258,7 +1286,7 @@
},
{
"cell_type": "code",
"execution_count": 25,
"execution_count": 24,
"id": "a20a6cb3-bb43-465c-8cfc-0a7349d70968",
"metadata": {},
"outputs": [
@@ -1266,11 +1294,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_tool_start', 'run_id': '4263aca5-f221-4eb7-b07e-60a89fb76c5c', 'name': 'correct_tool', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': '65e3679b-e238-47ce-a875-ee74480e696e', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': '65e3679b-e238-47ce-a875-ee74480e696e', 'tags': [], 'metadata': {}, 'data': {'input': 'hello', 'output': 'olleh'}}\n",
"{'event': 'on_tool_stream', 'run_id': '4263aca5-f221-4eb7-b07e-60a89fb76c5c', 'tags': [], 'metadata': {}, 'name': 'correct_tool', 'data': {'chunk': 'olleh'}}\n",
"{'event': 'on_tool_end', 'name': 'correct_tool', 'run_id': '4263aca5-f221-4eb7-b07e-60a89fb76c5c', 'tags': [], 'metadata': {}, 'data': {'output': 'olleh'}}\n"
"{'event': 'on_tool_start', 'run_id': '384f1710-612e-4022-a6d4-8a7bb0cc757e', 'name': 'correct_tool', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': 'c4882303-8867-4dff-b031-7d9499b39dda', 'tags': [], 'metadata': {}, 'data': {'input': 'hello'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': 'c4882303-8867-4dff-b031-7d9499b39dda', 'tags': [], 'metadata': {}, 'data': {'input': 'hello', 'output': 'olleh'}}\n",
"{'event': 'on_tool_stream', 'run_id': '384f1710-612e-4022-a6d4-8a7bb0cc757e', 'tags': [], 'metadata': {}, 'name': 'correct_tool', 'data': {'chunk': 'olleh'}}\n",
"{'event': 'on_tool_end', 'name': 'correct_tool', 'run_id': '384f1710-612e-4022-a6d4-8a7bb0cc757e', 'tags': [], 'metadata': {}, 'data': {'output': 'olleh'}}\n"
]
}
],
@@ -1295,7 +1323,7 @@
},
{
"cell_type": "code",
"execution_count": 26,
"execution_count": 25,
"id": "0ac0a3c1-f3a4-4157-b053-4fec8d2e698c",
"metadata": {},
"outputs": [
@@ -1303,11 +1331,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'run_id': '714d22d4-a3c3-45fc-b2f1-913aa7f0fc22', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': '35a6470c-db65-4fe1-8dff-4e3418601d2f', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': '35a6470c-db65-4fe1-8dff-4e3418601d2f', 'tags': [], 'metadata': {}, 'data': {'input': '1234', 'output': '4321'}}\n",
"{'event': 'on_chain_stream', 'run_id': '714d22d4-a3c3-45fc-b2f1-913aa7f0fc22', 'tags': [], 'metadata': {}, 'name': 'reverse_and_double', 'data': {'chunk': '43214321'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_and_double', 'run_id': '714d22d4-a3c3-45fc-b2f1-913aa7f0fc22', 'tags': [], 'metadata': {}, 'data': {'output': '43214321'}}\n"
"{'event': 'on_chain_start', 'run_id': '4fe56c7b-6982-4999-a42d-79ba56151176', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': '335fe781-8944-4464-8d2e-81f61d1f85f5', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': '335fe781-8944-4464-8d2e-81f61d1f85f5', 'tags': [], 'metadata': {}, 'data': {'input': '1234', 'output': '4321'}}\n",
"{'event': 'on_chain_stream', 'run_id': '4fe56c7b-6982-4999-a42d-79ba56151176', 'tags': [], 'metadata': {}, 'name': 'reverse_and_double', 'data': {'chunk': '43214321'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_and_double', 'run_id': '4fe56c7b-6982-4999-a42d-79ba56151176', 'tags': [], 'metadata': {}, 'data': {'output': '43214321'}}\n"
]
}
],
@@ -1337,7 +1365,7 @@
},
{
"cell_type": "code",
"execution_count": 27,
"execution_count": 26,
"id": "c896bb94-9d10-41ff-8fe2-d6b05b1ed74b",
"metadata": {},
"outputs": [
@@ -1345,11 +1373,11 @@
"name": "stdout",
"output_type": "stream",
"text": [
"{'event': 'on_chain_start', 'run_id': '17c89289-9c71-406d-90de-86f76b5e798b', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': 'b1105188-9196-43c1-9603-4f2f58e51de4', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': 'b1105188-9196-43c1-9603-4f2f58e51de4', 'tags': [], 'metadata': {}, 'data': {'input': '1234', 'output': '4321'}}\n",
"{'event': 'on_chain_stream', 'run_id': '17c89289-9c71-406d-90de-86f76b5e798b', 'tags': [], 'metadata': {}, 'name': 'reverse_and_double', 'data': {'chunk': '43214321'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_and_double', 'run_id': '17c89289-9c71-406d-90de-86f76b5e798b', 'tags': [], 'metadata': {}, 'data': {'output': '43214321'}}\n"
"{'event': 'on_chain_start', 'run_id': '7485eedb-1854-429c-a2f8-03d01452daef', 'name': 'reverse_and_double', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_start', 'name': 'reverse_word', 'run_id': 'e7cddab2-9b95-4e80-abaf-4b2429117835', 'tags': [], 'metadata': {}, 'data': {'input': '1234'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_word', 'run_id': 'e7cddab2-9b95-4e80-abaf-4b2429117835', 'tags': [], 'metadata': {}, 'data': {'input': '1234', 'output': '4321'}}\n",
"{'event': 'on_chain_stream', 'run_id': '7485eedb-1854-429c-a2f8-03d01452daef', 'tags': [], 'metadata': {}, 'name': 'reverse_and_double', 'data': {'chunk': '43214321'}}\n",
"{'event': 'on_chain_end', 'name': 'reverse_and_double', 'run_id': '7485eedb-1854-429c-a2f8-03d01452daef', 'tags': [], 'metadata': {}, 'data': {'output': '43214321'}}\n"
]
}
],
@@ -1385,7 +1413,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
"version": "3.9.6"
}
},
"nbformat": 4,


@@ -40,26 +40,7 @@
"id": "b99b47ec",
"metadata": {},
"source": [
"%pip install --upgrade --quiet langchain-core langchain-openai"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "dbeac2b8-c441-4d8d-b313-1de0ab9c7e51",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"\n",
"prompt = ChatPromptTemplate.from_template(\"Tell me a short joke about {topic}\")\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"output_parser = StrOutputParser()\n",
"\n",
"chain = prompt | model | output_parser"
"%pip install --upgrade --quiet langchain-core langchain-openai langchain-anthropic"
]
},
{
@@ -127,6 +108,9 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import ChatOpenAI\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"\n",
"\n",
@@ -476,7 +460,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_anthropic import ChatAnthropic\n",
"\n",
"anthropic = ChatAnthropic(model=\"claude-2\")\n",
"anthropic_chain = (\n",
@@ -1010,7 +994,7 @@
"source": [
"import os\n",
"\n",
"from langchain_community.chat_models import ChatAnthropic\n",
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_openai import ChatOpenAI\n",
"from langchain_openai import OpenAI\n",
"from langchain_core.output_parsers import StrOutputParser\n",
@@ -1070,9 +1054,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "poetry-venv",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "poetry-venv"
"name": "python3"
},
"language_info": {
"codemirror_mode": {


@@ -14,7 +14,16 @@ This framework consists of several parts.
- **[LangServe](/docs/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](/svg/langchain_stack.svg "LangChain Framework Overview")
import ThemedImage from '@theme/ThemedImage';
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
light: '/svg/langchain_stack.svg',
dark: '/svg/langchain_stack_dark.svg',
}}
title="LangChain Framework Overview"
/>
Together, these products simplify the entire application lifecycle:
- **Develop**: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.
@@ -93,6 +102,3 @@ Head to the reference section for full documentation of all classes and methods
### [Developer's guide](/docs/contributing)
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
### [Community](/docs/community)
Head to the [Community navigator](/docs/community) to find places to ask questions, share feedback, meet other developers, and dream about the future of LLMs.
