Compare commits

...

1352 Commits

Author SHA1 Message Date
Erick Friis
98376c22c1 x 2024-05-29 20:33:06 -07:00
Erick Friis
6f01d936f5 Merge branch 'master' into erick/docs-rewrite-contributor-docs 2024-05-29 11:15:24 -07:00
Erick Friis
600cdd9828 x 2024-05-29 09:31:05 -07:00
ccurme
af1f723ada openai: don't override stream_options default (#22242)
ChatOpenAI supports a kwarg `stream_options` which can take values
`{"include_usage": True}` and `{"include_usage": False}`.

Setting include_usage to True adds a message chunk to the end of the
stream with usage_metadata populated. In this case the final chunk no
longer includes `"finish_reason"` in the `response_metadata`. This is
the current default and is not yet released. Because this could be
disruptive to workflows, here we remove this default. The default will
now be consistent with OpenAI's API (see parameter
[here](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream_options)).

Examples:
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

for chunk in llm.stream("hi"):
    print(chunk)
```
```
content='' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='Hello' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='!' id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
content='' response_metadata={'finish_reason': 'stop'} id='run-8cff4721-2acd-4551-9bf7-1911dae46b92'
```

```python
for chunk in llm.stream("hi", stream_options={"include_usage": True}):
    print(chunk)
```
```
content='' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='Hello' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='!' id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='' response_metadata={'finish_reason': 'stop'} id='run-39ab349b-f954-464d-af6e-72a0927daa27'
content='' id='run-39ab349b-f954-464d-af6e-72a0927daa27' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```

```python
llm = ChatOpenAI().bind(stream_options={"include_usage": True})

for chunk in llm.stream("hi"):
    print(chunk)
```
```
content='' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='Hello' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='!' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='' response_metadata={'finish_reason': 'stop'} id='run-59918845-04b2-41a6-8d90-f75fb4506e0d'
content='' id='run-59918845-04b2-41a6-8d90-f75fb4506e0d' usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```
2024-05-29 10:30:40 -04:00
Karim Lalani
a1899439fc [experimental][llms][ollama_functions] Update OllamaFunctions to send tool_calls attribute (#21625)
Update OllamaFunctions to return `tool_calls` for AIMessages when used
for tool calling.
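A hedged sketch of how the updated behavior surfaces, assuming a local Ollama model that supports tool calling (the model name and tool are illustrative):

```python
from langchain_core.tools import tool
from langchain_experimental.llms.ollama_functions import OllamaFunctions


@tool
def get_weather(city: str) -> str:
    """Return the current weather for a city."""
    return f"Sunny in {city}"


llm = OllamaFunctions(model="llama3", format="json")
ai_msg = llm.bind_tools([get_weather]).invoke("What's the weather in Paris?")

# With this change the parsed calls land on the standard attribute:
print(ai_msg.tool_calls)
# e.g. [{"name": "get_weather", "args": {"city": "Paris"}, "id": "..."}]
```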
2024-05-29 09:38:33 -04:00
Erick Friis
e56e31161e x 2024-05-28 17:10:21 -07:00
Erick Friis
cf31a0a3f0 Merge branch 'master' into erick/docs-rewrite-contributor-docs 2024-05-28 16:18:13 -07:00
Bagatur
d61bdeba25 core[patch]: allow access RunnableWithFallbacks.runnable attrs (#22139)
RFC, candidate fix for #13095 #22134
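A hedged sketch of what this enables, assuming attribute lookups on the wrapper now fall through to the wrapped `runnable` (model names are illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o").with_fallbacks(
    [ChatAnthropic(model="claude-3-haiku-20240307")]
)

# Attributes defined on the primary runnable can now be read through the wrapper:
print(llm.model_name)  # "gpt-4o"
```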
2024-05-28 13:18:09 -07:00
SteveLiao
7496fe2b16 Update parent_document_retriever.py about **kwargs (#22219)
Add kwargs in add_documents function

**langchain**: Add **kwargs in parent_document_retriever
 - **Add kwargs for `add_documents` in `parent_document_retriever.py`**
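A hedged sketch of the passthrough (the retriever setup is the standard recipe; `batch_size` is a hypothetical kwarg forwarded to the underlying vector store's `add_documents`):

```python
from langchain.retrievers import ParentDocumentRetriever
from langchain.storage import InMemoryStore
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

retriever = ParentDocumentRetriever(
    vectorstore=Chroma(collection_name="parents", embedding_function=OpenAIEmbeddings()),
    docstore=InMemoryStore(),
    child_splitter=RecursiveCharacterTextSplitter(chunk_size=200),
)
docs = [Document(page_content="LangChain makes it easy to build LLM apps.")]

# With this change, extra keyword arguments are accepted and passed through:
retriever.add_documents(docs, batch_size=32)  # hypothetical passthrough kwarg
```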


2024-05-28 11:35:38 -07:00
Mark Cusack
8dfa3c5f1a Update/fix docs to list Yellowbrick as a supported indexed vectorstore (#22235)
Update/fix docs to list Yellowbrick as a supported indexed vectorstore
and fix the Jupyter notebook.
2024-05-28 11:34:49 -07:00
Erick Friis
93240fac68 milvus: fix core dep (#22239) 2024-05-28 10:21:37 -07:00
Erick Friis
611faa22c7 infra: allow first releases 2 (#22237) 2024-05-28 09:53:21 -07:00
Erick Friis
26c6e4a5ef infra: allow first releases (#22236) 2024-05-28 09:39:40 -07:00
ChengZi
404d92ded0 milvus: New langchain_milvus package and new milvus features (#21077)
New features:

- New langchain_milvus package in partner
- Milvus collection hybrid search retriever
- Zilliz cloud pipeline retriever
- Milvus Local guide
- Rag-milvus template

---------

Signed-off-by: ChengZi <chen.zhang@zilliz.com>
Signed-off-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Jael Gu <mengjia.gu@zilliz.com>
Co-authored-by: Jackson <jacksonxie612@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-05-28 08:24:20 -07:00
Leonid Ganeline
d7f70535ba docs: arxiv page, added cookbooks (#22215)
Issue: The `arXiv` page is missing the arxiv paper references from the
`langchain/cookbook`.
PR: Added the cookbook references.
Result: `Found 29 arXiv references in the 3 docs, 21 API Refs, 5
Templates, and 18 Cookbooks.` - many more references are now visible.
2024-05-27 15:47:02 -07:00
Leonid Ganeline
d6995e814b ai21[patch]: added license (#22153)
The `pyproject.toml` was missing the `license` parameter. I've added it as
`MIT`.
2024-05-27 15:14:14 -07:00
Maddy Adams
8332a36f69 infra: update langchainhub and add integration test (#22154)
**Description:** Update langchainhub integration test dependency and add
an integration test for pulling private prompt
**Dependencies:** langchainhub 0.1.16
2024-05-27 14:58:10 -07:00
Will Higgins
83d10df78d community[patch]: Update firecrawl api key name (#22183)
Change 'FIREWALL' to 'FIRECRAWL' as I believe this may have been in
error. Other docs refer to 'FIRECRAWL_API_KEY'.


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:39:29 +00:00
hmasdev
bbd7015b5d core[patch]: Add TypeError handler into get_graph of Runnable (#19856)
# Description

## Problem

`Runnable.get_graph` fails when `InputType` or `OutputType` property
raises `TypeError`.

-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L250-L274)
-
003c98e5b4/libs/core/langchain_core/runnables/base.py (L394-L396)

This problem prevents getting a graph of `Runnable` objects whose
`InputType` or `OutputType` property raises `TypeError` but whose
`invoke` works well, such as `langchain.output_parsers.RegexParser`,
for which I already pointed out in #19792 that a `TypeError` would
occur.

## Solution

- Add `try`-`except` blocks handling `TypeError` to the code that gets
`input_node` and `output_node`.
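A rough sketch of the pattern (a hypothetical helper, not the exact patch in `langchain_core/runnables/base.py`):

```python
from typing import Any, Optional


def node_schema_or_fallback(runnable: Any, fallback: Optional[type] = None) -> Any:
    """Return the runnable's input schema, tolerating properties that raise TypeError."""
    try:
        return runnable.get_input_schema()
    except TypeError:
        # e.g. RegexParser's type properties raise TypeError; fall back so
        # get_graph() can still draw a node for this runnable.
        return fallback
```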

# Issue
- #19801 

# Twitter Handle
- [hmdev3](https://twitter.com/hmdev3)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:34:34 +00:00
acho98
753353411f docs: Fix Clova embeddings example document (#22181)
- [ ] **PR title**: "Fix list handling in Clova embeddings example
documentation"
  - Description:
Fixes a bug in the Clova Embeddings example documentation where
document_text was incorrectly wrapped in an additional list.
   - Rationale
The embed_documents method expects a list, but the previous example
wrapped document_text in an unnecessary additional list, causing an
error. The updated example correctly passes document_text directly to
the method, ensuring it functions as intended.
2024-05-27 14:31:34 -07:00
Mohammad Mohtashim
577ed68b59 mistralai[patch]: Added Json Mode for ChatMistralAI (#22213)
- **Description:** Powered
[ChatMistralAI.with_structured_output](fbfed65fb1/libs/partners/mistralai/langchain_mistralai/chat_models.py (L609))
via json mode
 

-  **Issue:** #22081
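A hedged usage sketch, assuming the new mode is selected via `method="json_mode"` on `with_structured_output` (the model name is illustrative; in JSON mode the prompt should ask for JSON output):

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_mistralai import ChatMistralAI


class Joke(BaseModel):
    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline of the joke")


llm = ChatMistralAI(model="mistral-large-latest")
structured_llm = llm.with_structured_output(Joke, method="json_mode")
structured_llm.invoke(
    "Tell me a joke about cats. Respond in JSON with `setup` and `punchline` keys."
)
```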

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-27 21:16:52 +00:00
Pranith
25c270b5a5 docs : Added integrations for tools with langchain_community (#22188)
PR title: Docs enhancement

Description: Adding installation instructions for integrations requiring
langchain-community package since 0.2
Issue: https://github.com/langchain-ai/langchain/issues/22005
2024-05-27 14:06:40 -07:00
Ibrahim
cfea0e231a Update llm_chain.ipynb text (#22198)
Added the missing verb "is" and a comma to the text in the Prompt
Templates description within the Build a Simple LLM Application tutorial
for more clarity.
2024-05-27 19:57:41 +00:00
Aditya
bf81ecd3b4 docs:updated documentation for llama, falcon and gemma on Vertex AI Model garden (#22201)
- **Description:** updated documentation for llama, falcon and gemma on
Vertex AI Model garden
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA

@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-05-27 12:56:11 -07:00
Pavlo Paliychuk
342df7cf83 community[minor]: Add Zep Cloud components + docs + examples (#21671)

- [x] **PR title**: community: Add Zep Cloud components + docs +
examples

- [x] **PR message**: 
We have recently released our new zep-cloud SDKs that are compatible
with Zep Cloud (not Zep Open Source). We have also maintained our Cloud
version of langchain components (ChatMessageHistory, VectorStore) as
part of our SDKs. This PR's goal is to port these components to the
langchain community repo, and close the gap with the existing Zep Open
Source components already present in the community repo (added
ZepCloudMemory, ZepCloudVectorStore, ZepCloudRetriever).
Also added a ZepCloudChatMessageHistory component together with an
expression language example ported from our repo. We have left the
original open source components intact on purpose so as not to introduce
any breaking changes.
    - **Issue:** -
- **Dependencies:** Added optional dependency of our new cloud sdk
`zep-cloud`
    - **Twitter handle:** @paulpaliychuk51


- [x] **Add tests and docs**


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-05-27 12:50:13 -07:00
Jan Soubusta
cccc8fbe2f community[patch]: DuckDB VS - expose similarity, improve performance of from_texts (#20971)
3 fixes of DuckDB vector store:
- unify defaults in the constructor and `from_texts` (users no longer have to
specify `vector_key`).
- include search similarity in output metadata (fixes #20969)
- significantly improve performance of `from_documents`

Dependencies: added Pandas to speed up `from_documents`.
I was thinking about CSV and JSON options, but I expect trouble loading
JSON values this way and also CSV and JSON options require storing data
to disk.
Anyway, the poetry file for langchain-community already contains a
dependency on Pandas.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-24 15:17:52 -07:00
Surya Pratap Singh Shekhawat
42207f5bef Update agent_executor.ipynb (#22104)
fixed typos in the doc.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-24 22:14:41 +00:00
Erick Friis
8acadc34f5 docs: edit links, direct for notebooks (#22051) 2024-05-24 19:44:46 +00:00
Erick Friis
42ffcb2ff1 anthropic: release 0.1.14rc2, test release note gen (#22147) 2024-05-24 12:40:10 -07:00
Erick Friis
6ee8de62c0 infra: auto-generated release notes based on git log (#22141)
Generates release notes based on a `git log` command with title names

Aiming to improve this by splitting out features vs. bugfixes using
conventional commits in the coming weeks.

Will work for any monorepo package.
2024-05-24 11:43:28 -07:00
Ameya Shenoy
8ba492ed6a community[minor]: clickhouse -- ability to use secure connection (#22108)
- **Description:** this PR gives the ClickHouse client the ability to use a
secure connection to the ClickHouse server
- **Issue:** fixes #22082
- **Dependencies:** -
- **Twitter handle:** `_codingcoffee_`

Signed-off-by: Ameya Shenoy <shenoy.ameya@gmail.com>
Co-authored-by: Shresth Rana <shresth@grapevine.in>
2024-05-24 17:30:22 +00:00
ccurme
9a010fb761 openai: read stream_options (#21548)
OpenAI recently added a `stream_options` parameter to its chat
completions API (see [release
notes](https://platform.openai.com/docs/changelog/added-chat-completions-stream-usage)).
When this parameter is set to `{"include_usage": True}`, an extra "empty"
message is added to the end of a stream containing token usage. Here we
propagate token usage to `AIMessage.usage_metadata`.

We enable this feature by default. Streams would now include an extra
chunk at the end, **after** the chunk with
`response_metadata={'finish_reason': 'stop'}`.

New behavior:
```
[AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='Hello', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='!', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde'),
 AIMessageChunk(content='', id='run-4b20dbe0-3817-4f62-b89d-03ef76f25bde', usage_metadata={'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17})]
```

Old behavior (accessible by passing `stream_options={"include_usage":
False}` into (a)stream):
```
[AIMessageChunk(content='', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='Hello', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='!', id='run-1312b971-c5ea-4d92-9015-e6604535f339'),
 AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-1312b971-c5ea-4d92-9015-e6604535f339')]
```

From what I can tell this is not yet implemented in Azure, so we enable
only for ChatOpenAI.
2024-05-24 13:20:56 -04:00
Patrick Zhang
eb7c767e5b docs: update the name of the tool passio_nutrition_ai (#22116)
Updating the name of the Passio Nutrition AI tool so that the name of
the tool is correctly displayed in the sidebar menu.

Currently the name of the tool says "Quickstart" in the sidebar.
The patch fixes the name to be Passio Nutrition AI.

<img width="681" alt="image"
src="https://github.com/langchain-ai/langchain/assets/4603110/9609975e-78ea-4032-9024-10c4f838170a">
2024-05-24 17:15:16 +00:00
Leonid Ganeline
fd4ee08167 docs: integrations/platforms/microsoft update (#22100)
Added the `Azure Container Apps dynamic sessions` tool reference
2024-05-24 13:14:51 -04:00
Rahul Triptahi
1a485f59b9 community[patch]: Put authorized identities behind a feature flag in SharepointLoader (#22125)
Description: Put authorized identities behind a feature flag, `load_auth`.
Documentation: N/A
Unit tests: N/A

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-05-24 12:42:57 -04:00
Anindyadeep
ee689412ab docs: Update PremAI Docs (#22114)

- [X] **PR title**: community: Updated langchain-community PremAI
documentation

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-05-24 11:55:32 -04:00
sasha
1c9ceff503 community: add metadata to chain logging; (#22122)
Hey, I'm Sasha, the SDK engineer from [Comet](https://comet.com).
This PR updates the CometTracer class. Added metadata to CometTracer. From
now on, both chains and spans will send it.
2024-05-24 15:29:40 +00:00
Jirka Lhotka
7c0459faf2 community: Update costs of openai finetuned models (#22124)
- **Description:** Update costs of finetuned models and add
gpt-3.5-turbo-0125. Source: https://openai.com/api/pricing/
  - **Issue:** N/A
  - **Dependencies:** None
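These cost tables surface through the OpenAI callback handler; a small sketch of where the updated prices show up (the model name is illustrative):

```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0125")
with get_openai_callback() as cb:
    llm.invoke("Tell me a joke")
    # Cost is computed from the per-model price table updated in this PR.
    print(cb.total_tokens, cb.total_cost)
```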
2024-05-24 15:25:17 +00:00
Eugene Yurtsev
d3db83abe3 community[major]: lint for usage of xml library (#22132)
* Lint for usage of standard xml library
* Add forced opt-in for quip client
* The actual security issue is with the underlying QuipClient, not the
LangChain integration (since the client is doing the parsing), but we are
adding enforcement at the LangChain level.
2024-05-24 15:23:53 +00:00
Tom Aarsen
5b5ea2af30 docs: Add explanation on how to use Hugging Face embeddings (#22118)
- **Description:** I've added a tab on embedding text with LangChain
using Hugging Face models to here:
https://python.langchain.com/v0.2/docs/how_to/embed_text/. HF was
mentioned in the running text, but not in the tabs, which I thought was
odd.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** No need, this is tiny :) 

Also, I had a ton of issues with the poetry docs/lint install, so I
haven't linted this. Apologies for that.

cc @Jofthomas 

- Tom Aarsen
2024-05-24 11:21:03 -04:00
Bagatur
baa3c975cb anthropic[patch]: allow tool call mutation (#22130)
If tool_use blocks and tool_calls with overlapping IDs are present,
prefer the values of the tool_calls. Allows for mutating AIMessages just
via tool_calls.
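A hedged illustration of the precedence rule described above (all values are made up):

```python
from langchain_core.messages import AIMessage

# An AIMessage whose content still carries the original tool_use block, but
# whose parsed tool_calls were edited afterwards. With this change, the
# tool_calls values win when the message is converted back into an Anthropic request.
msg = AIMessage(
    content=[
        {"type": "tool_use", "id": "call_1", "name": "get_weather", "input": {"city": "Paris"}}
    ],
    tool_calls=[{"name": "get_weather", "args": {"city": "London"}, "id": "call_1"}],
)
# When passed back to ChatAnthropic, the request will use {"city": "London"}.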
2024-05-24 08:18:14 -07:00
Christophe Bornet
c838de5027 doc: Add doc for CassandraByteStore (#22126)
Preview:
https://langchain-git-fork-cbornet-doc-cassandrabytestore-langchain.vercel.app/v0.2/docs/integrations/stores/cassandra/
2024-05-24 10:57:55 -04:00
Vadym Barda
2edb512282 docs: improve how-to docs for message history (#22072)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-23 20:12:24 -04:00
Artem
eb7c453b98 docs: update hub.pull("rlm/map-prompt") to hub.pull("rlm/reduce-prompt") for reduce prompt (#22088)
**PR message**: 
Update `hub.pull("rlm/map-prompt")` to `hub.pull("rlm/reduce-prompt")`
in summarization.ipynb

**Description:** 
Fix typo in prompt hub link from `reduce_prompt =
hub.pull("rlm/map-prompt")` to `reduce_prompt =
hub.pull("rlm/reduce-prompt")` following next issue

**Issue:** #22014

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-23 23:07:37 +00:00
Leonid Ganeline
2416737c5f docs: compact the API Reference links (#21285)
This PR is opinionated. 
Issue: the `API Reference` sections in the examples hold too much
vertical space and make us scroll the page too much. See an
[example](https://python.langchain.com/docs/get_started/quickstart/#conversation-retrieval-chain).
These sections are **important**. So, the compacting should not make
these sections less noticeable.
Change: compacting the `API Reference` sections. See the [same example
after change
applied](https://langchain-j6nya46lf-langchain.vercel.app/docs/get_started/quickstart/#conversation-retrieval-chain).
It is more compact and now looks like references (footnotes).
Note: I would also change the section style, so it would be more
noticeable (maybe to look like the footnotes. Smaller wider font?)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 15:50:23 -07:00
ccurme
0ea1e89b2c groq: read tool calls from .tool_calls attribute (#22096) 2024-05-23 18:16:06 -04:00
Bagatur
96c21dfe56 docs: hf feat table tool calling (#22091) 2024-05-23 15:09:30 -07:00
Eugene Yurtsev
63004a0945 codespell ignore remaining issues (#22097) 2024-05-23 21:51:39 +00:00
Eugene Yurtsev
2d693c484e docs: fix some spelling mistakes caught by newest version of code spell (#22090)
Going to merge this even though it doesn't pass all tests, and open a
separate PR for the remaining spelling mistakes.
2024-05-23 16:59:11 -04:00
Bagatur
38783d07c9 infra: api docs quick preview (#22093) 2024-05-23 13:57:45 -07:00
Pavel Zloi
fe26f937e4 community[minor]: ManticoreSearch engine added to vectorstore (#19117)
**Description:** ManticoreSearch engine added to vectorstores
**Issue:** no issue, just a new feature
**Dependencies:** https://pypi.org/project/manticoresearch-dev/
**Twitter handle:** @EvilFreelancer

- Example notebook with test integration:

https://github.com/EvilFreelancer/langchain/blob/manticore-search-vectorstore/docs/docs/integrations/vectorstores/manticore_search.ipynb

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 13:56:18 -07:00
Erick Friis
95c3e5f85f cli: model name substitution fix, release 0.0.23 (#22089) 2024-05-23 13:09:38 -07:00
Kartheek Yakkala
18b8c8628a docs : Added integrations for tools with langchain_community (#22056)
- **PR title**:  Docs enhancement

- **Description:** Adding installation instructions for integrations
requiring `langchain-community` package since 0.2
    - **Issue:** https://github.com/langchain-ai/langchain/issues/22005
2024-05-23 15:09:34 -04:00
ccurme
152c8cac33 anthropic, openai: cut pre-releases (#22083) 2024-05-23 15:02:23 -04:00
ccurme
cd07521170 core: bump to 0.2.1rc (#22080) 2024-05-23 18:36:50 +00:00
Harrison Chase
170cc8aec3 docs: add multi-modal-docs (#21734)
We don't really have any abstractions around multi-modal... so add a
section explaining we don't have any abstractions, and then how-to guides
for OpenAI and Anthropic (probably need to add more)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: junefish <junefish@users.noreply.github.com>
Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-23 18:33:25 +00:00
ccurme
fbfed65fb1 core, partners: add token usage attribute to AIMessage (#21944)
```python
class UsageMetadata(TypedDict):
    """Usage metadata for a message, such as token counts.

    Attributes:
        input_tokens: (int) count of input (or prompt) tokens
        output_tokens: (int) count of output (or completion) tokens
        total_tokens: (int) total token count
    """

    input_tokens: int
    output_tokens: int
    total_tokens: int
```
```python
class AIMessage(BaseMessage):
    ...
    usage_metadata: Optional[UsageMetadata] = None
    """If provided, token usage information associated with the message."""
    ...
```
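A quick usage sketch, assuming a provider that populates the new field (model name and counts are illustrative):

```python
from langchain_openai import ChatOpenAI

msg = ChatOpenAI(model="gpt-3.5-turbo").invoke("hello")
print(msg.usage_metadata)
# e.g. {'input_tokens': 8, 'output_tokens': 9, 'total_tokens': 17}
```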
2024-05-23 14:21:58 -04:00
Bagatur
3d26807b92 community[patch]: Release. 0.2.1 (#22073) 2024-05-23 10:40:32 -07:00
Bagatur
2d968213d7 langchain[patch]: Release 0.2.1 (#22074) 2024-05-23 10:09:36 -07:00
maang-h
9aba9e3e33 community[patch]: Update the default “API URL” and “MODEL” of sparkllm (#22070)
- **Description:** When I was running SparkLLM, I found that the
default parameters currently used no longer work correctly.
    - original parameters & values:
         - spark_api_url: "wss://spark-api.xf-yun.com/v3.1/chat"
         - spark_llm_domain: "generalv3"
    ```python
    # example
    from langchain_community.chat_models import ChatSparkLLM

    spark = ChatSparkLLM(
        spark_app_id="my_app_id",
        spark_api_key="my_api_key",
        spark_api_secret="my_api_secret",
    )
    spark.invoke("hello")
    ```

![sparkllm](https://github.com/langchain-ai/langchain/assets/55082429/5369bfdf-4305-496a-bcf5-2d3f59d39414)

So I updated them to 3.5 (same as sparkllm official website). After the
update, they can be used normally.
    - new parameters & values:
         - spark_api_url: "wss://spark-api.xf-yun.com/v3.5/chat"
         - spark_llm_domain: "generalv3.5"
2024-05-23 12:25:20 -04:00
junkeon
4fda7bf4f2 upstage[patch] : fix error handling in Layout Analysis parser (#22054)
This pull request addresses and fixes exception handling in the
UpstageLayoutAnalysisParser and enhances the test coverage by adding
error exception tests for the document loader. These improvements ensure
robust error handling and increase the reliability of the system when
dealing with external API calls and JSON responses.

### Changes Made
1. Fix Request Exception Handling:

- Issue: The existing implementation of UpstageLayoutAnalysisParser did
not properly handle exceptions thrown by the requests library, which
could lead to unhandled exceptions and potential crashes.
- Solution: Added comprehensive exception handling for
requests.RequestException to catch any request-related errors. This
includes logging the error details and raising a ValueError with a
meaningful error message.

2. Add Error Exception Tests for Document Loader:

- New Tests: Introduced new test cases to verify the robustness of the
UpstageLayoutAnalysisLoader against various error scenarios. The tests
ensure that the loader gracefully handles:
- RequestException: Simulates network issues or invalid API requests to
ensure appropriate error handling and user feedback.
- JSONDecodeError: Simulates scenarios where the API response is not a
valid JSON, ensuring the system does not crash and provides clear error
messaging.
2024-05-23 11:45:34 -04:00
JuHyung Son
d9eff44400 partner-upstage[patch]: embeddings empty list bug (#22057)
Fixed an error in `embed_documents` when the input was given as an empty
list, and revised the documentation.
2024-05-23 11:44:30 -04:00
Martin Triska
2df8ac402a community[minor]: Added propagation of document metadata from O365BaseLoader (#20663)
**Description:**
- Added propagation of document metadata from O365BaseLoader to
FileSystemBlobLoader (O365BaseLoader uses FileSystemBlobLoader under the
hood).
- This is done by passing dictionary `metadata_dict`: key=filename and
value=dictionary containing document's metadata
- Modified `FileSystemBlobLoader` to accept the `metadata_dict`, use
`mimetype` from it (if available) and pass metadata further into blob
loader.

**Issue:**
- `O365BaseLoader` under the hood downloads documents to temp folder and
then uses `FileSystemBlobLoader` on it.
- However metadata about the document in question is lost in this
process. In particular:
- `mime_type`: `FileSystemBlobLoader` guesses `mime_type` from the file
extension, but that does not work 100% of the time.
- `web_url`: this is useful to keep around since in RAG LLM we might
want to provide link to the source document. In order to work well with
document parsers, we pass the `web_url` as `source` (`web_url` is
ignored by parsers, `source` is preserved)

**Dependencies:**
None

**Twitter handle:**
@martintriska1

Please review @baskaryan

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-05-23 11:42:19 -04:00
Eugene Yurtsev
e5541d1da7 community[patch]: Update doc-string in CloudBlobLoader (#22069)
Update doc-string
2024-05-23 15:31:41 +00:00
Maxime Perrin
8ba4f77734 docs : Adding correct imports to the integrations callbacks doc (#22059)
- **Description:** Adding correct imports to the integrations callbacks
doc (langchain-community package)
  - **Issue:** #22005

---------

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
2024-05-23 11:27:36 -04:00
Philippe PRADOS
6dd621d636 community[minor]: Add CloudBlobLoader that supports loading data from cloud buckets (#21957)
Thank you for contributing to LangChain!

- [ ] **PR title**: "Add CloudBlobLoader"
  - community: Add CloudBlobLoader

- [ ] **PR message**: Add cloud blob loader
    - **Description:** 
LangChain provides several approaches to read different file formats:

Specific loaders (`CSVLoader`) or blob-compatible loaders
(`FileSystemBlobLoader`). The only implementation proposed for
BlobLoader is `FileSystemBlobLoader`.
      
Many projects retrieve files from cloud storage. We propose a new
implementation of `BlobLoader` to read files from the three cloud
storage systems. The interface is strictly identical to
`FileSystemBlobLoader`. The only difference is the constructor, which
takes a cloud "url" object such as `s3://my-bucket`, `az://my-bucket`,
or `gs://my-bucket`.
      
By streamlining the process, this novel implementation eliminates the
requirement to pre-download files from cloud storage to local temporary
files (which are seldom removed).
      
The code relies on the
[CloudPathLib](https://cloudpathlib.drivendata.org/stable/) library to
interpret cloud URLs. This has been added as an optional dependency.

```Python
loader = CloudBlobLoader("s3://mybucket/id")
for blob in loader.yield_blobs():
    print(blob)
```

- [X] **Dependencies:** CloudPathLib
- [X] **Twitter handle:** pprados


- [X] **Add tests and docs**: Add unit test, but it's easy to convert to
integration test, with some files in a cloud storage (see
`test_cloud_blob_loader.py`)

- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.

Hello from Paris @hwchase17. Can you review this PR?

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-05-23 10:59:55 -04:00
Christophe Bornet
74947ec894 community[minor]: Add Cassandra ByteStore (#22064) 2024-05-23 10:46:23 -04:00
Christophe Bornet
fea6b99b16 community[minor]: Add async methods to CassandraChatMessageHistory (#21975) 2024-05-23 10:13:05 -04:00
Eugene Yurtsev
37cfc00310 docs: concepts callbacks fix admonition (#22048)
Correct the admonition text
2024-05-22 20:33:28 -04:00
Erick Friis
53293dace8 docs: version increases (#22050) 2024-05-22 16:20:10 -07:00
Sky
12d65f17ff community[patch]: surrealdb provide functions for MMR (Maximal Marginal Relevance) (#21185)
This PR contains 4 added functions:

- max_marginal_relevance_search_by_vector
- amax_marginal_relevance_search_by_vector
- max_marginal_relevance_search
- amax_marginal_relevance_search

I'm no langchain expert, but tried to inspect other vectorstore sources
like chroma, to build these functions for SurrealDB. If someone has some
changes for me, please let me know. Otherwise I would be happy if these
changes are added to the repository, so that I can use the original repo
and not my local monkey-patched version.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 22:53:55 +00:00
Erick Friis
58b6c72375 docs: add astream v2 migration guide links (#21845)
- docs: v0.2 version sidebar
- x
- x
2024-05-22 15:48:42 -07:00
Bruno Alvisio
5eabe90494 community[patch]: Adding HEADER to the list of supported locations (#21946)
**Description:** adds headers to the list of supported locations when
generating the openai function schema
2024-05-22 22:47:56 +00:00
Bagatur
50186da0a1 infra: rm unused # noqa violations (#22049)
Updating #21137
2024-05-22 15:21:08 -07:00
acho98
45ed5f3f51 community[minor]: Add Clova Embeddings for LangChain Community (#21890)
- [ ] **PR title**: "Add Naver ClovaX embedding to LangChain community"
- HyperClovaX is a large language model developed by
[Naver](https://clova-x.naver.com/welcome).
It's a powerful and purpose-trained LLM.

- You can visit the embedding service provided by
[ClovaX](https://www.ncloud.com/product/aiService/clovaStudio)

- You may get CLOVA_EMB_API_KEY, CLOVA_EMB_APIGW_API_KEY,
CLOVA_EMB_APP_ID from
https://www.ncloud.com/product/aiService/clovaStudio

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 22:08:47 +00:00
arpitkumar980
444c2a3d9f community[patch]: sharepoint loader identity enabled (#21176)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-22 22:08:31 +00:00
Eugene Yurtsev
8a877120c3 docs: add admonitions to how-to callbacks (#22046)
Add admonitions with more information.
2024-05-22 22:05:57 +00:00
HuiyuanYan
bf3aefce93 community[patch]: Update tongyi.py to support MultimodalConversation in dashscope. (#21249)
Add support for multimodal conversation in dashscope; now we can use the
multimodal language models "qwen-vl-v1", "qwen-vl-chat-v1", and
"qwen-audio-turbo" to process pictures and audio. :)

- [ ] **PR title**: "community: add multimodal conversation support in
dashscope"



    - **Description:** add multimodal conversation support in dashscope
    - **Issue:** 
    - **Dependencies:** dashscope≥1.18.0
    - **Twitter handle:** none :)


- [ ] **How to use it?**:
   ```python
   from langchain_community.chat_models import ChatTongyi

   Tongyi_chat = ChatTongyi(
       top_p=0.5,
       dashscope_api_key=api_key,
       model="qwen-vl-v1",
   )
   response = Tongyi_chat.invoke(
       input=[
           {
               "role": "user",
               "content": [
                   {"image": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"},
                   {"text": "这是什么?"},  # "What is this?"
               ],
           }
       ]
   )
   ```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 22:04:58 +00:00
mochi
63284ffebf experimental[patch], docs: refine notebook for MyScale SelfQueryRetriever (#22016)
- **Description:** upgrade model to `gpt-4o`
2024-05-22 21:49:01 +00:00
MSubik
d948783a4c community[patch]: standardize init args, update for javelin sdk release. (#21980)
Related to
[20085](https://github.com/langchain-ai/langchain/issues/20085). Updated
the Javelin chat model to standardize the initialization argument. Also
fixed an existing bug where the code made an incorrect call to the
JavelinClient defined in the javelin_sdk, resulting in an initialization
error. See related [Javelin
Documentation](https://docs.getjavelin.io/docs/javelin-python/quickstart).
2024-05-22 21:47:28 +00:00
Mohammad Mohtashim
16617dd239 community[patch]: AzureSearchVectorStoreRetriever Fixed to account for search_kwargs (#21572)
- **Description:** Fixed `AzureSearchVectorStoreRetriever` to account
for search_kwargs. More explanation is in the mentioned issue.
- **Issue:** #21492

---------

Co-authored-by: MAC <mac@MACs-MacBook-Pro.local>
Co-authored-by: Massimiliano Pronesti <massimiliano.pronesti@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 14:46:41 -07:00
Klaudia Lemiec
45351d1bc6 docs: Chroma docstrings update (#22001)
- [X] **PR title**: "docs: Chroma docstrings update"


- [X] **PR message**: 
    - **Description:** Added and updated Chroma docstrings
    - **Issue:** https://github.com/langchain-ai/langchain/issues/21983


- [X] **Add tests and docs**:
  - only docs


- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-05-22 21:45:30 +00:00
Jerron Lim
28456c2c33 community[patch]: add args_schema to WikipediaQueryRun (#22019)
Description: This change adds args_schema (pydantic BaseModel) to
WikipediaQueryRun for correct schema formatting on LLM function calls

Issue: currently using WikipediaQueryRun with OpenAI function calling
returns the following error "TypeError: WikipediaQueryRun._run() got an
unexpected keyword argument '__arg1' ". This happens because the schema
sent to the LLM is "input: '{"__arg1":"Hunter x Hunter"}'" while the
method should be called with the "query" parameter.
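A hedged sketch of how the fix surfaces when the tool is bound to an OpenAI model (model name illustrative):

```python
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_openai import ChatOpenAI

tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
print(tool.args)  # with args_schema set, the schema now exposes the "query" field

llm_with_tools = ChatOpenAI(model="gpt-3.5-turbo").bind_tools([tool])
ai_msg = llm_with_tools.invoke("Who is the main character of Hunter x Hunter?")
# Tool calls now arrive as {"query": "..."} rather than {"__arg1": "..."}:
print(ai_msg.tool_calls)
```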

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 21:31:58 +00:00
Mazen Ramadan
3c1d77dd64 community[minor]: Add Scrapfly Loader community integration (#22036)
Added [Scrapfly](https://scrapfly.io/) Web Loader integration. Scrapfly
is a web scraping API that allows extracting web page data into
accessible markdown or text datasets.

- __Description__: Added Scrapfly web loader for retrieving web page
data as markdown or text.
- Dependencies: scrapfly-sdk
- Twitter: @thealchemi1st

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 21:29:13 +00:00
Chad Juliano
9a66c43146 docs: Use Kinetica Sql context API (#21993)
Update python notebook to use new Kinetica SQL context API.
2024-05-22 14:26:20 -07:00
ccurme
b51a1eba4d langchain, community: move OpenAIAssistantV2Runnable to community (#22044) 2024-05-22 21:22:50 +00:00
Mirna Wong
b4d5f3181b docs: updates code examples in neo4j_cypher.ipynb (#21973)
Resolves #19134

- **Description:** This PR replaces `title` with `name` in the [add
examples in cypher generation
prompt](https://python.langchain.com/v0.1/docs/integrations/graphs/neo4j_cypher/#add-examples-in-the-cypher-generation-prompt)
section.
    - **Issue:** 19134
    - **Twitter handle:** @mirna_wong
2024-05-22 20:48:09 +00:00
CaroFG
6b98140b38 community[patch]: update for compatibility with Meilisearch v1.8 (#21979)
- **Description:** Updates the Meilisearch vectorstore for compatibility
with v1.8. Adds [`"showRankingScore":
true`](https://www.meilisearch.com/docs/reference/api/search#ranking-score)
to the search parameters and replaces the `_semanticScore` field with
`_rankingScore`.



- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-05-22 13:37:01 -07:00
Oleksii Pokotylo
98c0b093bb community[patch]: Extend AzureSearch with maximal_marginal_relevance, from_embeddings (#21065)
**Description:**
- Extend AzureSearch with `maximal_marginal_relevance` (for vector and
hybrid search)
- Add constructor `from_embeddings` - if the user has already embedded
the texts
- Add `add_embeddings` 
- Refactor common parts (`_simple_search`, `_results_to_documents`,
`_reorder_results_with_maximal_marginal_relevance`)
- Add `vector_search_dimensions` as a parameter to the constructor to
avoid extra calls to `embed_query` (most of the time the user applies
the same model and knows the dimension)

**Issue:** none
**Dependencies:** none

- [x] **Add tests and docs**: The docstrings have been added to the new
functions, and unified for the existing ones. The example notebook is
great in illustrating the main usage of AzureSearch, adding the new
methods would only dilute the main content.
- [x] **Lint and test**
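A hedged usage sketch, assuming the standard `max_marginal_relevance_search` method name and the usual AzureSearch constructor (endpoint, key, and index name are placeholders):

```python
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
store = AzureSearch(
    azure_search_endpoint="https://<service>.search.windows.net",
    azure_search_key="<key>",
    index_name="docs",
    embedding_function=embeddings.embed_query,
    vector_search_dimensions=1536,  # new: avoid an extra embed_query call
)

# New MMR surface for vector / hybrid search:
docs = store.max_marginal_relevance_search("renewable energy", k=4, fetch_k=20)
```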

---------

Co-authored-by: Oleksii Pokotylo <oleksii.pokotylo@pwc.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 13:36:06 -07:00
Erick Friis
ed5914ff61 docs: move feedback into paginator from content (#22041)
We only index what's in the `<article>` tags for search, so we should not
have the feedback in the article.
2024-05-22 13:21:27 -07:00
SaschaStoll
709664a079 community[patch]: Performant filter columns option for Hanavector (#21971)
**Description:** Backwards-compatible extension of the initialisation
interface of HanaDB to allow the user to specify
`specific_metadata_columns` that are used for metadata storage of selected
keys, which yields increased filter performance. Any metadata not mentioned
there remains in the general metadata column as part of a JSON string.
Furthermore, switched to `executemany` for batch inserts into HanaDB.
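A hedged usage sketch, assuming `specific_metadata_columns` is passed at construction time (connection details, table, and column names are illustrative):

```python
from hdbcli import dbapi
from langchain_community.vectorstores.hanavector import HanaDB
from langchain_openai import OpenAIEmbeddings

connection = dbapi.connect(address="<host>", port=30015, user="<user>", password="<pw>")
db = HanaDB(
    connection=connection,
    embedding=OpenAIEmbeddings(),
    table_name="STATE_OF_THE_UNION",
    specific_metadata_columns=["quarter"],  # stored in its own column for faster filtering
)
db.similarity_search("economic policy", k=3, filter={"quarter": "Q1"})
```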

**Issue:** N/A

**Dependencies:** no new dependencies added

**Twitter handle:** @sapopensource

---------

Co-authored-by: Martin Kolb <martin.kolb@sap.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-22 13:21:21 -07:00
Bagatur
16b55b0704 langchain[patch]: remove dataclasses-json dep (#22042)
vestigial dep afaict
2024-05-22 13:20:57 -07:00
Christos Boulmpasakos
c3bcfad66d text-splitters[patch]: Extend TextSplitter:keep_separator functionality (#21130)
**Description:** Added extra functionality to the `CharacterTextSplitter`
and `TextSplitter` classes.
The user can select whether to append the separator to the previous
chunk with `keep_separator='end'` or else prepend it to the next chunk.
The previous functionality prepended it to the next chunk by default.
  
**Issue:** Fixes #20908
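A hedged usage sketch of the new option (separator and chunk sizes are illustrative):

```python
from langchain_text_splitters import CharacterTextSplitter

splitter = CharacterTextSplitter(
    separator=". ",
    chunk_size=10,
    chunk_overlap=0,
    keep_separator="end",  # keep the separator at the end of the preceding chunk
)
print(splitter.split_text("one. two. three. four."))
```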

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-22 13:17:45 -07:00
Bagatur
b859765752 docs: fix partner api ref build (#22007) 2024-05-22 13:16:07 -07:00
Eric Zhang
e7e41eaabe langchain: add RankLLM Reranker (#21171)
Integrate RankLLM reranker (https://github.com/castorini/rank_llm) into
LangChain

An example notebook is given in
`docs/docs/integrations/retrievers/rankllm-reranker.ipynb`

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-05-22 20:12:55 +00:00
Erick Friis
5491993f8a x 2024-05-22 13:01:17 -07:00
Eugene Yurtsev
14a9c7c44e concepts: update callback concepts (#22040)
Update callback concepts
2024-05-22 15:58:02 -04:00
maang-h
fc93bed8c4 community: Fix CSVLoader columns is None (#20701)
- **Bug code**: In
`langchain_community/document_loaders/csv_loader.py:100`

- **Description**: Currently, when `CSVLoader` reads a column as `None`
from the CSV file, it raises an error because `CSVLoader` does not verify
whether the column is of `str` type and does not consider how to handle
the corresponding `row_data` when the column is `None`. This PR provides
a solution.

- **Issue:**  Fix #20699

- **Thinking:**

1. Refer to the handling in
`langchain_community/document_loaders/csv_loader.py:100` for when `v`
equals `None`, and apply the same approach to `k`.
(Per `csv.DictReader`, `k` will only be `None` when
`len(columns) < len(number_row_data)`.)
2. `k` equals `None` only when it is the last column, and its
corresponding `v` is a list. Therefore, I followed the data format used
in `Document` and joined the elements of the list with `','`. (I'm not
sure if you accept this form; if you have other ideas, let's discuss.)
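A minimal illustration (not from the PR) of the `csv.DictReader` behavior described above: when a row has more values than there are header columns, the extra values are collected in a list under the `None` key.

```python
import csv
import io

data = "name,team\nAlice,Red,extra1,extra2\n"
row = next(csv.DictReader(io.StringIO(data)))
print(row)
# {'name': 'Alice', 'team': 'Red', None: ['extra1', 'extra2']}
```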

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-22 12:57:46 -07:00
Nithin James Padayatti
403142eaba langchain: added revision_example prompt template (#20916)
**Description:** Added revision_example prompt template to include the
revision request and revision examples in the revision chain.
    **Issue:** Not Applicable
    **Dependencies:** Not Applicable
    **Twitter handle:**  @nithinjp09
2024-05-22 19:57:32 +00:00
Sihan Chen
1f81277b9b community[minor]: allow enabling proxy in aiohttp session in AsyncHTML (#19499)
Allow enabling a proxy in the aiohttp session used for async HTML loading
2024-05-22 18:25:06 +00:00
Eugene Yurtsev
36813d2f00 community[patch]: Fix remaining __inits__ in community (#22037)
Fixes the `__init__` files in community to use a statically defined
`__all__`.
2024-05-22 17:42:17 +00:00
Eugene Yurtsev
b7d08bf764 docs: update doc feedback to populate URL (#22033)
Update docfeedback to populate URL
2024-05-22 13:38:11 -04:00
Eugene Yurtsev
58360a1e53 community[patch]: Add unit test to verify that init is correctly defined (#22030)
Fix some __init__ files and add a unit test
2024-05-22 17:19:00 +00:00
Erick Friis
ef53ccf54b robocorp: release 0.0.8 (#22034) 2024-05-22 16:41:41 +00:00
Eugene Yurtsev
4633b4cf2b ci: update documentation template to include URL (#22032)
update documentation template to include URL
2024-05-22 12:01:28 -04:00
Matthew Hoffman
4f2e3bd7fd community[patch]: fix public interface for embeddings module (#21650)
## Description

The existing public interface for `langchain_community.embeddings` is
broken. In this file, `__all__` is statically defined, but is
subsequently overwritten with a dynamic expression, which type checkers
like pyright do not support. Pyright actually gives the following
diagnostic on the line I am requesting we remove:


[reportUnsupportedDunderAll](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportUnsupportedDunderAll):

```
Operation on "__all__" is not supported, so exported symbol list may be incorrect
```

Currently, I get the following errors when attempting to use publicly
exported classes in `langchain_community.embeddings`:

```python
import langchain_community.embeddings

langchain_community.embeddings.HuggingFaceEmbeddings(...)  #  error: "HuggingFaceEmbeddings" is not exported from module "langchain_community.embeddings" (reportPrivateImportUsage)
```

This is solved easily by removing the dynamic expression.
2024-05-22 11:42:15 -04:00
Maxime Perrin
6548052f9e docs : Integrations vector stores with langchain-community install (#22028)
- **Description:** Adding installation instructions for integrations
requiring the `langchain-community` package since 0.2
  - **Issue:** #22005

---------

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
2024-05-22 15:32:01 +00:00
Eugene Yurtsev
8d82160a8a community[patch]: Clean up logic in import checking unit test (#22026)
Clean up unit test
2024-05-22 15:30:10 +00:00
Tomaz Bratanic
d8a1f1114d community[patch]: Handle exceptions where node props aren't consistent in neo4j schema (#22027) 2024-05-22 11:21:56 -04:00
WeichenXu
b0ef5e778a community[patch]: Fix ChatDatabricks in case that streaming response doesn't have role field in delta chunk (#21897)


**Description:**
Fix ChatDatabricks in the case where the streaming response doesn't have a
role field in the delta chunk.




- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


---------

Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
2024-05-22 08:12:53 -07:00
Eugene Yurtsev
aed64daabb community[patch]: Add unit test to catch bad __all__ definitions (#21996)
This will catch all dynamic __all__ definitions.
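A rough sketch (not the actual test added here) of how such a check can flag dynamic `__all__` definitions by requiring a literal list assignment:

```python
import ast
from pathlib import Path


def has_static_all(path: Path) -> bool:
    """Return False if a module assigns __all__ with anything other than a literal list/tuple."""
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and target.id == "__all__":
                    if not isinstance(node.value, (ast.List, ast.Tuple)):
                        return False  # __all__ built dynamically
    return True
```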
2024-05-22 09:32:13 -04:00
Brian Thorne
25ba733218 docs: Update import in wikipedia tool documentation (#21565)
Updates docs so the example doesn't lead to a warning:
```
LangChainDeprecationWarning: Importing tools from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

`from langchain_community.tools import WikipediaQueryRun`.

To install langchain-community run `pip install -U langchain-community`.
```
2024-05-21 17:20:51 -07:00
Bagatur
3b0437c05b core[patch]: Release 0.2.1 (#22003) 2024-05-22 00:05:04 +00:00
Kefan You
24b5c27bb1 community[patch]: raise_for_status logic missing in async _fetch of WebBaseLoader (#21948)
## 'raise_for_status' parameter of WebBaseLoader works in sync load but
not in async load.
In WebBaseLoader:

Sync load calls `_scrape` and has `raise_for_status` properly
handled.
```python
    def _scrape(
        self,
        url: str,
        parser: Union[str, None] = None,
        bs_kwargs: Optional[dict] = None,
    ) -> Any:
        from bs4 import BeautifulSoup

        if parser is None:
            if url.endswith(".xml"):
                parser = "xml"
            else:
                parser = self.default_parser

        self._check_parser(parser)

        html_doc = self.session.get(url, **self.requests_kwargs)
        if self.raise_for_status:
            html_doc.raise_for_status()

        if self.encoding is not None:
            html_doc.encoding = self.encoding
        elif self.autoset_encoding:
            html_doc.encoding = html_doc.apparent_encoding
        return BeautifulSoup(html_doc.text, parser, **(bs_kwargs or {}))
```
Async load calls `_fetch` but is missing the `raise_for_status` logic.
```python
    async def _fetch(
        self, url: str, retries: int = 3, cooldown: int = 2, backoff: float = 1.5
    ) -> str:
        async with aiohttp.ClientSession() as session:
            for i in range(retries):
                try:
                    async with session.get(
                        url,
                        headers=self.session.headers,
                        ssl=None if self.session.verify else False,
                        cookies=self.session.cookies.get_dict(),
                    ) as response:
                        return await response.text()
```
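A minimal sketch (not the merged patch) of how the async path can honor `raise_for_status`, mirroring the sync `_scrape` behavior:

```python
import aiohttp


async def fetch(url: str, raise_for_status: bool = True) -> str:
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            if raise_for_status:
                # raises aiohttp.ClientResponseError on 4xx/5xx responses
                response.raise_for_status()
            return await response.text()
```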

Co-authored-by: kefan.you <darkfss@sina.com>
2024-05-21 23:51:03 +00:00
Mateusz Szewczyk
80f8fe1793 docs: update IBM WatsonxLLM docs with deprecated LLMChain (#21960)

- [x] **PR title**: "update IBM WatsonxLLM docs with deprecated
LLMChain"

- [x] **PR message**: 
- **Description:** update IBM WatsonxLLM docs with deprecated LLMChain

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-05-21 16:43:02 -07:00
Surya Rath
eb096675a8 OpenAI Assistants v2 api support for OpenAIAssistantRunnable (#21484)
**Title**: "langchain: OpenAI Assistants v2 api support"

***Description***
- [x] "attachments" support added along with backward compatibility of
"file_ids"
- [x]  "tool_resources" support added while creating new assistant

- [ ] "tool_choice" parameter support
- [ ]  Streaming support


- **Dependencies:** OpenAI v2 API (openai>=1.23.0)
- **Twitter handle:** @skanta_rath

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-21 15:32:29 -07:00
Erick Friis
80ea48fae1 Merge branch 'master' into erick/docs-rewrite-contributor-docs 2024-05-21 15:27:08 -07:00
Eugene Yurtsev
7a5d042bd2 langchain[patch]: Add unit test to detect changes to community imports (#21998)
Add unit tests for community imports
2024-05-21 17:45:26 -04:00
Eugene Yurtsev
90f4d8842f langchain[patch]: Turn on all deprecations for 0.2 (#21999)
- Turn on all 0.2 import deprecations.
- Update error message with URL to upgrade instructions.
2024-05-21 17:33:43 -04:00
Asaf Joseph Gardin
a042e804b4 ai21: AI21 Jamba docs (#21978)
- Updated docs to include an example using Jamba instead of J2

---------

Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-21 19:27:46 +00:00
Erick Friis
997aec6804 Merge branch 'master' into erick/docs-rewrite-contributor-docs 2024-05-21 12:16:10 -07:00
Pengcheng Liu
4cf523949a community[patch]: Update model client to support vision model in Tong… (#21474)
- **Description:** Tongyi uses different clients for the chat model and the
vision model. This PR chooses the proper client based on the model name to
support both chat and vision models. See the [Tongyi
documentation](https://help.aliyun.com/zh/dashscope/developer-reference/tongyi-qianwen-vl-plus-api?spm=a2c4g.11186623.0.0.27404c9a7upm11)
for details.

```
from langchain_core.messages import HumanMessage
from langchain_community.chat_models import ChatTongyi

llm = ChatTongyi(model_name='qwen-vl-max')
image_message = {
    "image": "https://lilianweng.github.io/posts/2023-06-23-agent/agent-overview.png"
}
text_message = {
    "text": "summarize this picture",
}
message = HumanMessage(content=[text_message, image_message])
llm.invoke([message])
```

- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None
2024-05-21 11:58:27 -07:00
Erick Friis
98b64f3ae3 infra: only tag core releases as github latest (#21991) 2024-05-21 11:39:03 -07:00
Sevin F. Varoglu
1bc0ea5496 community[patch]: update OctoAIEmbeddings to subclass OpenAIEmbeddings (#21805) 2024-05-21 11:29:41 -07:00
Eugene Yurtsev
ded53297e0 core[patch]: Add unit test for RunnableGenerator for eventstream v2 (#21990)
No unit tests with runnable generator
2024-05-21 14:29:15 -04:00
Nuno Campos
fb6108c8f5 core[patch]: In astream_events(version=v2) tap output of root run (#21977)
- if tap_output_iter/aiter is called multiple times for the same run
issue events only once
- if chat model run is tapped don't issue duplicate on_llm_new_token
events
- if first chunk arrives after run has ended do not emit it as a stream
event

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-05-21 14:03:57 -04:00
Bagatur
72d4a8eeed community[patch]: AzureSearch dont overwrite default async (#21989) 2024-05-21 11:01:28 -07:00
ccurme
a983465694 docs: set default anthropic model (#21988)
`ChatAnthropic()` raises ValidationError.
2024-05-21 11:01:18 -07:00
Muhammed Al-Dulaimi
5448e16fe6 Fix grammar error (#21985)
2024-05-21 10:59:48 -07:00
ccurme
4be5537837 Revert "anthropic: set default model" (#21987)
Reverts langchain-ai/langchain#21986
2024-05-21 17:28:32 +00:00
ccurme
35439cf3bd anthropic: set default model (#21986)
Various docs reference `ChatAnthropic()`, but this currently raises
ValidationError.
2024-05-21 17:24:31 +00:00
ccurme
0923136851 langchain: default to Runnable in MultiQueryRetriever (#21770)
- `llm_chain` becomes `Union[LLMChain, Runnable]`
- `.from_llm` creates a runnable

tested by verifying that docs/how_to/MultiQueryRetriever.ipynb runs
unchanged with sync/async invoke (and that it runs if we specifically
instantiate with LLMChain).
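
A rough sketch of the resulting usage, assuming an existing `vectorstore` (nothing here is specific to this PR beyond `from_llm` now building a Runnable internally):
```python
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_openai import ChatOpenAI

# `vectorstore` is assumed to be any existing VectorStore instance
retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),
    llm=ChatOpenAI(temperature=0),  # from_llm now wires this up as a Runnable chain
)
docs = retriever.invoke("What does the course say about regression?")
```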
2024-05-21 17:01:05 +00:00
Yulong Wang
8e1aeb8ad5 community[patch]: Fix typo in arxiv tool's doc (#21970)
Fix typo in arxiv tool's doc
2024-05-21 13:44:59 +00:00
Erick Friis
d4ca8edf85 x 2024-05-20 20:07:38 -07:00
Erick Friis
d858c49238 x 2024-05-20 19:49:50 -07:00
Erick Friis
d2723ffda9 Merge branch 'master' into erick/docs-rewrite-contributor-docs 2024-05-20 19:43:07 -07:00
Robert Caulk
54adcd9e82 community[minor]: add AskNews retriever and AskNews tool (#21581)
We add a tool and retriever for the [AskNews](https://asknews.app)
platform with example notebooks.

The retriever can be invoked with:

```py
from langchain_community.retrievers import AskNewsRetriever

retriever = AskNewsRetriever(k=3)

retriever.invoke("impact of fed policy on the tech sector")
```

This retrieves 3 news documents related to the impact of fed policy on the
tech sector. The included notebook also covers controlling filters such as
category and time, as well as using the retriever in a chain.

The tool is quite interesting, as it allows the agent to decide how to
obtain the news by forming a query and deciding how far back in time to
look for the news:

```py
from langchain_community.tools.asknews import AskNewsSearch
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_openai import ChatOpenAI

tool = AskNewsSearch()

instructions = """You are an assistant."""
base_prompt = hub.pull("langchain-ai/openai-functions-template")
prompt = base_prompt.partial(instructions=instructions)
llm = ChatOpenAI(temperature=0)
asknews_tool = AskNewsSearch()
tools = [asknews_tool]
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
)

agent_executor.invoke({"input": "How is the tech sector being affected by fed policy?"})
```

---------

Co-authored-by: Emre <e@emre.pm>
2024-05-20 18:23:06 -07:00
Jesse S
fc79b372cb community[minor]: add aerospike vectorstore integration (#21735)
Please let me know if you see any possible areas of improvement. I would
very much appreciate your constructive criticism if time allows.

**Description:**
- Added an Aerospike vector store integration that utilizes the
[Aerospike-Vector-Search](https://aerospike.com/products/vector-database-search-llm/)
add-on.
- Added both unit tests and integration tests
- Added a docker compose file for spinning up a test environment
- Added a notebook

 **Dependencies:**
- aerospike-vector-search

 **Twitter handle:** 
- No twitter, you can use my GitHub handle or LinkedIn if you'd like

Thanks!

---------

Co-authored-by: Jesse Schumacher <jschumacher@aerospike.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-21 01:01:47 +00:00
Prince Canuma
3587c60396 community[patch]: Fix MLX LLM Stream (#20575)
Closes #20561

This PR fixes MLX LLM stream `AttributeError`. 

Recently, `mlx-lm` changed the token decoding logic, which affected the
LC+MLX integration.

Additionally, I made minor fixes such as: docs example broken link and
enforcing pipeline arguments (max_tokens, temp and etc) for invoke.
   
- **Issue:** #20561
    
- **Twitter handle:** @Prince_Canuma
2024-05-20 17:17:08 -07:00
Rahul Triptahi
96bd0b0844 community[patch]: Remove redundant pebblo cloud api call (#21589)
Description: Removed a redundant Pebblo cloud API call. Changed the classified
`doc` key to `ai_apps_data`.
Documentation: N/A
Unit tests: N/A
2024-05-20 17:15:16 -07:00
Param Singh
d07885f8b7 community[patch]: standardized sparkllm init args (#21633)
Related to #20085 
@baskaryan 

Thank you for contributing to LangChain!

community:sparkllm[patch]: standardized init args

Updated `spark_api_key` so that it is aliased to `api_key`. Added an
integration test for `sparkllm` to verify that it continues to set the same
underlying attribute.

Updated `temperature` with a Pydantic `Field` and added it to the integration test.

Ran `make format`, `make test`, `make lint`, `make spell_check`.
2024-05-20 17:11:36 -07:00
Dhruv Chawla
d4359d3de6 community[patch]: Update UpTrain Callback Handler to support the new UpTrain evaluation schema (#21656)
UpTrain has a new dashboard now that makes it easier to view projects
and evaluations. Using this requires specifying both project_name and
evaluation_name when performing evaluations. I have updated the code to
support it.
2024-05-20 17:06:00 -07:00
Alex Riina
c0e3c3a350 openai[patch], community[patch]: add pricing and max context window for GPT-4o (#21673)
# Add pricing and max context window for GPT-4o
- community: add cost per 1k tokens and max context window
- partners: add max context window

**Description:** adds static information about GPT-4o based on
https://openai.com/api/pricing/ and
https://platform.openai.com/docs/models/gpt-4o so that GPT-4o reporting
is accurate.
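
A small sketch of where this static data surfaces, assuming the usual OpenAI callback helper is used for cost tracking:
```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o")
with get_openai_callback() as cb:
    llm.invoke("hi")
# total_cost should now be computed from the GPT-4o per-token prices added here
print(cb.total_tokens, cb.total_cost)
```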

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-20 23:47:43 +00:00
缨缨
bd39b2ccdf community: enable SupabaseVectorStore to support extended table fields (#21762)
Thank you for contributing to LangChain!

- [x] **PR title**: "community: enable SupabaseVectorStore to support
extended table fields"

- [x] **PR message**: 
- Added extension fields to the `_add_vectors` function so that users can
add other custom fields when inserting a record into the database, e.g.:
    

![image](https://github.com/langchain-ai/langchain/assets/10885578/e1d5ca20-936e-4cab-ba69-8fdd23b8ce8f)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-20 16:32:26 -07:00
Jerome Choo
2316635add docs: Clean up Diffbot docs (#21781)
The Diffbot DocumentLoader page doesn't actually run for a number of
reasons. This PR fixes it, along with some light touch-ups to the Graph
Transformer and Provider pages.

## Full Changelog

[Document Loader
Page](https://python.langchain.com/v0.1/docs/integrations/document_loaders/diffbot/)
* Fixed the notebook so that it actually runs (missing required modules,
env variables, etc.)
* Added "open in colab" button like the Graph Transformer page

[Graph Transformer
Page](https://python.langchain.com/v0.2/docs/integrations/graphs/diffbot/)
* Fixed broken colab link
* Moved "open in colab" button to below description so the description
in the [Graphs category
page](https://python.langchain.com/v0.2/docs/integrations/graphs/) shows
up correctly

[Provider
Page](https://python.langchain.com/v0.2/docs/integrations/providers/diffbot/)
* Clarified explanations of Diffbot products
* Added section and link to LangChain Graph Transformer page

---------

Co-authored-by: jeromechoo <hello@jeromechoo.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-20 23:09:22 +00:00
Rohan Aggarwal
d8a101074f docs: updates for OracleDB (#21745)
Thank you for contributing to LangChain!

Documentation change for OracleDB

Fixed several things in Oracle Documentation.
2024-05-20 16:01:35 -07:00
Leonid Ganeline
9799437bc2 docs: YouTube page update (#21780)
Greatly simplified to get a cleaner look.
Kept only the YouTube pages with 40K+ views.
2024-05-20 15:50:41 -07:00
Leonid Ganeline
e98a4fd19a ai21[patch]: configuration fix (#21790)
added "repository" and "Source Code" parameters (these parameters are
missed only in this partner package configuration).
2024-05-20 15:49:38 -07:00
Trayan Azarov
f54cbf8ff5 chroma[patch]: Chroma - remove reference to collection upon delete_collection (#21817)
**Description**:

- The reference to the `Collection` object is set to `None` when deleting a
collection via `delete_collection()`
- Added utility method `reset_collection()` to allow recreating the
collection
- Moved collection creation out of `__init__` into
`__ensure_collection()` to be reused by object init and
`reset_collection()`
- `_collection` is now a property to avoid breaking changes

**Issues**: 

- chroma-core/chroma#2213

**Twitter**: @t_azarov
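
A minimal sketch of the new lifecycle (embedding model and texts are placeholders):
```python
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma.from_texts(["hello world"], OpenAIEmbeddings())

vectorstore.delete_collection()  # drops the collection; the internal reference is cleared
vectorstore.reset_collection()   # recreates an empty collection so the store is usable again
```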
2024-05-20 15:42:36 -07:00
Jens
b0b302ec6b community[patch]: fixed aleph alpha default emedding request (#21826)
- **Description:** In the Aleph Alpha client, the parameter `normalize`
is *not* optional. Setting it to `None` gives an error.
- **Dependencies:** None

Co-authored-by: Jens Lücke <jens.luecke@tngtech.com>
Co-authored-by: Jens <jens.luecke@hu-berlin.de>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-20 22:39:43 +00:00
Leonid Ganeline
6a59f76f2b docs: added template to arxiv page (#21846)
Updated the `arXiv` page with the arXiv references from Templates (previously
it had references from Docs and API Refs, but not Templates).
Re #21450 
CC @eyurtsev
2024-05-20 15:30:35 -07:00
Jorge Piedrahita Ortiz
e6207ad4f3 community[patch]: Sambanova integration api update (#21848)
- **Description:**
  - SambaStudio generic endpoint compatibility added
  - Improved error descriptions and handling
  - Streaming examples added
Bagatur
c6da9533ac docs: correct langserve link (#21940) 2024-05-20 22:15:31 +00:00
Michael Reed
7a5e1bcf99 core[patch]: Fix NPE in function_calling._get_python_function_required_args (#21863)
Example error message:
```
line 206, in _get_python_function_required_args
    if is_function_type and required[0] == "self":
                            ~~~~~~~~^^^
IndexError: list index out of range
```
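
For illustration, a zero-argument function is one plausible way to hit the empty `required` list; a hedged reproduction sketch, assuming `convert_to_openai_function` routes through the fixed helper:
```python
from langchain_core.utils.function_calling import convert_to_openai_function

def ping() -> str:
    """Return a liveness response."""
    return "pong"

# with the guard in place this no longer raises IndexError for parameter-less functions
print(convert_to_openai_function(ping))
```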


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-20 22:06:27 +00:00
Liuww
332ffed393 community[patch]: Adopting the lighter-weight xinference_client (#21900)
While integrating the xinference_embedding, we observed that the
downloaded dependency package is quite substantial in size. With a focus
on resource optimization and efficiency, if the project requirements are
limited to its vector processing capabilities, we recommend migrating to
the xinference_client package. This package is more streamlined,
significantly reducing the storage space requirements of the project and
maintaining a feature focus, making it particularly suitable for
scenarios that demand lightweight integration. Such an approach not only
boosts deployment efficiency but also enhances the application's
maintainability, rendering it an optimal choice for our current context.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-20 22:05:09 +00:00
Tomaz Bratanic
a43515ca65 experimental[patch]: Pass enum only to openai in llm graph transformer (#21860)
Some model providers, like Groq, return a bad request error if you pass the
`enum` parameter in a tool definition
2024-05-20 15:02:48 -07:00
Ozan Kaşıkçı
aab9cb666f docs: Update agents.ipynb, add missing word "see" (#21872)
- **Description:** Add the missing word "see" in the docs
2024-05-20 22:00:03 +00:00
Erick Friis
7d307e6a22 docs: rewrite contributor docs 2024-05-20 14:57:49 -07:00
Jiří Spilka
6499897c87 community[patch]: update apify integration to attribute API activity to langchain (#21909)
**Description:** Add `Origin/langchain` to Apify's client's user-agent
to attribute API activity to LangChain (at Apify, we aim to monitor our
integrations to evaluate whether we should invest more in the LangChain
integration regarding functionality and content)

**Issue:** None
**Dependencies:** None
**Twitter handle:** None
2024-05-20 14:49:23 -07:00
Mohammad Mohtashim
711b8f1e52 docs: HuggingFace Endpoint Documentation Fixed (#21914)
Fixed Documentation for HuggingFaceEndpoint as per the issue #21903

---------

Co-authored-by: keenborder786 <mohammad.mohtashim78@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-20 21:23:28 +00:00
Jared Van Bortel
25d1c1c9bb nomic: implement local embeddings with the inference_mode parameter (#21934)
## Description

This PR implements local and dynamic mode in the Nomic Embed integration
using the inference_mode and device parameters. They work as documented
[here](https://docs.nomic.ai/reference/python-api/embeddings#local-inference).
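
A short usage sketch of the two new parameters (the model name and device value follow the linked Nomic docs, so treat them as assumptions):
```python
from langchain_nomic import NomicEmbeddings

embeddings = NomicEmbeddings(
    model="nomic-embed-text-v1.5",
    inference_mode="local",  # "remote" (default), "local", or "dynamic"
    device="cpu",            # only used for local/dynamic inference
)
vectors = embeddings.embed_documents(["hello world", "goodbye world"])
```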


---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-05-20 14:17:07 -07:00
ccurme
0e72ed39a0 infra: fix CI on text-splitters (#21935) 2024-05-20 14:03:42 -07:00
Ozan Kaşıkçı
f4ffef98a2 docs: how to: tool calling: Fix typo in sentence (#21877)
- **Description:** Fix grammar error.
2024-05-20 20:58:52 +00:00
Erick Friis
6b97418836 docs: rewrite old home, fix v0.1 infinite redirect (#21936) 2024-05-20 13:44:41 -07:00
Bagatur
1418d3af00 docs: link to langsmith+langgraph docs (#21930) 2024-05-20 13:05:22 -07:00
ccurme
e8bdf245eb update maintainers (#21305) 2024-05-20 19:07:53 +00:00
ccurme
4470d3b4a0 partners: bump core in packages implementing ls_params (#21868)
These packages all import `LangSmithParams` which was released in
langchain-core==0.2.0.

N.B. we will need to release `openai` and then bump `langchain-openai`
in `together` and `upstage`.
2024-05-20 11:51:43 -07:00
junefish
0614a53d9c docs: update notebook for latest Pinecone API + serverless (#21921)
Thank you for contributing to LangChain!

- [x] **PR title**: "docs: update notebook for latest Pinecone API +
serverless"


- [x] **PR message**: Published notebook is incompatible with latest
`pinecone-client` and not runnable. Updated for use with latest Pinecone
Python SDK. Also updated to be compatible with serverless indexes (only
index type available on Pinecone free tier).


- [x] **Add tests and docs**: N/A (tested in Colab)


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/



---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1207328087952499
2024-05-20 11:51:03 -07:00
ccurme
9c76739425 mistral: implement ls_params (#21867) 2024-05-20 11:49:48 -07:00
junefish
68a90e2252 docs: update notebook for new Pinecone API + serverless (#21923)
Thank you for contributing to LangChain!

- [x] **PR title**: "docs: update notebook for new Pinecone API +
serverless"


- [x] **PR message**: The published notebook is not runnable after
`pinecone-client` v2, which is deprecated. `langchain-pinecone` is not
compatible with the latest `pinecone-client` (v4), so I hardcoded it to
the last v3. Also updated for serverless indexes (only index type
available on Pinecone free plan).


- [x] **Add tests and docs**: N/A (tested in Colab)


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/



---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
  - https://app.asana.com/0/0/1207328087952500
2024-05-20 11:48:55 -07:00
Eugene Yurtsev
8ed2ba9301 docs: migrate integrations using langchain-cli (#21929)
Migrate integration docs
2024-05-20 18:14:49 +00:00
Eugene Yurtsev
c98bd8505f docs: migrate tutorials using langchain-cli migrate (#21928)
Migrate tutorials
2024-05-20 13:45:35 -04:00
Eugene Yurtsev
b2f58d37db docs: run migration script against how-to docs (#21927)
Upgrade imports in how-to docs
2024-05-20 17:32:59 +00:00
Tomaz Bratanic
d85e46321a community[patch]: Better error message for neo4j vector when text is null (#21861) 2024-05-20 10:25:58 -07:00
Stefano Lottini
f2e75f9500 cli[minor]: fix import path for two Astra DB classes in the migration json data (#21926)
This PR fixes two mistakes in the import paths from community for the
json data aiding the cli migration to 0.2.

It is intended as a quick follow-up to
https://github.com/langchain-ai/langchain/pull/21913 .

@nicoloboschi FYI
2024-05-20 12:25:10 -04:00
WilliamEspegren
30bca57aae doc list not empty (#21208)
Make sure the doc list is not empty, and set `Metadata: true` in the params so
the user can disable metadata for slightly faster crawls.
2024-05-20 08:24:06 -07:00
David Charles
8da35fba7f langchain[minor]: add libs/partners to dev.Dockerfile (#21902)
Resolves #21886 by adding "COPY libs/partners ../partners/" to
libs/dev.Dockerfile

Twitter: @kabakongo
2024-05-20 15:20:56 +00:00
Eugene Yurtsev
8530bbac2d docs: update how to install (#21920)
Fix installation instructions in how-to install
2024-05-20 15:14:20 +00:00
TJ
8cd6ed3e1e community[patch]: Update documentation string in databricks chat model (#21915)
Update typos in documentation string in databricks chat model
2024-05-20 14:33:57 +00:00
Maxime Perrin
5ae982145e docs: fix wrong langchain-cli migration commands (#21906)
Co-authored-by: Maxime Perrin <mperrin@doing.fr>
2024-05-20 10:29:50 -04:00
Nicolò Boschi
dd00aac7ad cli[minor]: add astradb in the cli migration to 0.2 (#21913)
astradb has a new partner package, but the automatic migration CLI tool
doesn't take care of migrating astradb integrations
2024-05-20 10:29:17 -04:00
Jacob Lee
242eeb537f docs[patch]: Adds callback docs (#21889)
@efriis @hwchase17
2024-05-19 21:57:33 -07:00
Jacob Lee
da4fef8131 docs[patch]: Update 0.2 banner copy (#21888)
@nfcampos
2024-05-19 17:21:02 -07:00
Coozywana
b6c8b6f944 Fix base.py typo (#21862)
ChatOpenaAI --> ChatOpenAI

2024-05-18 13:05:02 +00:00
fzowl
d3624eaba1 partners: Remove unnecessary print from voyageai embeddings (#21865)
Thank you for contributing to LangChain!

Remove unnecessary print from voyageai embeddings

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-05-18 08:57:17 -04:00
Eugene Yurtsev
61ebe7991c docs: how to remove conversion to openai function from index (#21836)
- bind_tools interface is a better alternative.
- openai doesn't use functions but tools in its API now.
- the underlying content appears in some redirects, so will need to
investigate if we can remove.
2024-05-17 23:00:07 -04:00
Eugene Yurtsev
0812723789 docs: how to tools human in the loop (#21858)
Update information in how to guide tools human in the loop.
2024-05-17 22:59:51 -04:00
Eugene Yurtsev
875230d5bc docs: how-to index page fix minor typo (#21859)
Fix typo
2024-05-17 22:45:47 -04:00
Bagatur
8b3c5f93f5 docs: lcel how to and cheatsheet (#21851) 2024-05-17 19:04:45 -07:00
Erick Friis
c3caec5aaf docs: update announcement bar (#21854) 2024-05-18 00:35:07 +00:00
Jacob Lee
0180716a95 docs[patch]: Remove padding from first sidebar link (#21852)
CC @efriis
2024-05-17 17:09:58 -07:00
Nuno Campos
b1e7b40b6a core: Tap output of sync iterators for astream_events (#21842)
2024-05-17 16:57:41 -07:00
Erick Friis
9a39f92aba docs: v0.2 version sidebar (#21844)
![image](https://github.com/langchain-ai/langchain/assets/9557659/189f2e04-0c08-4395-b729-f48982c6f53b)
2024-05-17 23:45:51 +00:00
Max Jakob
e6b7a1769b docs: update Elasticsearch strategy names (#21530)
Update documentation with the [new names for retrieval
strategies](https://github.com/langchain-ai/langchain-elastic/pull/22)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-17 23:21:46 +00:00
Erick Friis
cdc8e2d0c2 docs: resolve local links script escape (#21840)
Fixing warnings. Needs to be propagated to 0.1 branch if this works.

![Screenshot 2024-05-17 at 2 34
15 PM](https://github.com/langchain-ai/langchain/assets/9557659/e6ac95a9-5686-4747-9ab8-4cb49942dc8d)
2024-05-17 22:59:27 +00:00
Erick Friis
d02380c504 docs: remove postgres from docs build (#21847) 2024-05-17 15:36:35 -07:00
Eugene Yurtsev
67b6f6c82a core[patch]: Check if event loop is closed in memory stream (#21841)
Check if the event loop is closed in the memory stream.

Using try/except here to avoid a race condition, but this may incur a
small overhead in versions prior to 3.11.
2024-05-17 21:53:59 +00:00
Erick Friis
d8f89a5e9b docs: fix vercel core dep 2 (#21839) 2024-05-17 14:24:25 -07:00
Erick Friis
5285336cb1 docs: fix vercel core dep (#21837) 2024-05-17 14:18:57 -07:00
Erick Friis
2d3f4e1a16 experimental: release 0.0.59 (#21835) 2024-05-17 21:02:45 +00:00
Erick Friis
169f525cfb community: release 0.2.0 (#21834) 2024-05-17 13:49:29 -07:00
Eugene Yurtsev
2656bfe941 docs: how to guide tool calling using prompts (#21827)
Update tool calling using prompts.

- Add required concepts
- Update names of the tool-invoking function.
- Add a doc-string to the function, and add information about `config` (which
users often forget).
- Remove steps that show how to use a single function only. This makes the
how-to guide a bit shorter and more to the point.
- Add diagram from another how-to guide that shows how the thing works
overall.
2024-05-17 16:46:59 -04:00
Erick Friis
e5046cbd72 langchain: release 0.2.0, fix min deps (#21833) 2024-05-17 13:40:51 -07:00
Erick Friis
1b555021f7 text-splitters: release 0.2.0 (#21832) 2024-05-17 13:30:54 -07:00
Erick Friis
0ad8de5eb7 langchain: release 0.2.0 (#21831) 2024-05-17 13:18:31 -07:00
Eugene Yurtsev
33dbad02fe docs: update how-to for built in tools and toolkits (#21828)
Fix some typos
2024-05-17 16:05:28 -04:00
Erick Friis
23310626b3 core: release 0.2.0 (#21829) 2024-05-17 13:04:39 -07:00
Eugene Yurtsev
e3f30b4cde docs: clean up link to bing search (#21825)
Documentation should be inlined, rather than linking to a Medium article.
2024-05-17 19:06:56 +00:00
Eugene Yurtsev
22d9aed508 docs: how to tools, merge built in tools and toolkits (#21824)
* Rename tools to built in tools
* Merge built in tools and toolkits
* Update links from providers
2024-05-17 14:35:57 -04:00
Leonid Ganeline
c4508ca7ef docs: arXiv references page (#21450)
Since LangChain is based on many research papers, the LC documentation
has several references to arXiv papers. It would be beneficial to
create a single page with all referenced papers.
PR:
1. Developed code to search the arXiv references in the LangChain
Documentation and the LangChain code base. Those references are included
in a newly generated documentation page.
2. Page is linked to the Docs menu.

Controversial:
1. The `arxiv_references` page is automatically generated, but the generation
is currently started only manually; it is not included in the doc generation
scripts. The reason is simple: I don't want to interfere with the current
documentation refactoring. If you think we need to regenerate this page in
each build, let me know. Note: this script has a dependency on the `arxiv`
package.
2. The link for this page in the menu is not obvious.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-05-17 18:28:57 +00:00
ccurme
181dfef118 core, standard tests, partner packages: add test for model params (#21677)
1. Adds `.get_ls_params` to BaseChatModel which returns
```python
class LangSmithParams(TypedDict, total=False):
    ls_provider: str
    ls_model_name: str
    ls_model_type: Literal["chat"]
    ls_temperature: Optional[float]
    ls_max_tokens: Optional[int]
    ls_stop: Optional[List[str]]
```
by default it will only return
```python
{ls_model_type="chat", ls_stop=stop}
```

2. Add these params to inheritable metadata in
`CallbackManager.configure`

3. Implement `.get_ls_params` and populate all params for Anthropic +
all subclasses of BaseChatOpenAI

Sample trace:
https://smith.langchain.com/public/d2962673-4c83-47c7-b51e-61d07aaffb1b/r

**OpenAI**:
<img width="984" alt="Screenshot 2024-05-17 at 10 03 35 AM"
src="https://github.com/langchain-ai/langchain/assets/26529506/2ef41f74-a9df-4e0e-905d-da74fa82a910">

**Anthropic**:
<img width="978" alt="Screenshot 2024-05-17 at 10 06 07 AM"
src="https://github.com/langchain-ai/langchain/assets/26529506/39701c9f-7da5-4f1a-ab14-84e9169d63e7">

**Mistral** (and all others for which params are not yet populated):
<img width="977" alt="Screenshot 2024-05-17 at 10 08 43 AM"
src="https://github.com/langchain-ai/langchain/assets/26529506/37d7d894-fec2-4300-986f-49a5f0191b03">
2024-05-17 13:51:26 -04:00
Eugene Yurtsev
4ca2149b70 docs: Remove duplicated content from how to tools (#21821)
Content is duplicated, and is covered in how to use chat models.
2024-05-17 17:30:43 +00:00
Matthew Koski
e59afe292d langchain: Fixing import in docs per https://github.com/langchain-ai/langchain/issues/21814 (#21815)
Description: The example in the How-To guide had an import which did not
work. I changed it to use an import from langchain_core.

Issue: https://github.com/langchain-ai/langchain/issues/21814
2024-05-17 17:19:57 +00:00
Sen Lin
eb7f07ae36 community[patch]: fix typo in ValueError message in load_local function (#21818)
**Description:**
Corrected an error in the `allow_dangerous_deserialization` message
within the `load_local` functions
2024-05-17 17:19:04 +00:00
Jorge Piedrahita Ortiz
700b1c7212 community: sambaverse api update (#21816)
- **Description:** Fix the Sambaverse integration to make it compatible with
the Sambaverse API update; minor changes in docs.
2024-05-17 10:18:08 -07:00
Erick Friis
7976fb1663 docs: cookbook redirect (#21822) 2024-05-17 17:07:30 +00:00
maang-h
9f8d18c028 community[patch]: Fix unintended newline in print statement in exception for BaichuanTextEmbeddings (#21820)
- **Code:** langchain_community/embeddings/baichuan.py:82
- **Description:** When an error occurs using Baichuan embeddings, the
printed error message contains an unintended newline (there is actually no need to wrap)
```python
# example
from langchain_community.embeddings import BaichuanTextEmbeddings

# error key
BAICHUAN_API_KEY = "sk-xxxxxxxxxxxxx"
embeddings = BaichuanTextEmbeddings(baichuan_api_key=BAICHUAN_API_KEY)

text_1 = "今天天气不错"
query_result = embeddings.embed_query(text_1)
```



![unintended
newline](https://github.com/langchain-ai/langchain/assets/55082429/e1178ce8-62bb-405d-a4af-e3b28eabc158)
2024-05-17 16:38:38 +00:00
Eugene Yurtsev
aa648298ae docs: minor updates to migration docs (#21819)
Minor aesthetic updates to migration docs
2024-05-17 12:28:56 -04:00
Eugene Yurtsev
fc644c0e1c docs: Update v0.2 information (#21796)
Update information about v0.2 upgrade
2024-05-17 11:43:58 -04:00
Bakar Tavadze
3b5ac44e03 langchain-robocorp[minor]: Enable passing additional headers to the action server. (#21809)
Actions can optionally receive secrets via request headers. This PR
enables this functionality.
2024-05-17 15:08:48 +00:00
Erick Friis
09919c2cd5 docs: version dropdown (#21784) 2024-05-16 17:01:34 -07:00
Chad Juliano
685c13e157 docs: fix errors and table formatting in notebook (#21696)
There are 2 issues fixed here:

* In the notebook pandas dataframes are formatted as HTML in the cells.
On the documentation site the renderer that converts notebooks
incorrectly displays the raw HTML. I can't find any examples of where
this is working and so I am formatting the dataframes as text.

* Some incorrect table names were referenced resulting in errors.
2024-05-16 16:00:14 -07:00
Asaf Joseph Gardin
f3289b898c partners: Revert AI21 Labs docs scan feature (#21699)
Description: Reverted commit #21614

---------

Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-16 22:58:40 +00:00
github-user-en
ec8d406441 Made a grammatical correction in streaming.ipynb (#21707)
The only change is replacing the word "operators" with "operates," to
make the sentence grammatically correct.

Thank you for contributing to LangChain!

- [x] **PR title**: "docs: Made a grammatical correction in
streaming.ipynb to use the word "operates" instead of the word
"operators""


- [x] **PR message**: 
- **Description:** The use of the word "operators" was incorrect, given
the context and grammar of the sentence. This PR updates the
documentation to use the word "operates" instead of the word
"operators".
    - **Issue:** Makes the documentation more easily understandable.
    - **Dependencies:** -no dependencies-
    - **Twitter handle:** --


- [x] **Add tests and docs**: Since no new integration is being made, no
new tests/example notebooks are required.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
    - **No formatting changes made to the documentation**

2024-05-16 22:47:40 +00:00
Brace Sproul
6febb283f6 docs[minor]: Hide prev/next buttons on docs in how to / tutorials (#21789)
These buttons don't navigate to the proper prev/next page. Hide in those
pages
2024-05-16 15:35:17 -07:00
Eugene Yurtsev
8607735b80 langchain[patch],community[patch]: Move unit tests that depend on community to community (#21685) 2024-05-16 17:24:27 -04:00
Eugene Yurtsev
97a4ae50d2 How To: Custom tools (#21725)
- Remove double implementations of functions. The single input is just
taking up space.
- Added tool-specific information for `async`, showing `invoke` vs. `ainvoke`.
- Added more general information about `async` (this should live
in a different place eventually since it's not specific to tools).
- Changed ordering of custom tools (StructuredTool is simpler and should
appear before the inheritance)
- Improved the error handling section (not convinced it should be here
though)
2024-05-16 21:06:33 +00:00
Bagatur
1cf80a5956 docs: link runnable api (#21783) 2024-05-16 20:49:37 +00:00
Bagatur
aee3842a21 docs: intro nit (#21785) 2024-05-16 13:46:11 -07:00
Marco Lamina
d0fae6cd54 community: Add token cost for GPT-4o model (#21771)
Adding [token cost for the new GPT-4o
model](https://openai.com/api/pricing/):
* Input cost US$5.00 / 1M tokens
* Output cost US$15.00 / 1M tokens
2024-05-16 20:36:23 +00:00
Bagatur
4231cf0696 docs: update chat feat table (#21778) 2024-05-16 12:58:51 -07:00
Massimiliano Pronesti
0c0db7c5db feat(community): support semantic hybrid score threshold in Azure AI Search (#21527)
Support semantic hybrid search with a score threshold -- similar to what
we do for similarity search and for hybrid search (#20907).
2024-05-16 15:54:32 -04:00
Erick Friis
5e445a7e4e docs: dont rewrite ipynb links that have double slash (#21775) 2024-05-16 19:06:30 +00:00
Eugene Yurtsev
e3a03b324d docs: concepts -- add information about tool calling models, update tools section (#21760)
- Add information about native tool calling capabilities
- Add information about the standard LangChain interface for tool calling
- Update the description for tools

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-16 15:03:25 -04:00
Bagatur
6416d16d39 anthropic[patch]: Release 0.1.13, tool_choice support (#21773) 2024-05-16 17:56:29 +00:00
Stefano Lottini
040597e832 community: init signature revision for Cassandra LLM cache classes + small maintenance (#17765)
This PR improves on the `CassandraCache` and `CassandraSemanticCache`
classes, mainly in the constructor signature, and also introduces
several minor improvements around these classes.

### Init signature

A (sigh) breaking change is tentatively introduced to the constructor.
To me, the advantages outweigh the possible discomfort: the new syntax
places the DB-connection objects `session` and `keyspace` later in the
param list, so that they can be given a default value. This is what
enables the pattern of _not_ specifying them, provided one has
previously initialized the Cassandra connection through the versatile
utility method `cassio.init(...)`.

In this way, a much less unwieldy instantiation can be done, such as
`CassandraCache()` and `CassandraSemanticCache(embedding=xyz)`,
everything else falling back to defaults.

A downside is that, compared to the earlier signature, this might turn
out to be breaking for those doing positional instantiation. As a way to
mitigate this problem, this PR typechecks its first argument trying to
detect the legacy usage.
(And to make this point less tricky in the future, most arguments are
left to be keyword-only).

If this is considered too harsh, I'd like guidance on how to further
smoothen this transition. **Our plan is to make the pattern of optional
session/keyspace a standard across all Cassandra classes**, so that a
repeatable strategy would be ideal. A possibility would be to keep
positional arguments for legacy reasons but issue a deprecation warning
if any of them is actually used, to later remove them with 0.2 - please
advise on this point.
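
A minimal sketch of the pattern this enables (the Astra credentials are placeholders; a plain Cassandra `session`/`keyspace` pair can be passed to `cassio.init(...)` instead):
```python
import cassio
from langchain_community.cache import CassandraCache
from langchain_core.globals import set_llm_cache

# one-time connection setup; afterwards the cache needs no session/keyspace arguments
cassio.init(token="AstraCS:...", database_id="01234567-...")  # placeholder credentials

set_llm_cache(CassandraCache())
```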

### Other changes

- class docstrings: enriched, completely moved to class level, added
note on `cassio.init(...)` pattern, added tiny sample usage code.
- semantic cache: revised terminology to never mention "distance" (it is
in fact a similarity!). Kept the legacy constructor param with a
deprecation warning if used.
- `llm_caching` notebook: uniform flow with the Cassandra and Astra DB
separate cases; better and Cassandra-first description; all imports made
explicit and from community where appropriate.
- cache integration tests moved to community (incl. the imported tools),
env var bugfix for `CASSANDRA_CONTACT_POINTS`.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-16 17:22:24 +00:00
fzowl
8db4a14648 docs: new voyageai text_embeddings model: voyage-large-2-instruct (#21706) 2024-05-16 10:06:22 -07:00
Bagatur
901e09aa30 docs: datacamp course (#21767) 2024-05-16 16:56:32 +00:00
Kyle Cassidy
eca8c4bcc6 Standardized openai init params (#21739)
## Patch Summary
community:openai[patch]: standardize init args

## Details
I made changes to the OpenAI Chat API wrapper test in the Langchain
open-source repository

- **File**: `libs/community/tests/unit_tests/chat_models/test_openai.py`
- **Changes**:
  - Updated `max_retries` with Pydantic Field
  - Updated the corresponding unit test
- **Related Issues**: #20085
  - Updated max_retries with Pydantic Field, updated the unit test.

---------

Co-authored-by: JuHyung Son <sonju0427@gmail.com>
2024-05-16 16:30:52 +00:00
laishzh
c03fd93fc1 docs: Remove unnecessary comment marks from the Makefile help section (#21749)
**Previous screenshot:**
<img width="758" alt="image"
src="https://github.com/langchain-ai/langchain/assets/1683919/7b90626e-35ab-4486-b41d-b664e69eec0b">

**Current:**
<img width="744" alt="image"
src="https://github.com/langchain-ai/langchain/assets/1683919/cdb69512-dc6c-4b7f-a466-4be92d94c076">
2024-05-16 09:05:44 -07:00
Ethan Yang
e44b448ec3 community: update openvino doc with streaming support (#21519)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-16 15:54:45 +00:00
Eugene Yurtsev
7022260bc5 How to: Streaming (#21715)
Update the how to guide on streaming

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-16 11:48:11 -04:00
ccurme
19e6bf814b community: fix CI (#21766) 2024-05-16 15:41:03 +00:00
Michael Ozery
dda5a9c97a docs: sql_qa.ipynb tutorial update (#21756)
1. Updated deprecated method usage.
2. Added the required LangGraph installation to the tutorial.

X: MichaelOzery
2024-05-16 15:23:20 +00:00
Mish Ushakov
d77e60a7f4 community: updated Browserbase loader (#21757)
Thank you for contributing to LangChain!

- [x] **PR title**: "community: updated Browserbase loader"

- [x] **PR message**:
    Updates the Browserbase loader with more options and improved docs.

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-05-16 08:21:23 -07:00
Ikko Eltociear Ashimine
1e6517ba73 docs: update sql_large_db.ipynb (#21765)
mispelling -> misspelling
2024-05-16 15:20:55 +00:00
Eugene Yurtsev
6ed0aa3239 core[major]: only use function description (#21622)
Do not prefix function signature

---

* The reason is that this information is already present with tool-calling
models.
* This will save on tokens for those models, and makes it more obvious
what the description is!
* The `@tool` decorator can get more parameters to allow a user to re-introduce
the signature if we want.
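
A quick illustration of what changes in practice:
```python
from langchain_core.tools import tool

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

# previously the description sent to tool-calling models was prefixed with the
# function signature; after this change only the docstring text is used
print(add.description)  # -> "Add two integers."
```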
2024-05-16 11:17:53 -04:00
William FH
8498b41cda Finish agent migration doc (#21731) 2024-05-16 14:43:19 +00:00
Cheese
0ead09f84d community: Implement bind_tools for ChatTongyi (#20725)
## Description

Implement `bind_tools` in ChatTongyi. Usage example:

```py
from langchain_core.tools import tool
from langchain_community.chat_models.tongyi import ChatTongyi

@tool
def multiply(first_int: int, second_int: int) -> int:
    """Multiply two integers together."""
    return first_int * second_int

llm = ChatTongyi(model="qwen-turbo")

llm_with_tools = llm.bind_tools([multiply])

msg = llm_with_tools.invoke("What's 5 times forty two")

print(msg)
```

Streaming is also supported.

## Dependencies

No Dependency is required for this change.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-16 10:39:35 -04:00
yoogle
b216a1dddb docs: fix monorepo typo (#21761)
### Description
fix monorepo typo. `monorep` -> `monorepo`
2024-05-16 14:15:10 +00:00
Bagatur
347166874f docs: aca-ds nit (#21759) 2024-05-16 13:53:08 +00:00
Bagatur
867adbf27b docs: add aca-ds (#21746) 2024-05-16 08:52:07 +00:00
Bagatur
74f54599f4 docs: aza-ds cookbook (#21747) 2024-05-16 01:27:13 -07:00
Erick Friis
be15740084 fireworks: add secret (#21744) 2024-05-15 19:48:51 -07:00
Erick Friis
06110e20b9 pinecone: bump min core version (#21742) 2024-05-15 19:31:43 -07:00
Erick Friis
bd3e7d50f3 fireworks: bump min core version (#21741) 2024-05-15 19:29:13 -07:00
Erick Friis
1647b28a87 infra: release min version dont clobber current lib (#21740) 2024-05-15 19:27:39 -07:00
Erick Friis
f5c31078d7 airbyte[patch]: airbyte-cdk compatible pydantic versions (#21738) 2024-05-15 19:13:25 -07:00
Erick Friis
3d33b89fa4 ibm[patch]: release 0.1.7 (#21737) 2024-05-15 19:10:15 -07:00
Erick Friis
e41d801369 openai[patch]: fix embedding float precision issue (#21736)
also clean up + comment some of the embedding batching code
2024-05-16 02:06:51 +00:00
JuHyung Son
38c297a025 upstage: Support batch input in embedding request. (#21730)
**Description:** upstage embedding now supports batch input.
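
A short sketch (the model name is an assumption, not pinned by this PR):
```python
from langchain_upstage import UpstageEmbeddings

embeddings = UpstageEmbeddings(model="solar-embedding-1-large")  # model name assumed
# embed_documents can now send multiple texts in a single batched request
vectors = embeddings.embed_documents(["first document", "second document", "third document"])
```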
2024-05-15 18:13:44 -07:00
junefish
c5a981e3b4 docs: Update Pinecone example notebook with embedded widget (#21719)
---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-15 21:20:46 +00:00
Erick Friis
0aea7f4b1d docs: fix installation link (#21728) 2024-05-15 21:10:12 +00:00
Harrison Chase
15be439719 Harrison/move flashrank rerank (#21448)
third party integration, should be in community
2024-05-15 13:08:52 -07:00
Harrison Chase
c6c2649a5a move installation (#21711) 2024-05-15 12:59:45 -07:00
Erick Friis
aca98fd150 multiple: releases with relaxed core dep (#21724) 2024-05-15 19:29:35 +00:00
Bagatur
af284518bc openai[patch]: Release 0.1.7, bump tiktoken 0.7.0 (#21723) 2024-05-15 12:19:29 -07:00
Bagatur
0405933914 docs: add feedback link to 0.2 banner (#21600) 2024-05-15 10:53:48 -07:00
William FH
ca768c8353 [Core] Check is async callable (#21714)
To permit proper coercion of objects like the following:


```python
class MyAsyncCallable:
    async def __call__(self, foo):
        return await ...

class MyAsyncGenerator:
    async def __call__(self, foo):
        await ...
        yield 
```
2024-05-15 10:49:49 -07:00
ccurme
7128c2d8ad docs: add tutorial for vector stores and retrievers (#21683)
also update how-to guide for parent document retriever
2024-05-15 11:50:24 -04:00
Eugene Yurtsev
5c2cfabec6 core[minor]: Add v2 implementation of astream events (#21638)
This PR introduces a v2 implementation of astream events that removes
intermediate abstractions and fixes some issues with v1 implementation.

The v2 implementation significantly reduces the code associated with the
astream events implementation, along with its overhead.

After this PR, the astream events implementation:

- Uses an async callback handler
- No longer relies on BaseTracer
- No longer relies on json patch

As a result of this re-write, a number of issues were discovered with
the existing implementation.

## Changes in V2 vs. V1

### on_chat_model_end `output`

The outputs associated with `on_chat_model_end` changed depending on
whether it was within a chain or not.

As a root level runnable the output was: 

```python
"data": {"output": AIMessageChunk(content="hello world!", id='some id')}
```

As part of a chain the output was:

```
            "data": {
                "output": {
                    "generations": [
                        [
                            {
                                "generation_info": None,
                                "message": AIMessageChunk(
                                    content="hello world!", id=AnyStr()
                                ),
                                "text": "hello world!",
                                "type": "ChatGenerationChunk",
                            }
                        ]
                    ],
                    "llm_output": None,
                }
            },
```

After this PR, we will always use the simpler representation:

```python
"data": {"output": AIMessageChunk(content="hello world!", id='some id')}
```

**NOTE** Non chat models (i.e., regular LLMs) are still associated with
the more verbose format.

### Remove some `_stream` events

`on_retriever_stream` and `on_tool_stream` events were removed -- these
were not real events, but created as an artifact of implementing on top
of astream_log.

The same information is already available in the `x_on_end` events.

### Propagating Names

Names of runnables have been updated to be more consistent

```python
  model = GenericFakeChatModel(messages=infinite_cycle).configurable_fields(
        messages=ConfigurableField(
            id="messages",
            name="Messages",
            description="Messages return by the LLM",
        )
    )
```

Before:
```python
"name": "RunnableConfigurableFields",
```

After:
```python
"name": "GenericFakeChatModel",
```

### on_retriever_end

on_retriever_end will always return `output` which is a list of
documents (rather than a dict containing a key called "documents")

### Retry events

Removed the `on_retry` callback handler. It was incorrectly showing that
the failed function being retried had invoked `on_chain_end`.


https://github.com/langchain-ai/langchain/pull/21638/files#diff-e512e3f84daf23029ebcceb11460f1c82056314653673e450a5831147d8cb84dL1394
2024-05-15 11:48:47 -04:00
Rajendra Kadam
54e003268e langchain[minor]: Add PebbloRetrievalQA chain with Identity & Semantic Enforcement support (#20641)
- **Description:** PebbloRetrievalQA chain introduces identity
enforcement using vector-db metadata filtering
- **Dependencies:** None
- **Issue:** None
- **Documentation:** Adding documentation for PebbloRetrievalQA chain in
a separate PR(https://github.com/langchain-ai/langchain/pull/20746)
- **Unit tests:** New unit-tests added

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-05-15 13:14:52 +00:00
Bagatur
f2f970f93d docs: openai bind tools nit (#21692) 2024-05-15 01:20:53 +00:00
Erick Friis
5fa5a73dc0 docs: disable contextual search (#21691) 2024-05-14 16:59:11 -07:00
Erick Friis
3ee0747382 infra: remove prints from notebook build (#21688) 2024-05-14 16:27:56 -07:00
Erick Friis
024c11ff9c docs: v0.2 search index (#21619) 2024-05-14 15:37:42 -07:00
Bagatur
241a6e43a5 docs: update structured how to (#21679) 2024-05-14 22:19:51 +00:00
Jib
f369495fa0 mongodb: [performance] Increase DEFAULT_INSERT_BATCH_SIZE to 100,000 and introduce sizing constraints (#19608) 2024-05-14 22:11:26 +00:00
Eugene Yurtsev
e69a9bedf8 core[patch]: Update mypy config (#21684)
Update mypy config to ignore checking deps from numpy and pytest (which are optional in langsmith sdk)
2024-05-14 17:29:07 -04:00
Erick Friis
9973547aef mongodb: release 0.1.4 (#21678) 2024-05-14 11:54:23 -07:00
Jib
a97473c846 mongodb[patch]: Make ObjectId JSON-serializable on generation (#21394) 2024-05-14 11:52:29 -07:00
ccurme
12b599c47f docs: add how-to on multi-modal tool calling (#21667)
Can move this to a dedicated multi-modal section if desired.
2024-05-14 12:26:25 -04:00
Eugene Yurtsev
5c64c004cc core[patch]: Add unit tests with some streaming scenarios (#21668)
Add unit tests that show differences between sync / async versions when
streaming.

The inner on_chain_chunk event is missing if mixing sync and async
functionality. Likely due to missing tap_output_iter implementation on
the sync variant of `_transform_stream_with_config`
2024-05-14 15:30:57 +00:00
Eugene Yurtsev
2ac4d2960c core[patch]: Add unit test to catch ordering (#21669)
Add unit test to catch ordering issues
2024-05-14 15:25:33 +00:00
ccurme
3390dc2266 docs: style nits (#21666) 2024-05-14 10:18:13 -04:00
ccurme
2463c8060c docs: how-to on adding scores to retriever results (#21626) 2024-05-14 09:41:36 -04:00
Zhao Blake
972d2071c6 core[patch]: Fix typo in VectorStoreExampleSelector doc-string (#21574) 2024-05-14 13:31:37 +00:00
William FH
714cba96a8 [docs] Update langgraph migration guide (#21644)
- add links to references where appropriate
- use the create_react_agent
- Fix the timeout recommendation
2024-05-14 06:13:17 +00:00
Erick Friis
5144c94603 docs: add 0.2 search notice (#21653) 2024-05-14 04:00:18 +00:00
Erick Friis
2a984e8e3f docs: huggingface package (#21645) 2024-05-14 03:17:40 +00:00
Anush
cd1879f5e7 docs: Qdrant partner package reference (#21649)
## Description:
As the title goes.
2024-05-13 19:51:57 -07:00
Erick Friis
c77d2f2b06 multiple: core 0.2 nonbreaking dep, check_diff community->langchain dep (#21646)
0.2 is not a breaking release for core (but it is for langchain and
community)

To keep the core+langchain+community packages in sync at 0.2, we will
relax deps throughout the ecosystem to tolerate `langchain-core` 0.2
2024-05-13 19:50:36 -07:00
Anush
edd68e4ad4 qdrant: init package (#21146)
## Description

This PR introduces the new `langchain-qdrant` partner package, intending
to deprecate the community package.

## Changes

- Moved the Qdrant vector store implementation `/libs/partners/qdrant`
with integration tests.
- The conditional imports of the client library are now regular with
minor implementation improvements.
- Added a deprecation warning to
`langchain_community.vectorstores.qdrant.Qdrant`.
- Replaced references/imports from `langchain_community` with either
`langchain_core` or by moving the definitions to the `langchain_qdrant`
package itself.
- Updated the Qdrant vector store documentation to reflect the changes.

## Testing
- `QDRANT_URL` and
[`QDRANT_API_KEY`](583e36bf6b)
env values need to be set to [run integration
tests](d608c93d1f)
in the [cloud](https://cloud.qdrant.tech).
- If a Qdrant instance is running at `http://localhost:6333`, the
integration tests will use it too.
- By default, tests use an
[`in-memory`](https://github.com/qdrant/qdrant-client?tab=readme-ov-file#local-mode)
instance (not comprehensive).

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-05-13 18:20:03 -07:00
Erick Friis
fe8c9d621a docs: ignore nb echo:false blocks (#21624)
not working currently
2024-05-13 17:18:26 -07:00
Prashanth Rao
63c3a0e56c [community][graph]: Update KuzuQAChain and docs (#21218)
This PR makes some small updates for `KuzuQAChain` for graph QA.

- Updated the Cypher generation prompt (we now support `WHERE EXISTS`) and generalized it more
- Support different LLMs for Cypher generation and QA (see the sketch below)
- Update docs and examples
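A hedged sketch of the separate-LLM setup; the `cypher_llm`/`qa_llm` parameter names follow the pattern of other graph QA chains and are assumptions here, as are the import path, database path, and models:
```python
import kuzu
from langchain.chains import KuzuQAChain  # import path may vary by version
from langchain_community.graphs import KuzuGraph
from langchain_openai import ChatOpenAI

db = kuzu.Database("movies_db")  # placeholder database
graph = KuzuGraph(db)

chain = KuzuQAChain.from_llm(
    graph=graph,
    cypher_llm=ChatOpenAI(model="gpt-4o", temperature=0),     # generates Cypher (assumed param name)
    qa_llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),  # answers from query results (assumed param name)
)
chain.invoke("Who acted in the most films?")
```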
2024-05-13 17:17:14 -07:00
Bagatur
752b1e85f8 docs: gh feedback link (#21606)
Co-authored-by: bracesproul <braceasproul@gmail.com>
2024-05-14 00:11:37 +00:00
Bagatur
506df439eb docs: how to index nits (#21623) 2024-05-13 23:52:50 +00:00
Bagatur
b514a479c0 docs: standardize capitalization (#21641) 2024-05-13 16:25:51 -07:00
Bagatur
89aae3e043 docs: add Techniques to Concepts (#21636)
- Adds Techniques section
- Moves function calling, retrieval types to Techniques
- Removes Installation section (not conceptual)
- Reorders a few things (chat models before llms, package descriptions
before diagram)
- Add text splitter types to Techniques
2024-05-13 16:06:16 -07:00
Tomaz Bratanic
89ff6a3d3b Add sentiment and confidence levels to diffbotgraphtransformer (#21590)
Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-13 23:00:52 +00:00
Bagatur
526ba235f3 docs: fix prereq links (#21630) 2024-05-13 15:40:53 -07:00
Erick Friis
0541e06e21 infra: 0.2 docs 404 page (#21634) 2024-05-13 22:11:28 +00:00
Erick Friis
e861b5bcb7 infra: fix api ref link generation (#21631) 2024-05-13 14:52:26 -07:00
Erick Friis
9b51ca08bc huggingface: fix community dep checking (#21628) 2024-05-13 21:52:18 +00:00
Erick Friis
91a2ea5cd6 chroma, mongodb: fix docstrings (#21629) 2024-05-13 21:27:43 +00:00
Jofthomas
afd85b60fc huggingface: init package (#21097)
First Pr for the langchain_huggingface partner Package

- Moved some of the Hugging Face-related classes from `community` to the new partner package

Still needed :
- Documentation
- Tests
- Support for the new apply_chat_template in `ChatHuggingFace`
- Confirm choice of class to support for embeddings with the sentence-transformers team.

cc : @efriis

---------

Co-authored-by: Cyril Kondratenko <kkn1993@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-13 20:53:15 +00:00
Tomaz Bratanic
9fce03e7db community[patch]: Fix neo4j enhanced schema (#21582) 2024-05-13 15:26:06 -04:00
Christophe Bornet
66a4da8ad0 community[patch]: Improve Cassandra VectorStore docsctrings (#21620) 2024-05-13 15:24:26 -04:00
adreo00
40aff1eacc core[major]: AsyncCallbackManagerForToolRun no longer casts return object to string (#20374)
- **Description:** Stops `AsyncCallbackManagerForToolRun` from
converting the output to str
- **Issue:** #20372
- **Dependencies:** None
2024-05-13 15:09:12 -04:00
Eugene Yurtsev
25fbe356b4 community[patch]: upgrade to recent version of mypy (#21616)
This PR upgrades community to a recent version of mypy. It inserts type:
ignore on all existing failures.
2024-05-13 14:55:07 -04:00
Eugene Yurtsev
b923951062 langchain[patch]: CI add lint rule for community imports (#21618)
Add a rule to check for imports from community in global scope
2024-05-13 14:51:25 -04:00
Jorge Piedrahita Ortiz
4378fbbef0 community[patch]: Fix typos in Sambanova integration doc-strings (#21617)
- **Description:** Sambanova integration docstrings updated; they were badly formatted

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-05-13 18:35:16 +00:00
Erick Friis
0f5bf94f9f infra: remove ai21 docs scan features (#21614)
ai21 depends on ai21-tokenizer, which depends on a too-restrictive/old version of `tokenizers`
2024-05-13 18:05:53 +00:00
ccurme
fe08421207 docs: add hybrid retrieval how-to guide (#21613)
Updating v0.2 docs with
https://github.com/langchain-ai/langchain/pull/21245
2024-05-13 14:03:55 -04:00
Christophe Bornet
bcf53f93e1 [community]: Add missing docstring param to CassandraLoader (#21611) 2024-05-13 16:03:18 +00:00
Christophe Bornet
e6fa4547b1 community[minor]: Add alazy_load to AsyncHtmlLoader (#21536)
Also fixes a bug where `_scrape` was called and performed a second HTTP request synchronously.

**Twitter handle:** cbornet_
2024-05-13 12:01:03 -04:00
Leonid Ganeline
4c48732f94 docs: providers updates 1 (#20256)
- Providers pages: added missing integrations; fixed formatting
- `mistralai` converted from notebook to .mdx format
2024-05-13 11:54:51 -04:00
ccurme
15cb1133e7 docs: fix path for state_of_the_union sample file (#21609) 2024-05-13 11:46:02 -04:00
Bagatur
83a8fdcfd1 infra: fix local doc make command (#21608) 2024-05-13 08:30:30 -07:00
Eugene Yurtsev
4dc625057e README: Update downloads to show downloads of langchain-core (#21387)
Update downloads to keep track of langchain-core
2024-05-13 11:26:50 -04:00
Wang Guan
b53548dcda langchain[minor]: allow CacheBackedEmbeddings to cache queries (#20073)
Add optional caching of queries to cache backed embeddings
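A minimal sketch of what this enables, assuming the option is exposed as a `query_embedding_cache` argument on `CacheBackedEmbeddings.from_bytes_store` (the embeddings model and cache path are placeholders):
```python
from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain_openai import OpenAIEmbeddings

store = LocalFileStore("./embedding_cache/")
cached_embedder = CacheBackedEmbeddings.from_bytes_store(
    OpenAIEmbeddings(),
    document_embedding_cache=store,
    namespace="openai",
    query_embedding_cache=True,  # assumed flag: also cache embed_query() results
)
cached_embedder.embed_query("what is a vector store?")  # repeated calls are served from the cache
```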
2024-05-13 15:18:04 +00:00
Guangdong Liu
a156aace2b core[patch]:Fix Incorrect listeners parameters for Runnable.with_listeners() and .map() (#20661)
- **Issue:** fix #20509
-  @baskaryan, @eyurtsev


![image](https://github.com/langchain-ai/langchain/assets/48236177/f799a976-b983-4d8b-b373-64392e1fd6c6)
2024-05-13 11:16:17 -04:00
ccurme
b0f5a47f25 docs: update some retrievers how-to guides (#21607) 2024-05-13 11:03:33 -04:00
junkeon
480c02bf55 upstage[minor]: add merge_and_split function for document loader (#21603)
- Introduce the `merge_and_split` function in the
`UpstageLayoutAnalysisLoader`.
- The `merge_and_split` function takes a list of documents and a
splitter as inputs.
- This function merges all documents and then divides them using the splitter's `split_documents` method.
- If the provided splitter is `None` (the default), the function simply merges the documents without splitting them (see the sketch below).
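A rough sketch of the described behavior; the file path, splitter choice, and exact call signature are assumptions:
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_upstage import UpstageLayoutAnalysisLoader

loader = UpstageLayoutAnalysisLoader("sample.pdf")  # placeholder document
docs = loader.load()

splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = loader.merge_and_split(docs, splitter)       # merge all documents, then re-split
merged_only = loader.merge_and_split(docs, None)      # splitter=None: merge without splitting
```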
2024-05-13 10:55:19 -04:00
Leonid Ganeline
500569da48 community[patch]: vectorstores import update (#21169)
Issue: we have several helper functions to import third-party libraries
like lancedb.import_lancedb in
[community.vectorstores](https://api.python.langchain.com/en/latest/vectorstores/langchain_community.vectorstores.lancedb.import_lancedb.html#langchain_community.vectorstores.lancedb.import_lancedb).
And we have core.utils.utils.guard_import that works exactly for this
purpose.
The import_<package> functions work inconsistently and should rather be private functions.
Change: replaced these functions with the guard_import function.

Related to #21133
2024-05-13 10:45:31 -04:00
ccurme
3003363605 langchain, community: remove cap on sqlalchemy and bump duckdb (#21509) 2024-05-13 10:16:09 -04:00
ccurme
01a3228d8e standard tests: add test for few-shot examples (#21019) 2024-05-13 10:06:12 -04:00
David Duong
db22fcb58b docs: style fixes for api reference docs (#21602)
- Make sure the left nav bar is horizontally scrollable 
- Make sure the navigation dropdown is vertically scrollable and height
capped at 80% of viewport height

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-13 06:49:50 -07:00
Chuyuan Qu
af875cff57 prompty: adding Microsoft langchain_prompty package (#21346)
Co-authored-by: Micky Liu <wayliu@microsoft.com>
Co-authored-by: wayliums <wayliums@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-11 04:03:44 +00:00
Erick Friis
56c6b5868b infra: run codespell on v0.1 prs (#21545) 2024-05-10 12:51:42 -07:00
Matt Florence
d3ca2cc8c3 langchain: Fix broken OpenAIModerationChain and implement async (#18537)
Thank you for contributing to LangChain!

## PR title
langchain[patch]: fix `OpenAIModerationChain` and implement async

## PR message
Description: fix `OpenAIModerationChain` and implement async

Issues: 
- https://github.com/langchain-ai/langchain/issues/18533 
- https://github.com/langchain-ai/langchain/issues/13685

Dependencies: none
Twitter handle: mattflo


## Add tests and docs
 
Existing documentation is broken:
https://python.langchain.com/docs/guides/safety/moderation


- [ x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Emilia Katari <emilia@outpace.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-05-10 19:04:13 +00:00
ccurme
4170e72a42 openai: fix loads unit test (#21542)
following changes to tests in core here:
https://github.com/langchain-ai/langchain/pull/21342/files
2024-05-10 18:46:34 +00:00
ccurme
d3ff9c5d6a infra: turn off fail-fast for standard tests (#21541) 2024-05-10 18:28:57 +00:00
Erick Friis
e8efe8384d docs: announcement bar dark mode 0.2 (#21540) 2024-05-10 10:13:02 -07:00
Erick Friis
64c47224a0 docs: baseUrl for ganalytics, throw on broken links (#21455) 2024-05-10 13:49:59 +00:00
Usama Jamil
913792f5e6 docs: myscale code typo (#21522)
2024-05-10 13:33:22 +00:00
Sevin F. Varoglu
85cbc55f86 docs: update OctoAI LLM doc (#21528)
This PR updates OctoAI doc to remove warnings when running the example
code.
2024-05-10 09:31:16 -04:00
Daniel Glogowski
70a79f45d7 docs: update nvidia nbs (#21498) 2024-05-10 04:38:35 -04:00
Eugene Yurtsev
39e9b644b9 docs: Add langchain over time (#21434)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-10 00:34:35 +00:00
Erick Friis
3db85cbb5b community: deps (#21508) 2024-05-09 15:12:34 -07:00
ccurme
9c2828aaa8 docs: add local LLMs page to v0.2 docs (#21493)
Adding this page from v0.1 docs:
https://python.langchain.com/v0.1/docs/guides/development/local_llms/
2024-05-09 17:57:56 -04:00
Erick Friis
8580e350be cli: release 0.0.22 (#21507) 2024-05-09 21:45:20 +00:00
Anthony Chu
c735849e76 azure-dynamic-sessions: add Python REPL tool (#21264)
Adds a Python REPL that executes code in a code interpreter session
using Azure Container Apps dynamic sessions.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-09 21:39:04 +00:00
Erick Friis
02701c277f langchain: core min version (#21506) 2024-05-09 13:45:44 -07:00
ccurme
81ae184cc9 docs: add response metadata page to v0.2 docs (#21489)
Adding this page from v0.1 docs:
https://python.langchain.com/v0.1/docs/modules/model_io/chat/response_metadata/
2024-05-09 16:17:04 -04:00
Erick Friis
13b01104c9 langchain: drop sqlalchemy max, release 0.2.0rc2 (#21504) 2024-05-09 13:12:38 -07:00
ccurme
375f447e58 community: fix builds with min dependencies (#21495) 2024-05-09 13:01:44 -07:00
Erick Friis
2be4b1b2c9 Revert "docs: redirect base slug" (#21499)
Reverts langchain-ai/langchain#21457
2024-05-09 12:20:16 -07:00
Erick Friis
d1fc841b1a docs: redirect base slug (#21457) 2024-05-09 10:52:36 -07:00
Trayan Azarov
ba7d53689c community: Chroma Adding create_collection_if_not_exists flag to Chroma constructor (#21420)
- **Description:** Adds the ability to either `get_or_create` or simply `get_collection`. This is useful when dealing with read-only Chroma instances where users are constrained to using `get_collection`. Targeted mostly at HTTP/cloud clients.
- **Issue:** chroma-core/chroma#2163
- **Dependencies:** N/A
- **Twitter handle:** `@t_azarov`




| Collection Exists | create_collection_if_not_exists | Outcome | Test |
|---|---|---|---|
| True | False | No errors, collection state unchanged | `test_create_collection_if_not_exist_false_existing` |
| True | True | No errors, collection state unchanged | `test_create_collection_if_not_exist_true_existing` |
| False | False | Error, `get_collection()` fails | `test_create_collection_if_not_exist_false_non_existing` |
| False | True | No errors, `get_or_create_collection()` creates the collection | `test_create_collection_if_not_exist_true_non_existing` |
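A brief, hedged usage sketch of the new flag; the collection name and embeddings are placeholders:
```python
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

# Attach to an existing (possibly read-only) collection without trying to create it.
store = Chroma(
    collection_name="prod_docs",
    embedding_function=OpenAIEmbeddings(),
    create_collection_if_not_exists=False,  # uses get_collection(); raises if the collection is missing
)
```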
2024-05-09 11:45:10 -04:00
ccurme
3bb9bec314 bedrock: add unit test for retriever (#21485)
This was implemented in
https://github.com/langchain-ai/langchain/pull/21349 but dropped before
merge.
2024-05-09 11:37:03 -04:00
Renu Rozera
4035a1d234 Add source metadata to bedrock retriever response (#21349)
Thank you for contributing to LangChain!

- [X] **PR title**: "community: Add source metadata to bedrock retriever
response"

- [X] **PR message**: 
- **Description:** Bedrock retrieve API returns extra metadata in the
response which is currently not returned in the retriever response
- **Issue:** The change adds the metadata from bedrock retrieve API
response to the bedrock retriever in a backward compatible way. Renamed
metadata to sourceMetadata as metadata term is being used in the
Document already. This is in sync with what we are doing in llama-index
as well.
    - **Dependencies:** No


- [X] **Add tests and docs**:
  1. Added unit tests
  2. Notebook already exists and does not need any change
3. Response from end to end testing, just to ensure backward
compatibility: `[Document(page_content='Exoplanets.',
metadata={'location': {'s3Location': {'uri':
's3://bucket/file_name.txt'}, 'type': 'S3'}, 'score': 0.46886647,
'source_metadata': {'x-amz-bedrock-kb-source-uri':
's3://bucket/file_name.txt', 'tag': 'space', 'team': 'Nasa', 'year':
1946.0}})]`


- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
2024-05-09 11:06:22 -04:00
ccurme
9fa17bfabe docs: fix links in v0.2.0 (#21483) 2024-05-09 11:05:17 -04:00
Erick Friis
f178c67ad0 community: release 0.2.0rc1, bump deps (#21470) 2024-05-08 23:32:44 -07:00
William FH
b28be5d407 Pass through Run ID Explicitly (#21469) 2024-05-08 22:20:51 -07:00
Erick Friis
83eecd54fe experimental: 0.2 relax (#21468) 2024-05-08 21:39:42 -07:00
roiperlman
9992beaff9 community: Add arguments to whisper parser (#20378)
**Description:** Added a few additional arguments to the whisper parser,
which can be consumed by the underlying API.
The prompt is especially important to fine-tune transcriptions.

---------

Co-authored-by: Roi Perlman <roi@fivesigmalabs.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-08 17:53:13 -07:00
Erick Friis
5542eacad8 docs: sidebar autogen hidden support (#21454) 2024-05-09 00:23:52 +00:00
Yash
cb31c3611f Ndb enterprise (#21233)
Description: Adds NeuralDBClientVectorStore to langchain, which is our enterprise client.

---------

Co-authored-by: kartikTAI <129414343+kartikTAI@users.noreply.github.com>
Co-authored-by: Kartik Sarangmath <kartik@thirdai.com>
2024-05-08 16:30:58 -07:00
Erick Friis
74044e44a5 docs: useBaseUrl on svg paths (#21446) 2024-05-08 21:55:42 +00:00
Oguz Vuruskaner
5b35f077f9 [community][fix](DeepInfraEmbeddings): Implement chunking for large batches (#21189)
**Description:**
This PR introduces chunking logic to the `DeepInfraEmbeddings` class to handle large batches without exceeding the backend's maximum batch size. Embedding generation now breaks large batches down into smaller, manageable chunks, each conforming to the maximum batch size limit.

**Issue:**
Fixes #21189

**Dependencies:**
No new dependencies introduced.
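The chunking idea itself, as a standalone sketch (function names and the batch-size limit are illustrative, not the actual `DeepInfraEmbeddings` internals):
```python
from typing import Callable, List

MAX_BATCH_SIZE = 1024  # assumed backend limit

def embed_in_chunks(
    texts: List[str],
    embed_batch: Callable[[List[str]], List[List[float]]],
) -> List[List[float]]:
    """Embed texts by sending slices no larger than MAX_BATCH_SIZE to the backend."""
    embeddings: List[List[float]] = []
    for start in range(0, len(texts), MAX_BATCH_SIZE):
        embeddings.extend(embed_batch(texts[start : start + MAX_BATCH_SIZE]))
    return embeddings
```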
2024-05-08 14:45:42 -07:00
Sokolov Fedor
f4ddf64faa community: Add MarkdownifyTransformer to langchain_community.document_transformers (#21247)
- Added a new document transformer, MarkdownifyTransformer, that uses the `markdownify` package with customizable options to convert HTML to Markdown. It's similar to Html2TextTransformer, but has more flexible options; I've also noticed that MarkdownifyTransformer sometimes performs better than the html2text one, which is why I use markdownify in my project.
- Added docs and tests

- Usage:
```python
from langchain_community.document_transformers import MarkdownifyTransformer

markdownify = MarkdownifyTransformer()
docs_transform = markdownify.transform_documents(docs)
```

- Example of better performance on simple task, that I've noticed:
```
<html>
<head><title>Reports on product movement</title></head>
<body>
<p data-block-key="2wst7">The reports on product movement will be useful for forming supplier orders and controlling outcomes.</p>
</body>
```
**Html2TextTransformer**: 
```python
[Document(page_content='The reports on product movement will be useful for forming supplier orders and\ncontrolling outcomes.\n\n')]
# Here we can see 'and\ncontrolling', which has extra '\n' in it
```
**MarkdownifyTransformer**:
```python
[Document(page_content='Reports on product movement\n\nThe reports on product movement will be useful for forming supplier orders and controlling outcomes.')]
```

---------

Co-authored-by: Sokolov Fedor <f.sokolov@sokolov-macbook.bbrouter>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Sokolov Fedor <f.sokolov@sokolov-macbook.local>
Co-authored-by: Sokolov Fedor <f.sokolov@192.168.1.6>
2024-05-08 14:45:13 -07:00
Alex JW
d3ce6aad2e community: Instantiate GPT4AllEmbeddings with parameters (#21238)
### GPT4AllEmbeddings parameters
---

**Description:** 
As of right now the **Embed4All** class inside _GPT4AllEmbeddings_ is instantiated with its defaults, which leaves no room to customize the chosen model and its behavior. Thus:

- GPT4AllEmbeddings can now be instantiated with custom parameters, such as a different model to use (see the sketch below).
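A hedged sketch of the customized instantiation; the parameter names (`model_name`, `gpt4all_kwargs`) and the model file are assumptions based on Embed4All's options:
```python
from langchain_community.embeddings import GPT4AllEmbeddings

embeddings = GPT4AllEmbeddings(
    model_name="all-MiniLM-L6-v2.gguf2.f16.gguf",  # assumed: pick a specific Embed4All model
    gpt4all_kwargs={"allow_download": True},       # assumed: extra options forwarded to Embed4All
)
embeddings.embed_query("hello world")
```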

---------

Co-authored-by: AlexJauchWalser <alexander.jauch-walser@knime.com>
2024-05-08 14:44:47 -07:00
Philippe PRADOS
7be68228da community[patch]: Make sql record manager fully compatible with async (#20735)
The `_amake_session()` method does not allow modifying the
`self.session_factory` with
anything other than `async_sessionmaker`. This prohibits advanced uses
of `index()`.

In a RAG architecture, it is necessary to import document chunks.
To keep track of the links between chunks and documents, we can use the
`index()` API.
This API proposes to use an SQL-type record manager.

In a classic use case, using `SQLRecordManager` and a vector database,
it is impossible
to guarantee the consistency of the import. Indeed, if a crash occurs
during the import
(problem with the network, ...)
there is an inconsistency between the SQL database and the vector
database.

With the
[PR](https://github.com/langchain-ai/langchain-postgres/pull/32) we are
proposing for `langchain-postgres`,
it is now possible to guarantee the consistency of the import of chunks
into
a vector database.  It's possible only if the outer session is built
with the connection.

```python
def main():
    db_url = "postgresql+psycopg://postgres:password_postgres@localhost:5432/"
    engine = create_engine(db_url, echo=True)
    embeddings = FakeEmbeddings()
    pgvector:VectorStore = PGVector(
        embeddings=embeddings,
        connection=engine,
    )

    record_manager = SQLRecordManager(
        namespace="namespace",
        engine=engine,
    )
    record_manager.create_schema()

    with engine.connect() as connection:
        session_maker = scoped_session(sessionmaker(bind=connection))
        # NOTE: Update session_factories
        record_manager.session_factory = session_maker
        pgvector.session_maker = session_maker
        with connection.begin():
            loader = CSVLoader(
                    "data/faq/faq.csv",
                    source_column="source",
                    autodetect_encoding=True,
                )
            result = index(
                source_id_key="source",
                docs_source=loader.load()[:1],
                cleanup="incremental",
                vector_store=pgvector,
                record_manager=record_manager,
            )
            print(result)
```
The same thing is possible asynchronously, but a bug in
`sql_record_manager.py`
in `_amake_session()` must first be fixed.

```python
    async def _amake_session(self) -> AsyncGenerator[AsyncSession, None]:
        """Create a session and close it after use."""

        # FIXME: REMOVE if not isinstance(self.session_factory, async_sessionmaker):~~
        if not isinstance(self.engine, AsyncEngine):
            raise AssertionError("This method is not supported for sync engines.")

        async with self.session_factory() as session:
            yield session
``` 

Then, it is possible to do the same thing asynchronously:

```python
async def main():
    db_url = "postgresql+psycopg://postgres:password_postgres@localhost:5432/"
    engine = create_async_engine(db_url, echo=True)
    embeddings = FakeEmbeddings()
    pgvector:VectorStore = PGVector(
        embeddings=embeddings,
        connection=engine,
    )
    record_manager = SQLRecordManager(
        namespace="namespace",
        engine=engine,
        async_mode=True,
    )
    await record_manager.acreate_schema()

    async with engine.connect() as connection:
        session_maker = async_scoped_session(
            async_sessionmaker(bind=connection),
            scopefunc=current_task)
        record_manager.session_factory = session_maker
        pgvector.session_maker = session_maker
        async with connection.begin():
            loader = CSVLoader(
                "data/faq/faq.csv",
                source_column="source",
                autodetect_encoding=True,
            )
            result = await aindex(
                source_id_key="source",
                docs_source=loader.load()[:1],
                cleanup="incremental",
                vector_store=pgvector,
                record_manager=record_manager,
            )
            print(result)


asyncio.run(main())
```

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Sean <sean@upstage.ai>
Co-authored-by: JuHyung-Son <sonju0427@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: YISH <mokeyish@hotmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Jason_Chen <820542443@qq.com>
Co-authored-by: Joan Fontanals <joan.fontanals.martinez@jina.ai>
Co-authored-by: Pavlo Paliychuk <pavlo.paliychuk.ca@gmail.com>
Co-authored-by: fzowl <160063452+fzowl@users.noreply.github.com>
Co-authored-by: samanhappy <samanhappy@gmail.com>
Co-authored-by: Lei Zhang <zhanglei@apache.org>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: merdan <48309329+merdan-9@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Andres Algaba <andresalgaba@gmail.com>
Co-authored-by: davidefantiniIntel <115252273+davidefantiniIntel@users.noreply.github.com>
Co-authored-by: Jingpan Xiong <71321890+klaus-xiong@users.noreply.github.com>
Co-authored-by: kaka <kaka@zbyte-inc.cloud>
Co-authored-by: jingsi <jingsi@leadincloud.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Rahul Triptahi <rahul.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Shengsheng Huang <shannie.huang@gmail.com>
Co-authored-by: Michael Schock <mjschock@users.noreply.github.com>
Co-authored-by: Anish Chakraborty <anish749@users.noreply.github.com>
Co-authored-by: am-kinetica <85610855+am-kinetica@users.noreply.github.com>
Co-authored-by: Dristy Srivastava <58721149+dristysrivastava@users.noreply.github.com>
Co-authored-by: Matt <matthew.gotteiner@microsoft.com>
Co-authored-by: William FH <13333726+hinthornw@users.noreply.github.com>
2024-05-08 17:31:11 -04:00
Andreas Motl
17e42bbd18 community[patch]: pgvector: Slight refactoring to make code a bit more reusable (#16243)
- **Description:** Improve [pgvector vector store
adapter](https://github.com/langchain-ai/langchain/blob/v0.1.1/libs/community/langchain_community/vectorstores/pgvector.py)
to make it reusable by adapters deriving from that.
  - **Issue:** NA
  - **Dependencies:** NA
  - **References:** https://github.com/crate-workbench/langchain/pull/1
  - **Addressed to:** @eyurtsev, @cbornet


Hi from the CrateDB team,

first of all, thanks a stack for conceiving and maintaining LangChain.
We are currently [preparing a
patch](https://github.com/crate-workbench/langchain/pull/1) for adding
[CrateDB](https://github.com/crate/crate) to the list of community
adapters.

Because CrateDB aims to be compatible with PostgreSQL to some degree,
the vector store subsystem in LangChain derives functionality from the
corresponding implementation for pgvector.

Therefore, in order to make the implementation more reusable, we needed
to rename the private methods `__from` and `__query_collection` to the
less private counterparts `_from` and `_query_collection`, so they can
be overwritten, in order to unlock other adapters deriving from
[pgvector](https://github.com/langchain-ai/langchain/blob/v0.1.1/libs/community/langchain_community/vectorstores/pgvector.py).

With kind regards,
Andreas.
2024-05-08 17:21:30 -04:00
Mehrdad Shokri
f103927b88 bugfix(community): fix Playwright import paths. (#21395)
- **Description:** Fix import class names exported from 'playwright.async_api' and 'playwright.sync_api' to match the correct names in the playwright tool.
helper function that calls guard_import to make code more readable in
gmail tool. Upgrade playwright version to 1.43.0
- **Issue:** #21354
- **Dependencies:** upgrade playwright version(this is not required for
the bugfix itself, just trying to keep dependencies fresh. I can remove
the playwright version upgrade if you want.)
2024-05-08 14:20:25 -07:00
Shailendra Mishra
aa966b6161 Replaced bind variable in SQL with formatted string for compatibility with sql syntax. (#21439)
2024-05-08 13:51:30 -07:00
Eugene Yurtsev
f92006de3c multiple: langchain 0.2 in master (#21191)
0.2rc 

migrations

- [x] Move memory
- [x] Move remaining retrievers
- [x] graph_qa chains
- [x] some dependency from evaluation code potentially on math utils
- [x] Move openapi chain from `langchain.chains.api.openapi` to
`langchain_community.chains.openapi`
- [x] Migrate `langchain.chains.ernie_functions` to
`langchain_community.chains.ernie_functions`
- [x] migrate `langchain/chains/llm_requests.py` to
`langchain_community.chains.llm_requests`
- [x] Moving `langchain_community.cross_enoders.base:BaseCrossEncoder`
->
`langchain_community.retrievers.document_compressors.cross_encoder:BaseCrossEncoder`
(namespace not ideal, but it needs to be moved to `langchain` to avoid
circular deps)
- [x] unit tests langchain -- add pytest.mark.community to some unit
tests that will stay in langchain
- [x] unit tests community -- move unit tests that depend on community
to community
- [x] mv integration tests that depend on community to community
- [x] mypy checks

Other todo

- [x] Make deprecation warnings not noisy (need to use warn deprecated
and check that things are implemented properly)
- [x] Update deprecation messages with timeline for code removal (likely
we actually won't be removing things until 0.4 release) -- will give
people more time to transition their code.
- [ ] Add information to deprecation warning to show users how to
migrate their code base using langchain-cli
- [ ] Remove any unnecessary requirements in langchain (e.g., is
SQLALchemy required?)

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-08 16:46:52 -04:00
ccurme
6b392d6d12 robocorp: release 0.0.6 (#21441) 2024-05-08 16:16:24 -04:00
Erick Friis
21d14549a9 docs: v0.2 docs in master (#21438)
current python.langchain.com is building from branch `v0.1`. Iterate on
v0.2 docs here.

---------

Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
Co-authored-by: Leonid Ganeline <leo.gan.57@gmail.com>
Co-authored-by: Leonid Kuligin <lkuligin@yandex.ru>
Co-authored-by: Averi Kitsch <akitsch@google.com>
Co-authored-by: Nuno Campos <nuno@langchain.dev>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Martín Gotelli Ferenaz <martingotelliferenaz@gmail.com>
Co-authored-by: Fayfox <admin@fayfox.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Dawson Bauer <105886620+djbauer2@users.noreply.github.com>
Co-authored-by: Ravindu Somawansa <ravindu.somawansa@gmail.com>
Co-authored-by: Dhruv Chawla <43818888+Dominastorm@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: WeichenXu <weichen.xu@databricks.com>
Co-authored-by: Benito Geordie <89472452+benitoThree@users.noreply.github.com>
Co-authored-by: kartikTAI <129414343+kartikTAI@users.noreply.github.com>
Co-authored-by: Kartik Sarangmath <kartik@thirdai.com>
Co-authored-by: Sevin F. Varoglu <sfvaroglu@octoml.ai>
Co-authored-by: MacanPN <martin.triska@gmail.com>
Co-authored-by: Prashanth Rao <35005448+prrao87@users.noreply.github.com>
Co-authored-by: Hyeongchan Kim <kozistr@gmail.com>
Co-authored-by: sdan <git@sdan.io>
Co-authored-by: Guangdong Liu <liugddx@gmail.com>
Co-authored-by: Rahul Triptahi <rahul.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: pjb157 <84070455+pjb157@users.noreply.github.com>
Co-authored-by: Eun Hye Kim <ehkim1440@gmail.com>
Co-authored-by: kaijietti <43436010+kaijietti@users.noreply.github.com>
Co-authored-by: Pengcheng Liu <pcliu.fd@gmail.com>
Co-authored-by: Tomer Cagan <tomer@tomercagan.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
2024-05-08 12:29:59 -07:00
Tommi Holmgren
ee35b9ba56 langchain-robocorp: remove toolkit return content max length (#21436)
Robocorp (action server) toolkit had a limitation that the content
length returned by the tool was always cut to max 5000 chars. This was
from the time when context windows were much more limited.

This PR removes the limitation. Whatever the underlying tool provides
gets sent back to the agent.

As the robocorp toolkit no longer restricts the content, the implication
is that either the Action (tool) developer or the agent developer needs
to be aware of potentially oversized tool responses. Our point of view
is this should be the agent developer's responsibility, them being in
control of the use case and aware of the context window the LLM has.
2024-05-08 15:05:55 -04:00
JuHyung Son
710e57d779 upstage: deprecate UPSTAGE_DOCUMENT_AI_API_KEY (#21363)
Description: We are merging UPSTAGE_DOCUMENT_AI_API_KEY and UPSTAGE_API_KEY into one; only UPSTAGE_API_KEY will be used going forward. We also changed the base class of ChatUpstage to BaseChatOpenAI.

---------

Co-authored-by: Sean <chosh0615@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-08 18:02:26 +00:00
Erick Friis
6a295d1ec0 upstage: release 0.1.4 (#21432) 2024-05-08 17:57:40 +00:00
Mateusz Szewczyk
7926cc1929 ibm: Fix llm and embeddings "verify" attribute default value (#21429)
Thank you for contributing to LangChain!

- [x] **PR title**: "langchain-ibm: Fix llm and embeddings 'verify'
attribute default value"


- [x] **PR message**: 
    - **Description:** fix default value of "verify" attribute
    - **Dependencies:** `ibm_watsonx_ai`


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-08 17:23:14 +00:00
Kevin Zhang
0715545378 docs: fix typo in text (#21393)
**Description:** The previous text had an unclosed parenthesis; this fix adds the closing parenthesis.
2024-05-08 15:58:15 +00:00
Dobiichi-Origami
5b00885b49 community: add bind_tools and with_structured_output support to QianfanChatEndpoint (#21412)
- **Description:** add `bind_tools` and `with_structured_output` support to `QianfanChatEndpoint` (see the sketch below).
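A minimal, hedged sketch of the new capability; the model name and schema are placeholders:
```python
from langchain_community.chat_models import QianfanChatEndpoint
from langchain_core.pydantic_v1 import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="the setup of the joke")
    punchline: str = Field(description="the punchline of the joke")

llm = QianfanChatEndpoint(model="ERNIE-3.5-8K")  # placeholder model
structured_llm = llm.with_structured_output(Joke)
structured_llm.invoke("Tell me a joke about cats")
```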
2024-05-08 11:35:10 -04:00
Silas Xu
aafaf3e193 The predict_and_parse is deprecated, instead pass an output parser directly to LLMChain. (#20130)
The `predict_and_parse` method is deprecated, instead pass an output
parser directly to LLMChain.
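A small sketch of the recommended pattern, with an illustrative prompt and parser (not the exact code touched by this PR):
```python
from langchain.chains import LLMChain
from langchain_core.output_parsers import CommaSeparatedListOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

prompt = PromptTemplate.from_template("List three {topic}, comma separated.")
chain = LLMChain(
    llm=ChatOpenAI(temperature=0),
    prompt=prompt,
    output_parser=CommaSeparatedListOutputParser(),  # parser attached to the chain itself
)
chain.predict(topic="fruits")  # returns a parsed list, no predict_and_parse needed
```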

- [x] **PR title**: "langchain: update chain_extract.py"


![image](https://github.com/langchain-ai/langchain/assets/40889019/e950d79f-5a0f-4086-86e9-89f627990fe5)

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-08 09:32:17 -04:00
ccurme
3c31bd0ed0 langchain: update use of predict_and_parse in LLMChainFilter (#21389)
Following https://github.com/langchain-ai/langchain/pull/20130

Removes deprecation warnings in docs here:
https://python.langchain.com/docs/modules/data_connection/retrievers/contextual_compression/

Tested using the same docs notebook + existing integration test.
2024-05-08 09:31:33 -04:00
Tomaz Bratanic
dd70f2f473 Update graph docs (#21414)
Update the deprecated docs and added node properties to graph
construction
2024-05-08 09:05:39 -04:00
Erick Friis
bbdf0f8801 experimental[patch]: core and langchain dep (#21402) 2024-05-07 21:39:34 -07:00
Erick Friis
e4aca0d052 experimental[patch]: release 0.0.58 (#21397) 2024-05-08 03:52:44 +00:00
Erick Friis
893f06b5de infra: rewrite ipynb links to md (#21392) 2024-05-07 23:16:52 +00:00
Hassan El Mghari
225ceedcb6 docs: Add together docs in chat models & update provider docs (#21391)
- Added Together docs in chat models section
- Update Together provider docs to match the LLM & chat models sections

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-07 22:40:57 +00:00
Heidi Steen
af97d58c9e docs: update docs/integrations/retrievers/azure_ai_search.ipynb (#21160)
This is a doc update. It fixes up formatting and product name
references. The example code is updated to use a local built-in text
file.

@mmhangami Please take a look

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-05-07 22:33:46 +00:00
snova-jamesv
ca753e7c15 community: updated performance limitation wording in sambanova.ipynb (#21390)
- **Description:** updated performance limitation wording in
sambanova.ipynb
    - **Issue:** NA
    - **Dependencies:** NA
    - **Twitter handle:** NA
2024-05-07 22:21:46 +00:00
Leonid Ganeline
791d59a2c8 community: callbacks guard_imports (#21173)
Issue: we have several helper functions to import third-party libraries
like import_uptrain in
[community.callbacks](https://api.python.langchain.com/en/latest/callbacks/langchain_community.callbacks.uptrain_callback.import_uptrain.html#langchain_community.callbacks.uptrain_callback.import_uptrain).
And we have core.utils.utils.guard_import that works exactly for this
purpose.
The import_<package> functions work inconsistently and should rather be private functions.
Change: replaced these functions with the guard_import function.

Related to #21133
2024-05-07 15:04:54 -07:00
Hassan El Mghari
416549bed2 docs: Updated Together integration docs (#21388)
**Description:** Updated the together integration docs by leading with
the streaming example, explicitly specifying a model to show users how
to do that, and updating the sections to more closely match other
integrations.
2024-05-07 21:51:42 +00:00
Rahul Triptahi
7994cba18d [Community][Minor]: Fetch loader_source of GoogleDriveLoader in PebbloSafeLoader. (#21314)
Description: This PR includes a fix so that loader_source is fetched from metadata in the case of GoogleDriveLoader.
Documentation: NA
Unit Test: NA

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-05-07 14:45:58 -07:00
Leonid Ganeline
7cbf1c31aa docs: table legend updated (#21351)
Compacted the table column legends. Added links. Similar to #21259
2024-05-07 14:45:04 -07:00
Erick Friis
d5bde4fa91 infra: use nbconvert for docs build (#21135)
todo

- [x] remove quarto build semantics
- [x] remove quarto download/install
- [x] make `uv` not verbose
2024-05-07 12:30:17 -07:00
Nuno Campos
ad0f3c14c2 core: allow mermaid node labels to have any characters (#21385)
- it's only node ids that are limited

2024-05-07 12:16:49 -07:00
Eugene Yurtsev
6a1d61dbf1 community[patch]: Fix in memory vectorstore to take into account ids when adding docs (#21384)
Should respect `ids` if passed
2024-05-07 15:05:16 -04:00
Ikko Eltociear Ashimine
80170da6c5 docs: update cassandra_database.ipynb (#21145)
Enviroment -> Environment
2024-05-07 15:00:24 -04:00
Miroslav
04e2611fea Added additional headers for HuggingFaceInferenceAPIEmbeddings endpoint. (#21282)
**HuggingFaceInferenceAPIEmbeddings**: additional headers

- Where: langchain, community, embeddings, huggingface.py.
- Community: add additional headers when needed by custom HuggingFace TEI embedding endpoints (HuggingFaceInferenceAPIEmbeddings).
- **Description:** Adds `additional_headers` to be passed to the requests library if needed.
- **Dependencies:** none
- **Tests and docs:**
  1. Tested with locally available TEI endpoints with and without `additional_headers`.
  2. Example usage:
```python
# Assumes MY_CUSTOM_API_KEY and MY_CUSTOM_TEI_URL are defined elsewhere.
from langchain_community.embeddings import HuggingFaceInferenceAPIEmbeddings

embeddings = HuggingFaceInferenceAPIEmbeddings(
    api_key=MY_CUSTOM_API_KEY,
    api_url=MY_CUSTOM_TEI_URL,
    additional_headers={"Content-Type": "application/json"},
)
```

 


---------

Co-authored-by: Massimiliano Pronesti <massimiliano.pronesti@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-05-07 14:17:53 -04:00
Ikko Eltociear Ashimine
c34419e200 docs: update quick_start.ipynb (#21358)
initalize -> initialize



- [x] **PR title**: "package: description"
2024-05-07 08:44:48 -07:00
Guangdong Liu
1fe66f5d39 community(patch): fix MoonshotChat moonshot_api_key is invalid for api key (#21361)
Description: close
https://github.com/langchain-ai/langchain/issues/21237
@baskaryan, @eyurtsev
2024-05-07 08:44:30 -07:00
snova-jamesv
c2ed484653 community: add Sambaverse rate limitation info to sambanova.ipynb (#21379)
- **Description:** add Sambaverse rate limitation info to
sambanova.ipynb
    - **Issue:** NA
    - **Dependencies:** NA
2024-05-07 15:42:44 +00:00
Tomaz Bratanic
0bf7596839 Add simple node properties to llm graph transformer (#21369)
Add support for simple node properties in llm graph transformer.

Linter and dynamic pydantic classes aren't friends, hence I added two
ignores
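A hedged usage sketch; the allowed nodes, property names, and model are placeholders:
```python
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

transformer = LLMGraphTransformer(
    llm=ChatOpenAI(model="gpt-4o", temperature=0),
    allowed_nodes=["Person", "Organization"],
    node_properties=["birth_date", "role"],  # new: extract simple node properties
)
docs = [Document(page_content="Marie Curie, born in 1867, worked at the University of Paris.")]
graph_documents = transformer.convert_to_graph_documents(docs)
```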
2024-05-07 08:41:09 -07:00
ccurme
080af0ec53 langchain: sync -> async methods in OpenAI assistants (#21378) 2024-05-07 10:25:55 -04:00
Tomaz Bratanic
ad3fd44a7f experimental: Fix llm graph transformer bug (#21362) 2024-05-06 23:59:55 -07:00
Erick Friis
bb81ae5c8c together: fix chat model and embedding classes (#21353) 2024-05-06 18:26:03 -07:00
Hassan El Mghari
d6ef5fe86a together: add chat models, use openai base (#21337)
**Description:** Adding chat completions to the Together AI package,
which is our most popular API. Also staying backwards compatible with
the old API so folks can continue to use the completions API as well.
Also moved the embedding API to use the OpenAI library to standardize it
further.

**Twitter handle:** @nutlope

- [x] **Add tests and docs**: If you're adding a new integration, please
include
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-06 17:47:06 -07:00
Jacob Lee
a2d31307bb Adds confirmation logs after creating a new project (#12618)
@efriis @hwchase17

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-06 23:28:12 +00:00
Erick Friis
0fb93cd740 core: release 0.1.52 (#21350) 2024-05-06 22:20:35 +00:00
Wu Enze
32c61b3ece community[patch]: chat message history mypy fixes #17048 (#20114)
Relates [#17048]
Description : Applied fix to redis and neo4j file.

Error was : `Cannot override writeable attribute with read-only
property`

fix with the same solution of
[[langchain/libs/community/langchain_community/chat_message_histories/elasticsearch.py](d5c412b0a9/libs/community/langchain_community/chat_message_histories/elasticsearch.py (L170-L175))]

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 22:17:45 +00:00
nrpd25
95cc8e3fc3 premai[patch]:Standardized model init args (#21308)
[Standardized model init args
#20085](https://github.com/langchain-ai/langchain/issues/20085)
- Enable premai chat model to be initialized with `model_name` as an
alias for `model`, `api_key` as an alias for `premai_api_key`.
- Add initialization test `test_premai_initialization`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 18:12:29 -04:00
Nuno Campos
6f17158606 fix: core: Include in json output also fields set outside the constructor (#21342) 2024-05-06 14:37:36 -07:00
Tomaz Bratanic
ac14f171ac Add indexed properties to neo4j enhanced schema (#21335) 2024-05-06 14:28:34 -07:00
scaserini
a6cdf6572f community: add Kendra DocumentRelevanceOverrideConfigurations request parameter (#20695)
- **Description:** add **DocumentRelevanceOverrideConfigurations**
request parameter to Kendra retriever

Co-authored-by: Simone Caserini <simone.caserini@klarna.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-06 14:26:36 -07:00
Nuno Campos
0345bcf4ef Fix failing test for serialization (#21344)
2024-05-06 21:19:54 +00:00
Trayan Azarov
93226b1945 community: Updated Chroma version range to include 0.5.0 release (#21224)
- Updated Chroma version range to allow releases in 0.5.x.
- Bumped mypy version as linting was failing
2024-05-06 13:31:40 -07:00
Jorge Piedrahita Ortiz
e65652c3e8 community: add SambaNova embeddings integration (#21227)
- **Description:**  SambaNova hosted embeddings integration
2024-05-06 13:29:59 -07:00
Jorge Piedrahita Ortiz
df1c10260c community: minor changes sambanova integration (#21231)
- **Description:** fix: variable names in the root validator prevented passing credentials as named parameters when instantiating the LLM; also added SambaNova's Sambaverse and SambaStudio LLMs to __init__.py for module import
2024-05-06 13:28:35 -07:00
Jan Soubusta
d9a61c0fa9 fix: respect table_name argument when calling from_texts (#21252)
valid for from_documents() as well

fixes #21251
2024-05-06 20:28:22 +00:00
Pedro Lima
bebf46c4a2 community: added args_schema to YahooFinanceNewsTool (#21232)
Description: this change adds args_schema (pydantic BaseModel) to
YahooFinanceNewsTool for correct schema formatting on LLM function calls

Issue: currently using YahooFinanceNewsTool with OpenAI function calling
returns the following error "TypeError("YahooFinanceNewsTool._run() got
an unexpected keyword argument '__arg1'")". This happens because the
schema sent to the LLM is "input: "{'__arg1': 'MSFT'}"" while the method
should be called with the "query" parameter.
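An illustrative sketch of the fix: attaching a pydantic `args_schema` so function calling produces `{"query": ...}`. The schema class and field description below are hypothetical, not the exact code added by the PR:
```python
from langchain_community.tools.yahoo_finance_news import YahooFinanceNewsTool
from langchain_core.pydantic_v1 import BaseModel, Field

class YahooFinanceNewsInput(BaseModel):
    """Hypothetical input schema mirroring what the PR adds to the tool."""
    query: str = Field(description="Company ticker to search news for, e.g. MSFT")

tool = YahooFinanceNewsTool(args_schema=YahooFinanceNewsInput)
tool.invoke({"query": "MSFT"})  # the LLM now sends {"query": ...} instead of {"__arg1": ...}
```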

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-06 13:27:54 -07:00
Mark Cusack
060987d755 community[minor]: Add indexing via locality sensitive hashing to the Yellowbrick vector store (#20856)
- **Description:** Add LSH-based indexing to the Yellowbrick vector
store module
- **Twitter handle:** @markcusack

---------

Co-authored-by: markcusack <markcusack@markcusacksmac.lan>
Co-authored-by: markcusack <markcusack@Mark-Cusack-sMac.local>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-05-06 20:18:02 +00:00
Rashmi Pawar
a2fdabdad2 mark NemoEmbeddings as deprecated (#21239)
NemoEmbeddings is deprecated; use the NVIDIAEmbeddings interface from langchain-nvidia-ai-endpoints instead.

cc: @mattf

---------

Co-authored-by: Daniel Glogowski <167348611+dglogo@users.noreply.github.com>
Co-authored-by: andyjessen <62343929+andyjessen@users.noreply.github.com>
Co-authored-by: Chris Germann <88305668+TAAGECH9@users.noreply.github.com>
Co-authored-by: gere <gere@kapo.zh.ch>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 19:44:58 +00:00
Erick Friis
9e4b24a2d6 langchain: release 0.1.18 (#21338) 2024-05-06 19:39:46 +00:00
Erick Friis
5c000f8d79 community: release 0.0.37 (#21332) 2024-05-06 12:17:42 -07:00
Leonid Ganeline
8c13e8a79b langchain: qa_chain fix (#21279)
Issue: `load_qa_chain` is placed in the __init__.py file. As a result, it is not listed in the API Reference docs, even though it is heavily featured in the doc examples.
Change: moved the code from __init__.py into a new file. Related: #21266
2024-05-06 14:45:51 -04:00
Erick Friis
7ecf9996f1 community: Revert "community: langkit dependency" (#21333)
Reverts langchain-ai/langchain#21174

Hey team - going to revert this because it doesn't seem necessary for
testing. We should only be adding optional + extended_testing
dependencies for deps that have extended tests.

Otherwise it just increases the probability of dependency conflicts in the community lockfile.
2024-05-06 18:44:41 +00:00
Param Singh
fee91d43b7 baichuan[patch]:standardize chat init args (#21298)
Thank you for contributing to LangChain!

community:baichuan[patch]: standardize init args

updated `baichuan_api_key` so that aliased to `api_key`. Added test that
it continues to set the same underlying attribute. Test checks for
`SecretStr`

updated `temperature` with Pydantic Field, added unit test. 

Related to https://github.com/langchain-ai/langchain/issues/20085
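A short sketch of the standardized init args described above (the key value is a placeholder):
```python
from langchain_community.chat_models import ChatBaichuan

llm = ChatBaichuan(api_key="my-secret-key", temperature=0.3)        # api_key aliases baichuan_api_key
assert llm.baichuan_api_key.get_secret_value() == "my-secret-key"   # stored as a SecretStr
```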
2024-05-06 18:33:57 +00:00
Leonid Ganeline
62559b20b3 docs: chains page format (#21259)
Compacted the table column descriptions.
2024-05-06 11:33:38 -07:00
Christophe Bornet
484a009012 community[minor]: Relax constraints on Cassandra VectorStore constructors (#21209)
If Session and/or keyspace are not provided, they are resolved from
cassio's context. So they are not required.
This change is fully backward compatible.
2024-05-06 14:32:32 -04:00
Daniel Glogowski
27e73ebe57 docs: update nvidia docs v2 (#21288)
More doc updates, please @baskaryan!
2024-05-06 11:29:02 -07:00
Leonid Ganeline
6feddfae88 community: langkit dependency (#21174)
Issue: the `langkit` package is not listed in `pyproject.toml`, but it is a requirement for the `WhyLabsCallbackHandler`.
Change: added `langkit`.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-06 18:09:31 +00:00
Erick Friis
811e9cee8b core: release 0.1.51 (#21328) 2024-05-06 10:40:19 -07:00
Pengcheng Liu
144f2821af docs: add example for loading data from LarkSuite wiki. (#21311)
**Description:** Update LarkSuite loader doc to give an example for
loading data from LarkSuite wiki.
**Issue:** None
**Dependencies:** None
**Twitter handle:** None
2024-05-06 09:56:12 -07:00
Mateusz Szewczyk
682d21c3de ibm: Add support for ibm-watsonx-ai new major version (#21313)
- **Description:** Add support for ibm-watsonx-ai new major version
- **Dependencies:** `ibm_watsonx_ai`

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-06 16:48:26 +00:00
Chris Papademetrious
ee6c922c91 langchain[minor]: enhance LocalFileStore to offer update_atime parameter that updates access times on read (#20951)
**Description:**
The `LocalFileStore` class can be used to create an on-disk
`CacheBackedEmbeddings` cache. The number of files in these embeddings
caches can grow to be quite large over time (hundreds of thousands) as
embeddings are computed for new versions of content, but the embeddings
for old/deprecated content are not removed.

A *least-recently-used* (LRU) cache policy could be applied to the
`LocalFileStore` directory to delete cache entries that have not been
referenced for some time:

```bash
# delete files that have not been accessed in the last 90 days
find embeddings_cache_dir/ -atime +90 -print0 | xargs -0 rm
```

However, most filesystems in enterprise environments disable access time
modification on read to improve performance. As a result, the access
times of these cache entry files are not updated when their values are
read.

To resolve this, this pull request updates the `LocalFileStore`
constructor to offer an `update_atime` parameter that causes access
times to be updated when a cache entry is read.

For example,

```python
file_store = LocalFileStore(temp_dir, update_atime=True)
```

The default is `False`, which retains the original behavior.

**Testing:**
I updated the LocalFileStore unit tests to test the access time update.
2024-05-06 11:52:29 -04:00
Tomaz Bratanic
5b6d1a907d Add the extract types to diffbot graph transformer (#21315)
Before, you could only extract triples (diffbot calls them facts) from
diffbot, in order to avoid isolated nodes. However, sometimes isolated nodes
can still be useful, e.g. for prefiltering, so we want to allow users to
extract them if they want. The default behaviour is unchanged.
2024-05-06 09:19:52 -04:00
Jagadish Krishnamoorthy
c038991590 docs: Update pandas.ipynb (#21289)
Remove the redundant comment.
2024-05-05 20:22:17 +00:00
aditya thomas
b868c78a12 partners[anthropic]: update unit test for key passed in from the environment (#21290)
**Description:** Update unit test for ChatAnthropic
**Issue:** Test for key passed in from the environment should not have
the key initialized in the constructor
**Dependencies:** None
2024-05-05 16:19:10 -04:00
tanersekmen
d310f9c71e docs:update code structure (#21302)
update the structure of llm_chain variables

Co-authored-by: tanersemenn <0418>
2024-05-05 17:18:15 +00:00
Christophe Bornet
ba9dc04ffa docs: Add doc for hybrid search (#21245)
See
[preview](https://langchain-git-fork-cbornet-doc-hybrid-search-langchain.vercel.app/docs/use_cases/question_answering/hybrid/)

In the model of [per user
retrieval](https://python.langchain.com/docs/use_cases/question_answering/per_user/)
2024-05-04 08:22:56 -04:00
Rohan Aggarwal
8021d2a2ab community[minor]: Oraclevs integration (#21123)
- Oracle AI Vector Search is designed for Artificial Intelligence (AI)
workloads and allows you to query data based on semantics rather than
keywords. One of the biggest benefits of Oracle AI Vector Search is that
semantic search on unstructured data can be combined with relational
search on business data in one single system. This is not only powerful
but also significantly more effective, because you don't need to add a
specialized vector database, eliminating the pain of data fragmentation
between multiple systems.


This pull request adds the following functionality:
- Oracle AI Vector Search: Vector Store
- Oracle AI Vector Search: Document Loader
- Oracle AI Vector Search: Document Splitter
- Oracle AI Vector Search: Summary
- Oracle AI Vector Search: Oracle Embeddings


- We have added unit tests and have our own local unit test suite which
verifies all the code is correct. We have made sure to add guides for
each of the components and one end to end guide that shows how the
entire thing runs.


- We have made sure that make format and make lint run clean.


---------

Co-authored-by: skmishraoracle <shailendra.mishra@oracle.com>
Co-authored-by: hroyofc <harichandan.roy@oracle.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-04 03:15:35 +00:00
ccurme
c9e9470c5a langchain: fix deprecation decorators on extraction chains (#21276)
Calling any of these raises
```
ValueError: A pending deprecation cannot have a scheduled removal
```
2024-05-03 18:29:40 -04:00
Wickes Wong
ee1adaacaa langchain[patch]: Fix summary buffer memory with return message flag (#21115)
## Description
The memory return type can be set to `str` or `message` via the `return_messages`
flag, as described in
https://python.langchain.com/docs/modules/memory/#whether-memory-is-a-string-or-a-list-of-messages,
but
`langchain.chains.conversation.memory.ConversationSummaryBufferMemory`
did not implement that.
This commit adds the `buffer_as_str` and `buffer_as_messages` functions, and
`buffer` is now affected by the `return_messages` flag.

## Example Test Code and Output

```python
# Fix: ConversationSummaryBufferMemory with return_messages flag function
# Test code
from langchain.chains.conversation.memory import ConversationSummaryBufferMemory
from langchain_community.llms.ollama import Ollama

llm = Ollama()

# Create an instance of ConversationSummaryBufferMemory with return_messages set to True
memory = ConversationSummaryBufferMemory(return_messages=True, llm=llm)

# Add user and AI messages to the chat memory
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("what's up?")

# Print the buffer
print("Buffer:")
print(*map(type, memory.buffer), sep="\n")
print(memory.buffer, "\n")

# Print the buffer as a string
print("Buffer as String:")
print(type(memory.buffer_as_str))
print(memory.buffer_as_str, "\n")

# Print the buffer as messages
print("Buffer as Messages:")
print(*map(type, memory.buffer_as_messages), sep="\n")
print(memory.buffer_as_messages, "\n")

# Print the buffer after setting return_messages to False
memory.return_messages = False
print("Buffer after setting return_messages to False:")
print(type(memory.buffer))
print(memory.buffer, "\n")
```

```plaintext
Buffer:
<class 'langchain_core.messages.human.HumanMessage'>
<class 'langchain_core.messages.ai.AIMessage'>
[HumanMessage(content='hi!'), AIMessage(content="what's up?")] 

Buffer as String:
<class 'str'>
Human: hi!
AI: what's up? 

Buffer as Messages:
<class 'langchain_core.messages.human.HumanMessage'>
<class 'langchain_core.messages.ai.AIMessage'>
[HumanMessage(content='hi!'), AIMessage(content="what's up?")] 

Buffer after setting return_messages to False:
<class 'str'>
Human: hi!
AI: what's up? 
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-03 17:25:09 -04:00
Leonid Ganeline
9639457222 community[patch]: tools imports (#21156)
Issue: we have several helper functions to import third-party libraries,
like tools.gmail.utils.import_google in
[community.tools](https://api.python.langchain.com/en/latest/community_api_reference.html#id37),
and we also have core.utils.utils.guard_import, which exists exactly for this
purpose.
The import_<package> functions behave inconsistently and should rather be private
functions.
Change: replaced these functions with the guard_import function (see the sketch below).

Related to #21133
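A minimal sketch of the replacement pattern (the package name is illustrative):

```python
from langchain_core.utils.utils import guard_import

# Raises an ImportError with a `pip install beautifulsoup4` hint if the package is missing.
bs4 = guard_import("bs4", pip_name="beautifulsoup4")
```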
2024-05-03 17:22:45 -04:00
Leonid Ganeline
3ef8b24277 core[patch]: utils.guard_import fix (#21133)
Issues (nit):
1. `utils.guard_import` prints the wrong error message when there is an
import error. It prints the whole `module_name`, but it should print only the
first part as the pip package name. E.g. for `langchain_core.utils` it prints
`langchain_core.utils` instead of `langchain-core`. It should also replace '_' with
'-' in the pip package name.
2. It does not handle the `ModuleNotFoundError` that is raised by
`guard_import("wrong_module")`.

Fixed both issues; added unit tests. Controversial: I've re-raised
`ModuleNotFoundError` as `ImportError`, since in either case the
proposed action is the same - we need to install a missing package.
2024-05-03 17:21:36 -04:00
Erick Friis
36c2ca3c8b mistralai: relax tokenizers dep (#21277) 2024-05-03 14:16:22 -07:00
Nuno Campos
6e1e0c7d5c fix: core: draw_mermaid() would create subgroup for edges with same src and tgt (#21275)
2024-05-03 13:51:08 -07:00
Eugene Yurtsev
26a37dce0a langchain[patch]: Remove jsonpatch from poetry file (#21272)
jsonpatch is only used in langchain-core not in langchain
2024-05-03 15:46:05 -04:00
Eugene Yurtsev
335bd01e45 langchain[patch]: Update deprecation warning (#21268)
Update deprecation warning
2024-05-03 15:31:29 -04:00
Leonid Ganeline
23a05c3986 langchain: summarize chain fix (#21266)
Issue: `load_summarize_chain` is defined in the __init__.py file. As a
result, it isn't listed in the API Reference docs.
Change: moved the code from __init__.py into a new file.
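A minimal usage sketch (assuming the public import path keeps working after the move; the LLM and documents are placeholders):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain_openai import ChatOpenAI

chain = load_summarize_chain(ChatOpenAI(), chain_type="map_reduce")
# chain.invoke({"input_documents": docs})
```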
2024-05-03 14:44:39 -04:00
ccurme
6da3d92b42 (all): update removal in deprecation warnings from 0.2 to 0.3 (#21265)
We are pushing out the removal of these to 0.3.

`find . -type f -name "*.py" -exec sed -i ''
's/removal="0\.2/removal="0.3/g' {} +`
2024-05-03 14:29:36 -04:00
Eugene Yurtsev
d6e34f9ee5 langchain[patch]: Improve deprecation warnings (#21262)
* Remove spurious deprecation warning
* Make deprecation warnings consistent with 0.1 namespaces that were announced as deprecated
2024-05-03 13:40:16 -04:00
Eugene Yurtsev
487aff7e46 langchain[patch]: Revert 20794 until 0.2 release (#21257)
PR #20794 was already released as part of 0.1.17rc.


Issue for 0.2 release:
https://github.com/langchain-ai/langchain/issues/21080
2024-05-03 17:02:48 +00:00
Eugene Yurtsev
ba4a309d98 langchain[patch]: Revert breaking change until 0.2 release (#21256)
Reverts a minor breaking change until 0.2 release
2024-05-03 09:42:27 -07:00
Eugene Yurtsev
66a1e3f083 langchain[patch]: Fix flaky unit test (#21258)
Should sort the results of the import test since it depends on import order
2024-05-03 15:55:46 +00:00
Eugene Yurtsev
0989c48028 langchain[minor]: Re-add deleted ainetwork tool (#21254)
* Adding __init__.py to turn it into a package in community
* Adding proxy imports that assume that langchain_community is optional
2024-05-03 11:39:40 -04:00
Christophe Bornet
2fbe82f5e6 community[minor]: Relax constraints on CassandraChatMessageHistory constructor (#21241) 2024-05-03 10:20:39 -04:00
Chris Germann
3a8d1d8838 Hotfix RetrievalQA Docs: docs: Fix formatting (#21183)
# Newline Characters breaking formatting 

**Description**: 
As you can see in the image below, the formatting in the documentation
is broken. As far as I can see, the two added `\n` characters are
breaking the formatting, so I propose removing them.

![image](https://github.com/langchain-ai/langchain/assets/88305668/23b6e726-71b2-4812-91ea-3e8600683733)

**Dependencies**:
None

**Twitter Handle**
- epu9byj

---------

Co-authored-by: gere <gere@kapo.zh.ch>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-03 12:46:29 +00:00
andyjessen
64e17bd793 docs: Fix comment within "handle long text" example (#21248)
The current doc-string comment is referring to the wrong schema.
2024-05-03 12:36:53 +00:00
Daniel Glogowski
c3d169ab00 docs: Update Nvidia documentation (#21240)
Updating Nvidia docs ahead of the 5/15 competition.

Thanks!
2024-05-03 12:29:03 +00:00
Bagatur
70bde15480 docs: add tool choice to tool calling (#21229) 2024-05-03 03:10:22 -04:00
Bagatur
67a5cc34c6 openai[patch]: Release 0.1.6 (#21236) 2024-05-03 04:10:39 +00:00
Erick Friis
c1eb95b967 core: release 0.1.50 (#21230) 2024-05-02 22:44:18 +00:00
Nuno Campos
47ce8d5a57 core: tracer: remove numeric execution order (#21220)
- This hasn't been used in a long time and requires some additional
bookkeeping that I'm going to streamline in the next PR.
2024-05-02 15:38:55 -07:00
Bagatur
6ac6158a07 openai[patch]: support tool_choice="required" (#21216)
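A minimal sketch of the new option, assuming `bind_tools` forwards `tool_choice` to the OpenAI API (the tool is illustrative):

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


# "required" forces the model to call at least one tool instead of replying directly.
llm = ChatOpenAI().bind_tools([add], tool_choice="required")
print(llm.invoke("What is 2 + 3?").tool_calls)
```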
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-05-02 18:33:25 -04:00
Erick Friis
aa9faa8512 docs: model table keywords, remove tool calling from llm (#21225) 2024-05-02 21:04:29 +00:00
xindoo
c1aa237bc2 langchain: fix syntax error in code comment for create_tool_calling_agent (#21205)
**PR message**:
- **Description:** Corrected a syntax error in the code comments within
the `create_tool_calling_agent` function in the langchain package.
- **Issue:** N/A
- **Dependencies:** No additional dependencies required.
- **Twitter handle:** N/A
2024-05-02 19:17:23 +00:00
ccurme
eb0a2fd53a mistral: release 0.1.6 (#21214) 2024-05-02 13:59:19 -04:00
ccurme
2d77e5e3a1 (standard tests): add test for basic conversation sequence (#21213) 2024-05-02 13:47:10 -04:00
Maxime Perrin
1ebb5a70ad partners(mistralai): Removing unused variable in completion request (using tool_calls or content) (#21201)
This PR fixes #21196.

The error was occurring when calling chat completion API with a chat
history. Indeed, the Mistral API does not accept both `content` and
`tool_calls` in the same body.

This PR removes one of these variables, depending on which one is needed.

---------

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-05-02 13:20:14 -04:00
Christophe Bornet
683fb45c6b community[patch]: Refactor CassandraDatabase wrapper (#21075)
* Introduce individual `fetch_` methods for easier typing.
* Rework some docstrings to google style
* Move some logic to the tool
* Merge the 2 cassandra utility files
2024-05-02 13:13:08 -04:00
Bagatur
b00fd1dbde infra: Undo gh cache removal (#21210)
Co-authored-by: Nuno Campos <nuno@langchain.dev>
2024-05-02 17:12:32 +00:00
Aditya
ee2c55ca09 docs: Added documentation on Anthropic models on vertex (#21070)
Description: Added documentation on Anthropic models on Vertex.
@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-05-02 13:12:01 -04:00
Raghav Dixit
7d451d0041 community[patch]: Update lancedb.py (#21192)
Very minor update in the LanceDB integration; the 'metric' argument was missing.
2024-05-02 17:06:39 +00:00
Bagatur
d297d90ad9 core[patch]: Release 0.1.49 (#21211) 2024-05-02 17:06:27 +00:00
Nuno Campos
663747b730 core[patch]: Fixes for convert_messages (#21207)
- support two-tuples of any sequence type (e.g. json.loads never produces
tuples)
- support a type alias for the role key
- if an id is passed in dict form, use it
- if tool_calls are passed in dict form, use them (see the sketch below)
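A minimal sketch of the inputs described above, assuming `convert_to_messages` is the relevant public helper:

```python
from langchain_core.messages import convert_to_messages

messages = convert_to_messages(
    [
        ("system", "You are a helpful assistant."),        # two-tuple input
        {"role": "user", "content": "hi", "id": "msg-1"},  # dict input; the id is kept
    ]
)
print(messages)
```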

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-02 16:55:42 +00:00
Eugene Yurtsev
df49404794 langchain[patch]: Make more memory code handle community dependency as optional (#21199) 2024-05-02 11:05:26 -04:00
ccurme
bd5d2c2674 langchain: import InMemoryChatMessageHistory from core (#21198) 2024-05-02 14:53:07 +00:00
Eugene Yurtsev
3cd7fced5f langchain[patch],community[minor]: Migrate memory implementations to community (#20845)
Migrates memory implementations to community
2024-05-02 10:46:50 -04:00
Eugene Yurtsev
b5c3a04e4b langchain[patch]: chat histories to handle optional community dependence (#21194) 2024-05-02 10:36:08 -04:00
Eugene Yurtsev
c9119b0e75 langchain[patch],community[minor]: Move some unit tests from langchain to community, use core for fake models (#21190) 2024-05-02 09:57:52 -04:00
Eugene Yurtsev
c306364b06 langchain[patch]: Update more code to use langchain community as an optional dependency (#21170)
More code to use langchain community as an optional dependency
2024-05-02 09:05:48 -04:00
Erick Friis
cd4c54282a infra: cleanup docs build (#21134)
Refactors the docs build in order to:
- run the same `make build` command in both vercel and local build
- incrementally build artifacts in 2 distinct steps, instead of building
all docs in-place (in vercel) or in a _dist dir (locally)

Highlights:
- introduces `make build` in order to build the docs
- collects and generates all files for the build in
`docs/build/intermediate`
- renders those jupyter notebook + markdown files into
`docs/build/outputs`

And now the outputs to host are in `docs/build/outputs`, which will need
a vercel settings change.

Todo:
- [ ] figure out how to point the right directory (right now deleting
and moving docs dir in vercel_build.sh isn't great)
2024-05-01 17:34:05 -07:00
Bagatur
6fa8626e2f openai[patch]: fix azure open lc serialization, release 0.1.5 (#21159) 2024-05-01 18:03:29 -04:00
Eugene Yurtsev
94a838740e langchain[patch]: Migrate more code in utils to use optional langchain import (#21166)
Moving the is-interactive util to avoid circular deps
2024-05-01 17:18:42 -04:00
Eugene Yurtsev
23fdd320bc langchain[patch]: Migrate more code to use optional community in agents namespace (#21167) 2024-05-01 16:25:44 -04:00
Tomaz Bratanic
9e53fa7d2e Some more fixes to neo4j enhanced schema (#21139) 2024-05-01 13:12:43 -07:00
Erick Friis
0694538c39 ai21: fix core version (#21168) 2024-05-01 13:10:22 -07:00
Eugene Yurtsev
44602bdc20 langchain[patch],community[minor]: Move load_tools to community (#21158)
Move load tools to community
2024-05-01 16:05:41 -04:00
Eugene Yurtsev
9932f49b3e langchain[patch]: Migrate llms to use optional community imports (#21101) 2024-05-01 16:04:45 -04:00
Eugene Yurtsev
57e8e70daa langchain[patch]: Migrate chat models to optional community imports (#21090)
Migrate chat models to optional community imports
2024-05-01 16:04:12 -04:00
Eugene Yurtsev
2914abd747 langchain[patch]: Fix how the serializable test identifies serializable objects (#21165)
dir() will not work if we're using optional imports. The only way to do this is by using contents of __all__
2024-05-01 15:56:11 -04:00
Eugene Yurtsev
23c5d87311 langchain[patch]: Migrate utils to use optional langchain_community (#21163)
Migrate utils to use optional imports from langchain community
2024-05-01 15:24:02 -04:00
Eugene Yurtsev
bec3eee3fa langchain[patch]: Migrate retrievers to use optional langchain community imports (#21155) 2024-05-01 14:44:44 -04:00
Eugene Yurtsev
43110daea5 langchain[patch]: Update some agent tool kits to handle community import as optional (#21157)
A few things that were not caught by the migration script
2024-05-01 14:22:54 -04:00
Eugene Yurtsev
59f10ab3e0 langchain[patch]: Migrate embeddings to optional imports (#21099) 2024-05-01 13:47:37 -04:00
Eugene Yurtsev
2f709d94d7 langchain[patch]: Migrate vectorstores to use optional langchain community imports (#21150) 2024-05-01 13:33:37 -04:00
Eugene Yurtsev
7230e430db langchain[patch]: Migrate top level files to use optional langchain community (#21152)
Migrate a few top level files to treat langchain community as an optional dependency
2024-05-01 13:23:03 -04:00
Erick Friis
daab9789a8 ai21: release 0.1.4 (#21151) 2024-05-01 17:16:27 +00:00
Asaf Joseph Gardin
642975dd9f partners: AI21 Labs Jamba Support (#20815)
Description: Added support for AI21 new model - Jamba
Twitter handle: https://github.com/AI21Labs

---------

Co-authored-by: Asaf Gardin <asafg@ai21.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-05-01 10:12:44 -07:00
Eugene Yurtsev
7a39fe60da langchain[patch]: Migrate utilities to handle langchain community as optional (#21149) 2024-05-01 13:09:34 -04:00
Eugene Yurtsev
b879184595 langchain[patch]: embedddings distance move import of openai embeddings into local scope (#21148) 2024-05-01 12:51:51 -04:00
Bagatur
8b4b75e543 docs: standardize vertexai params (#20167)
Related to #20085

Requires https://github.com/langchain-ai/langchain-google/pull/121
2024-05-01 11:42:18 -04:00
Eugene Yurtsev
0e5bf16d00 langchain[patch]: Migrate document loaders to use optional langchain community imports (#21095) 2024-05-01 11:26:25 -04:00
Jacob Lee
bd38073d76 👥 Update LangChain people data (#21143)
👥 Update LangChain people data

Co-authored-by: github-actions <github-actions@github.com>
2024-05-01 11:01:43 -04:00
Harrison Chase
4d1c21d97d community[patch]: Fix alternative name in deprecation notice for sql_database (#21144) 2024-05-01 10:59:42 -04:00
East Agile
2a6f78a53f community[minor]: Rememberizer retriever (#20052)
**Description:**
This pull request introduces a new feature for LangChain: an
integration with the Rememberizer API through a custom retriever.
It enables LangChain applications to let users load and sync
their data from Dropbox, Google Drive, Slack, or their hard drive into a
vector database that LangChain can query. Queries involve sending text
chunks generated within LangChain and retrieving a collection of
semantically relevant user data for inclusion in LLM prompts.
User knowledge dramatically improves AI applications.
The Rememberizer integration will also allow users to access general-purpose
vectorized data such as Reddit channel discussions and US patents.

**Issue:**
N/A

**Dependencies:**
N/A

**Twitter handle:**
https://twitter.com/Rememberizer
2024-05-01 10:41:44 -04:00
Eugene Yurtsev
1ce1a10f2b langchain[patch],community[minor]: Move graph index creator (#20795)
Move graph index creator to community
2024-05-01 10:04:30 -04:00
Eugene Yurtsev
aa0bc7467c langchain[patch]: Migrate agents module into optional imports for community (#21088) 2024-05-01 09:36:03 -04:00
Eugene Yurtsev
86ff8a3fb4 langchain[patch]: Update docstore module to use optional imports from community (#21091) 2024-05-01 09:35:05 -04:00
Eugene Yurtsev
d640605694 langchain[patch]: Migrate chat loaders to optional community imports (#21089)
Migrate chat loaders to optional community imports
2024-05-01 09:34:44 -04:00
Charlie Marsh
2b10c4dd52 ci: Use ruff check in Makefile (#21138)
## Summary

`ruff /path/to/file.py` works but is deprecated, and we now recommend
`ruff check /path/to/file.py` (to match `ruff format /path/to/file.py`).
2024-05-01 09:34:15 -04:00
Eugene Yurtsev
2fcab9acd9 langchain[patch]: Upgrade storage to treat langchain community as optional (#21105) 2024-05-01 09:33:31 -04:00
William FH
ab55f6996d [Core] Tracing: update parent run_tree's child_runs (#21049) 2024-05-01 06:33:08 -07:00
Abhishek Bhagwat
86fe484e24 docs: Docs (sample notebook) for Vertex DIY RAG Ranking API (#21054)
Vertex DIY RAG APIs help build complex RAG systems, provide more
granular control, and are suited for custom use cases.

The Ranking API takes in a list of documents and reranks those documents
based on how relevant they are to a given query. Compared to
embeddings, which look purely at the semantic similarity of a document and
a query, the Ranking API can give you a more precise score for how well
a document answers a given query.


[Reference](https://cloud.google.com/generative-ai-app-builder/docs/ranking)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-01 05:39:39 +00:00
Stuart Leeks
8a01760a0f infra: Sync devcontainer.json and compose file mount location (#20461)
**Sync the config in `devcontainer.json` and `docker-compose.yml`**

Issue: when opening the current `master` branch in a dev container in VS
Code, I get the following message as VS Code cannot find the mounted
source folder:


![image](https://github.com/langchain-ai/langchain/assets/1824461/41cf20c0-d1e0-4648-9578-edf80b99c2db)

Opening in a GitHub Codespace works (it seems to ignore the mounts in
the `docker-compose.yml`).

This PR updates the mount in `docker-compose.yml` and the config in
`devcontainer.json` so that the two align.

I have tested these changes in GitHub Codespaces and a VS Code dev
container and both loaded successfully.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-01 01:32:12 -04:00
aditya thomas
12b1caf295 openai[patch]: add tests for secret_str for keys (#20982)
**Description:** Add tests to check API keys and Active Directory tokens
are masked
**Issue:** Resolves #12165 for OpenAI and Azure OpenAI models
**Dependencies:** None

Also resolves #12473 which may be closed.

Additional contributors @alex4321 (#12473) and @onesolpark (#12542)
2024-05-01 01:26:20 -04:00
Noah
45ddf4d26f community[patch]: Update comments for lazy_load method (#21063)
- **Description:** Refactored the lazy_load method to use asynchronous
execution for improved performance. The method now initiates scraping of
all URLs simultaneously using asyncio.gather, enhancing data fetching
efficiency. Each Document object is yielded immediately once its content
becomes available, streamlining the entire process.
    - **Issue:** N/A
- **Dependencies:** Requires the asyncio library for handling
asynchronous tasks, which should already be part of standard Python
libraries in Python 3.7 and above.
    - **Email:** [r73327118@gmail.com](mailto:r73327118@gmail.com)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-01 01:20:57 -04:00
Liu Xiaodong
3b473d10f2 experimental: clean python repl input(experimental:Added code for PythonREPL) (#20930)
Update python.py (experimental: added code for PythonREPL)

Added code for PythonREPL, defining a static method 'sanitize_input'
that takes the string 'query' as input and returns a sanitized string.
The purpose of this method is to remove unwanted characters from the
input string. Specifically:

1. Strip the whitespace at the beginning and end of the string.
2. Remove the backticks (`` ` ``) at the beginning and end of the
string.
3. Remove the keyword "python" at the beginning of the string (case
insensitive), because the user may have typed it.

This method uses regular expressions (regex) to implement sanitizing.

It all started with this code:
from langchain.agents import Tool
from langchain_experimental.utilities import PythonREPL

python_repl = PythonREPL()
repl_tool = Tool(
    name="python_repl",
    description=(
        "Remove redundant formatting marks at the beginning and end of "
        "source code from input. Use a Python shell to execute python commands. "
        "If you want to see the output of a value, you should print it out "
        "with `print(...)`."
    ),
    func=python_repl.run,
)

When I asked the agent to write a piece of code and execute it with the
tool defined above, I would always get an error: SyntaxError('invalid
syntax', ('<string>', 1, 1, 'In', 1, 2))

After checking, I found that PythonREPL does less formatting of the input
code than the soon-to-be-deprecated PythonREPL tool, so I added this
step to it. Now, no matter what code I ask the agent to write for me,
it can be executed smoothly and produce the expected output.
I had tried modifying the prompt to solve this problem before,
but it did not work; adding a simple format check resolves the problem
well.
<img width="1271" alt="image"
src="https://github.com/langchain-ai/langchain/assets/164149097/c49a685f-d246-4b11-b655-fd952fc2f04c">

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-05-01 05:19:09 +00:00
Ismail Hossain Polas
1fdf63fa6c community[patch]: update package name to bagelML (#19948)
**Description**
This pull request updates the Bagel Network package name from
"betabageldb" to "bagelML" to align with the latest changes made by the
Bagel Network team.

The following modifications have been made:

- Updated all references to the old package name ("betabageldb") with
the new package name ("bagelML") throughout the codebase.
- Modified the documentation, and any relevant scripts to reflect the
package name change.
- Tested the changes to ensure that the functionality remains intact and
no breaking changes were introduced.

By merging this pull request, our project will stay up to date with the
latest Bagel Network package naming convention, ensuring compatibility
and smooth integration with their updated library.

Please review the changes and provide any feedback or suggestions. Thank
you!
2024-05-01 01:17:33 -04:00
Tomaz Bratanic
7860e4c649 experimental[patch]: Add support for non-function calling LLMs in llm graph transformers (#21014) 2024-05-01 01:16:07 -04:00
Erick Friis
67e6744e0f docs: fix some notebook formatting (#21136) 2024-04-30 21:39:03 -07:00
tianzedavid
5a8909440b docs: remove repetitive words (#21058)
remove repetitive words

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-05-01 01:10:42 +00:00
Leonid Kuligin
a36935b520 docs: updated docs on langchain_google_community (#21064)
- **Description:** updated docs on langchain_google_community
2024-04-30 20:20:49 -04:00
Tomaz Bratanic
c9e96bb5e2 community[patch]: Fix neo4j enhanced schema bugs (#21072) 2024-04-30 20:16:26 -04:00
junkeon
8d2909ee25 upstage[minor]: Update few codes and add upstage loader in pdf section (#21085)
**Description:** Update UpstageLayoutAnalysisParser and Loader and add
upstage loader example in pdf section
**Dependencies:** langchain_community
**Twitter handle:** [@upstageai](https://twitter.com/upstageai)

2024-04-30 20:15:49 -04:00
Bagatur
bef50ded63 openai[patch]: fix special token default behavior (#21131)
By default handle special sequences as regular text
2024-04-30 20:08:24 -04:00
MacanPN
0f7f448603 community[patch]: add delete() method to AzureSearch vector store (#21127)
**Issue:**
Currently the `AzureSearch` vector store does not implement the `delete` method.
This PR implements it, which also makes it compatible with the LangChain
indexer.
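A minimal usage sketch, assuming `delete` follows the standard VectorStore signature (endpoint, key and ids are placeholders):

```python
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_openai import OpenAIEmbeddings

vector_store = AzureSearch(
    azure_search_endpoint="https://<service>.search.windows.net",  # placeholder
    azure_search_key="<api-key>",                                  # placeholder
    index_name="demo-index",
    embedding_function=OpenAIEmbeddings().embed_query,
)

# Delete documents by id, e.g. so the indexer can clean up stale entries.
vector_store.delete(ids=["doc-1", "doc-2"])
```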

**Dependencies:**
None

**Twitter handle:**
@martintriska1

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 23:46:18 +00:00
Jorge Piedrahita Ortiz
3441a11b21 docs: minor changes in sambanova community integration docs (#21129)
- **Description:** minor changes in sambanova community integration
notebook docs

---------

Co-authored-by: Renate Kempf <165940384+renate-snova@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 23:44:26 +00:00
Bagatur
6d3e9eaf84 docs: format (#21132) 2024-04-30 23:32:41 +00:00
Erick Friis
14422a4220 langchain: fix core dep (#21128) 2024-04-30 14:55:12 -07:00
Erick Friis
6c938da302 langchain: release 0.1.17 (#21125) 2024-04-30 14:43:59 -07:00
Erick Friis
5f8a307565 infra: same tagging for langchain (#21126) 2024-04-30 14:43:45 -07:00
Eugene Yurtsev
bf95414758 langchain[minor]: enhance unit test to test imports recursively (#21122) 2024-04-30 17:05:53 -04:00
Eugene Yurtsev
e4f51f59a2 langchain[patch]: Migrate tools to treat community imports as optional (#21117)
Migrate tools to treat community imports as optional
2024-04-30 16:26:18 -04:00
Eugene Yurtsev
9e788f09c6 langchain[patch]: Migrate output parsers to support optional community imports (#21103)
Migrate output parsers
2024-04-30 16:24:29 -04:00
Eugene Yurtsev
3853fe9f64 langchain[patch]: Migrate graphs to use optional community imports (#21100)
Migrate graphs to use optional community imports.
2024-04-30 16:24:06 -04:00
Eugene Yurtsev
8658d52587 langchain[patch]: Upgrade prompts to optional imports (#21078)
Upgrades prompts module to use optional imports.

This code was generated with a migration script, but had to be adjusted
manually a bit.

Testing in preparation for applying this code modification across the
rest of the modules in langchain package to reverse the dependency
between langchain community and langchain.
2024-04-30 16:23:39 -04:00
Eugene Yurtsev
9b6d04a187 langchain[patch]: Migrate document transformers (#21098)
Migrate document transformers
2024-04-30 16:20:02 -04:00
Eugene Yurtsev
aec13a6123 langchain[patch]: Migrate callbacks module to use optional imports for community (#21086) 2024-04-30 16:19:13 -04:00
Erick Friis
8a62fb0570 community: release 0.0.36 (#21118) 2024-04-30 13:18:44 -07:00
Erick Friis
2407c353be core: release 0.1.48 (#21113) 2024-04-30 19:52:36 +00:00
Erick Friis
dbdfa3d34e infra: fix minimum version install to force pypi install (#21112) 2024-04-30 12:41:26 -07:00
Charlie Marsh
fd94aa8366 partner[patch]: Upgrade to Ruff v0.4.2 (#21108)
## Summary

No new diagnostics (given that the set of enabled rules hasn't changed),
but gains access to our new parser (much faster) and reduced false
positives all around.
2024-04-30 15:06:42 -04:00
Jamsheed Mistri
3e749369ef community[minor]: bump version of LayerupSecurity, add support for untrusted_input parameter (#19985)
**Description:** update version of LayerupSecurity package for the
Layerup Security integration. Add untrusted_input parameter.
2024-04-30 14:55:26 -04:00
fubuki8087
f1c3687aa5 community[patch]: Using the right encoding to parse the web page in RecursiveUrlLoader (#20632)
As shown in #13749, `RecursiveUrlLoader` has an encoding issue. This PR
solves it.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 18:41:36 +00:00
Jakub Pawłowski
b0b1a67771 community[patch]: Skip unexpected 404 HTTP Error in Arxiv download (#21042)
### Description:
When attempting to download PDF files from arXiv, an unexpected 404
error frequently occurs. This error halts the operation, regardless of
whether there are additional documents to process. As a solution, I
suggest implementing a mechanism to ignore and communicate this error
and continue processing the next document from the list.

Proposed Solution: To address the issue of unexpected 404 errors during
PDF downloads from arXiv, I propose implementing the following solution:

- Error Handling: Implement error handling mechanisms to catch and
handle 404 errors gracefully.
- Communication: Inform the user or logging system about the occurrence
of the 404 error.
- Continued Processing: After encountering a 404 error, continue
processing the remaining documents from the list without interruption.

This solution ensures that the application can handle unexpected errors
without terminating the entire operation. It promotes resilience and
robustness in the face of intermittent issues encountered during PDF
downloads from arXiv.

### Issue:
#20909 
### Dependencies:
none

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-30 18:29:22 +00:00
Erick Friis
b9c53e95b7 community: release 0.0.35 (#21104) 2024-04-30 17:48:56 +00:00
Eugene Yurtsev
3c064a757f core[minor],langchain[patch],community[patch]: Move storage interfaces to core (#20750)
* Move the storage interface to core
* Move the in-memory and file system implementations to core (see the sketch below)
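A minimal sketch of the moved pieces, assuming the in-memory implementation is now importable from `langchain_core.stores`:

```python
from langchain_core.stores import InMemoryStore

store = InMemoryStore()
store.mset([("k1", "v1"), ("k2", "v2")])
print(store.mget(["k1", "k2"]))   # ['v1', 'v2']
print(list(store.yield_keys()))   # ['k1', 'k2']
```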
2024-04-30 13:14:26 -04:00
Charlie Marsh
8f38b7a725 multiple: Remove unnecessary Ruff suppression comments (#21050)
## Summary

I ran `ruff check --extend-select RUF100 -n` to identify `# noqa`
comments that weren't having any effect in Ruff, and then `ruff check
--extend-select RUF100 -n --fix` on select files to remove all of the
unnecessary `# noqa: F401` violations. It's possible that these were
needed at some point in the past, but they're not necessary in Ruff
v0.1.15 (used by LangChain) or in the latest release.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-30 17:13:48 +00:00
Erick Friis
748f2ba9ea core: release 0.1.47 (#21094) 2024-04-30 09:22:05 -07:00
Erick Friis
efe27ef849 infra: tag non-langchain releases (#20805) 2024-04-30 16:15:46 +00:00
Eugene Yurtsev
c8f18a2524 langchain[patch]: Update import handling in adapters (#21079) 2024-04-30 10:55:29 -04:00
William FH
5c63ac3dd7 [Patch] Dedent docstring (#20959)
Technically a slight prompt-breaking change, but I think it's positive EV in
that it saves tokens and results in more sane / in-distribution prompts.
2024-04-30 07:40:57 -07:00
Eugene Yurtsev
845d8e0025 langchain[patch]: Update handling of deprecation warnings (#21083)
Chains should not be emitting deprecation warnings.
2024-04-30 10:30:23 -04:00
Christophe Bornet
5c77f45b06 community[minor]: Add async methods to CassandraCache and CassandraSemanticCache (#20654) 2024-04-30 10:27:44 -04:00
Christophe Bornet
d6e9bd3011 docs: Bump cassio min version in docs (#21081)
Cassio 0.6+ is recommended for async vector store (not blocking on
getting the embedding dimension) and for hybrid search support.
2024-04-30 10:25:37 -04:00
William FH
db14d4326d [Core] Feat Pretty Print Tool calls (#20997)
Right now, `tool_calls` are not included in the `pretty_print()` output.
Would be nice to show!


![image](https://github.com/langchain-ai/langchain/assets/13333726/6a0ffca3-d02f-4e18-bc76-513eeca2e964)
2024-04-30 07:14:43 -07:00
Kuro Denjiro
fa4124b821 community[minor]: add mintbase loader to langchain (#20089)
- **Add NEAR NFT loader**: "community: Load NFTs from the NEAR blockchain
using the Mintbase graph API"
- **Twitter handle:** Kurodenjiro

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-30 04:11:56 +00:00
Alexander Dicke
d7e12750df community[patch]: allows using text-generation-inference /generate route with HuggingFaceEndpoint (#20100)
- **Description:** allows using the /generate route of
`text-generation-inference` with the `HuggingFaceEndpoint`
2024-04-29 23:09:55 -04:00
Jonathan Evans
ea43c669f2 community[patch]: Fix Bedrock Mistral stop sequence request key (#20115)
- **Description:** Change Bedrock's Mistral stop sequence key mapping to
"stop" rather than "stop_sequences" which is the correct key [Bedrock
docs
link](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral.html)
`{
    "prompt": string,
    "max_tokens" : int,
    "stop" : [string],    
    "temperature": float,
    "top_p": float,
    "top_k": int
}`
- **Issue:** #20053 
- **Dependencies:** N/A
- **Twitter handle:** N/a
2024-04-29 20:14:36 -04:00
davidkgp
28b0b0d863 community[patch]: Fix for github issue #17690 (#20117)
…/17690

- **Fix Google Lens knowledge graph issue**: Fix for [No "knowledge_graph"
property in Google Lens API call from
SerpAPI](https://github.com/langchain-ai/langchain/issues/17690)
- **Description:** handled the existence of keys in the json response of
Google Lens

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-30 00:10:08 +00:00
高远
a7a4630bf4 community[patch]: Modify the text field type and add new exception handling (#20116)
Co-authored-by: gaoyuan <gaoyuan.20001218@bytedance.com>
2024-04-29 20:06:00 -04:00
Rahul Triptahi
c172611647 community[patch]: Add classifier_url argument in PebbloSafeLoader and documentation update. (#21030)
Description: Add classifier_url argument in PebbloSafeLoader.
Documentation: Updated PebbloSafeLoader documentation with above change
and new links for pebblo github pages.

---------

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-29 17:41:09 -04:00
Leonid Ganeline
08d08d7c83 docs: langchain docstrings updates (#21032)
Added missing docstrings. Formatted docstrings into a consistent format.
2024-04-29 17:40:44 -04:00
Leonid Ganeline
85094cbb3a docs: community docstring updates (#21040)
Added missing docstrings. Updated docstrings to a consistent format.
2024-04-29 17:40:23 -04:00
Rodrigo Nogueira
90f19028e5 community[patch]: Add maritalk streaming (sync and async) (#19203)
Co-authored-by: RosevalJr <rdmalajr@gmail.com>
Co-authored-by: Roseval Donisete Malaquias Junior <roseval@maritaca.ai>
2024-04-29 21:31:14 +00:00
Cahid Arda Öz
cc6191cb90 community[minor]: Add support for Upstash Vector (#20824)
## Description

Adding `UpstashVectorStore` to utilize [Upstash
Vector](https://upstash.com/docs/vector/overall/getstarted)!

#17012 was opened to add Upstash Vector to langchain but was closed to
wait for filtering. Now filtering has been added to Upstash Vector, so we are
opening a new PR. Additionally, the [embedding
feature](https://upstash.com/docs/vector/features/embeddingmodels) was
added, and we add this to our vector store as well (see the sketch below).
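A minimal sketch, assuming the store is exposed as `UpstashVectorStore` and accepts index credentials directly (URL and token are placeholders):

```python
from langchain_community.vectorstores.upstash import UpstashVectorStore
from langchain_openai import OpenAIEmbeddings

store = UpstashVectorStore(
    index_url="https://<index>.upstash.io",  # placeholder
    index_token="<token>",                   # placeholder
    embedding=OpenAIEmbeddings(),
)

store.add_texts(["LangChain integrates with Upstash Vector."])
print(store.similarity_search("upstash", k=1))
```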

## Dependencies

[upstash-vector](https://pypi.org/project/upstash-vector/) should be
installed to use `UpstashVectorStore`. Didn't update dependencies
because of [this comment in the previous
PR](https://github.com/langchain-ai/langchain/pull/17012#pullrequestreview-1876522450).

## Tests

Tests are added and they pass. Tests are naturally network bound since
Upstash Vector is offered through an API.

There was [a discussion in the previous PR about mocking the
unittests](https://github.com/langchain-ai/langchain/pull/17012#pullrequestreview-1891820567).
We didn't make changes to this end yet. We can update the tests if you
can explain how the tests should be mocked.

---------

Co-authored-by: ytkimirti <yusuftaha9@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 17:25:01 -04:00
Leonid Ganeline
1a2ff56cd8 core[patch]: docstring update (#21036)
Added missing docstrings. Updated docstrings to a consistent format.
2024-04-29 15:35:34 -04:00
Eugene Yurtsev
f479a337cc langchain[patch]: replace deprecated imports with imports from langchain_core (#21033)
* Output of running the migration script.
* Ran only against langchain code itself and not the unit tests.
2024-04-29 15:34:31 -04:00
Eugene Yurtsev
82d4afcac0 langchain[minor]: Code to handle dynamic imports (#20893)
Proposing to centralize code for handling dynamic imports. This allows treating langchain-community as an optional dependency.

---

The proposal is to scan the code base and to replace all existing imports with dynamic imports using this functionality.
2024-04-29 15:34:03 -04:00
Erick Friis
854ae3e1de mistralai: release 0.1.5, allow client passing in (#21034) 2024-04-29 17:14:26 +00:00
chyroc
3e241956d3 community[minor]: add coze chat model (#20770)
Add the Coze chat model, to call the coze.com APIs.
2024-04-29 12:26:16 -04:00
Eugene Yurtsev
29493bb598 cli[minor]: improve confirmation message with more details (#21027)
Improve confirmation message with more details
2024-04-29 12:20:42 -04:00
Eugene Yurtsev
aab78a37f3 cli[patch]: Ignore imports that change the name of the class (#21026)
Not currently handled by the migration script
2024-04-29 12:20:30 -04:00
Massimiliano Pronesti
ce89b34fc0 community[patch]: support hybrid search with threshold in Azure AI Search Retriever (#20907)
Support hybrid search with a score threshold -- similar to what we do
for similarity search.
2024-04-29 12:11:44 -04:00
Andrei Panferov
b3efa38cc0 community[patch]: GigaChat model selection fix (#20988)
Fixed the error where the model name was never actually put into the GigaChat
request payload, so requests always defaulted to `GigaChat-Lite`.

With this fix, model selection through
```python
import os
from langchain.chat_models.gigachat import GigaChat

chat = GigaChat(
    name="GigaChat-Pro", # <- HERE!!!!!
    ...
)
```
should actually work, as intended in
[here](804390ba4b/libs/community/langchain_community/llms/gigachat.py (L36)).

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-29 16:08:26 +00:00
Patrick McFadin
3331865f6b community[minor]: add Cassandra Database Toolkit (#20246)
**Description**: ToolKit and Tools for accessing data in a Cassandra
Database primarily for Agent integration. Initially, this includes the
following tools:
- `cassandra_db_schema` Gathers all schema information for the connected
database or a specific schema. Critical for the agent when determining
actions.
- `cassandra_db_select_table_data` Selects data from a specific keyspace
and table. The agent can pass parameters for a predicate and limits on
the number of returned records.
- `cassandra_db_query` Experimental alternative to
`cassandra_db_select_table_data` which takes a query string completely
formed by the agent instead of parameters. May be removed in future
versions.

Includes unit test and two notebooks to demonstrate usage. 

**Dependencies**: cassio
**Twitter handle**: @PatrickMcFadin

---------

Co-authored-by: Phil Miesle <phil.miesle@datastax.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 15:51:43 +00:00
Igor Brai
b3e74f2b98 community[minor]: add mojeek search util (#20922)
**Description:** This pull request introduces a new feature to community
tools, enhancing their search capabilities by integrating the Mojeek
search engine.
**Dependencies:** None

---------

Co-authored-by: Igor Brai <igor@mojeek.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-29 15:49:53 +00:00
hmn falahi
4822beb298 Ignore self/cls from required args of class functions in convert_to_openai_tool (#20691)
Removed redundant self/cls from required args of class functions in
_get_python_function_required_args:

```python
class MemberTool:
    def search_member(
            self,
            keyword: str,
            *args,
            **kwargs,
    ):
        """Search on members with any keyword like first_name, last_name, email

        Args:
            keyword: Any keyword of member
        """

        headers = dict(authorization=kwargs['token'])
        members = []
        try:
            members = request_(
                method='SEARCH',
                url=f'{service_url}/apiv1/members',
                headers=headers,
                json=dict(query=keyword),
            )

        except Exception as e:
            logger.info(e.__doc__)

        return members

convert_to_openai_tool(MemberTool.search_member)
```
expected result:
```
{'type': 'function', 'function': {'name': 'search_member', 'description': 'Search on members with any keyword like first_name, last_name, username, email', 'parameters': {'type': 'object', 'properties': {'keyword': {'type': 'string', 'description': 'Any keyword of member'}}, 'required': ['keyword']}}}
```

#20685

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 11:46:26 -04:00
Rahul Triptahi
a64a1943fd docs: Document update for load_extended_metadata in GoogleDriveLoader (#20950)
Document: Updated google_drive.ipynb for loading the following extended
metadata:
 - full_path - full path of the file(s) in Google Drive.
 - owner - owner of the file(s).
 - size - size of the file(s).

Code changes:
[langchain-google/pull/179.](https://github.com/langchain-ai/langchain-google/pull/179)
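A hypothetical sketch of the documented option, assuming `GoogleDriveLoader` in langchain-google-community exposes a `load_extended_metadata` flag (the folder id is a placeholder):

```python
from langchain_google_community import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id="<folder-id>",       # placeholder
    load_extended_metadata=True,   # hypothetical flag per the linked docs/PR
)
docs = loader.load()
print(docs[0].metadata.get("full_path"), docs[0].metadata.get("owner"))
```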

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-29 11:41:57 -04:00
Eugene Yurtsev
4f4ee8e2cf cli[patch]: Update migrations file manually (#21021)
We need to replace occurrences of RunnableMap in the code, not just the
import, so for now we don't replace RunnableMap.
2024-04-29 10:53:31 -04:00
Tomaz Bratanic
67428c4052 community[patch]: Neo4j enhanced schema (#20983)
Scan the database for example values and provide them to an LLM for
better inference of Text2cypher
2024-04-29 10:45:55 -04:00
Leonid Kuligin
dc70c23a11 docs: switched GCSLoaders docs to langchain-google-community (#20985)
- **Description:** switched GCSLoaders docs to
langchain-google-community
2024-04-29 10:45:11 -04:00
aditya thomas
8b59bddc03 anthropic[patch]: add tests for secret_str for api key (#20986)
**Description:** Add tests to check API keys are masked
**Issue:** Resolves
https://github.com/langchain-ai/langchain/issues/12165 for Anthropic
models
**Dependencies:** None
2024-04-29 10:39:14 -04:00
Pengcheng Liu
1fad39be1c community[minor]: Add LarkSuite wiki document loader. (#21016)
**Description:** Add LarkSuite wiki document loader. Refer to [LarkSuite
api document
](https://open.feishu.cn/document/server-docs/docs/wiki-v2/space-node/list)for
details.
**Issue:** None
**Dependencies:** None
**Twitter handle:** None
2024-04-29 10:37:50 -04:00
Tomaz Bratanic
d36332476c docs: Add neo4j relationship vector index docs (#20990)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 14:36:47 +00:00
Leonid Ganeline
dc7c06bc07 community[minor]: import fix (#20995)
Issue: when a third-party package is not installed and we need the user
to `pip install <package>`, an ImportError is usually raised.
But sometimes a `ValueError` or `ModuleNotFoundError` is raised instead, which
is bad for consistency.
Change: replaced the `ValueError` or `ModuleNotFoundError` with
`ImportError` whenever we raise an error with a `pip install <package>`
message (a minimal sketch of the pattern is shown below).
Note: ideally, we would replace all `try: import ... except ... raise ...` blocks with
helper functions like `import_aim`, or just use the existing
[langchain_core.utils.utils.guard_import](https://api.python.langchain.com/en/latest/utils/langchain_core.utils.utils.guard_import.html#langchain_core.utils.utils.guard_import).
But that would be a much bigger refactoring. @baskaryan please advise on
this.
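A minimal sketch of the standardized pattern (the package name is illustrative):

```python
def _import_somepkg():
    # Always raise ImportError (ModuleNotFoundError is a subclass of it) with a
    # `pip install` hint, instead of ValueError, when the optional dep is missing.
    try:
        import somepkg
    except ImportError as e:
        raise ImportError(
            "Could not import somepkg. Please install it with `pip install somepkg`."
        ) from e
    return somepkg
```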
2024-04-29 10:32:50 -04:00
Karim Lalani
2ddac9a7c3 experimental[minor]: Add bind_tools and with_structured_output functions to OllamaFunctions (#20881)
Implemented bind_tools for OllamaFunctions.
Made OllamaFunctions a subclass of ChatOllama.
Implemented with_structured_output for OllamaFunctions.

The integration unit test has been updated.
The notebook has been updated.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-29 14:13:33 +00:00
Eugene Yurtsev
d781560722 cli[minor]: Add ipynb support, add text_splitters (#20963) 2024-04-29 10:11:21 -04:00
Vadym Barda
5e0b6b3e75 docs: update langserve link in LCEL docs (#20992) 2024-04-29 09:06:10 -04:00
Aditya
07ce39bfe7 docs: updated tutorials for Image generation and Vector Search (#21000)
Description: docs: updated tutorials for Image generation and Vector
Search

@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-04-29 09:04:11 -04:00
Aditya
17bbb7d2a5 docs: updated tutorial for Gemini versions, included safety attribute updates (#21006)
Description:updated tutorial for Gemini versions, included safety
attribute updates

@lkuligin For review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
2024-04-29 09:01:54 -04:00
WilliamEspegren
804390ba4b community: Spider integration (#20937)
Added the [Spider.cloud](https://spider.cloud) document loader.
[Spider](https://github.com/spider-rs/spider) is the
[fastest](https://github.com/spider-rs/spider/blob/main/benches/BENCHMARKS.md)
and cheapest crawler that returns LLM-ready data.

```
- **Description:** Adds Spider data loader
- **Dependencies:** spider-client
- **Twitter handle:** @WilliamEspegren 
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: = <=>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-27 21:45:03 +00:00
Jamie Lemon
6342217b93 docs: Moves "Using PyMuPDF" to higher up the page. (#20832)
**Description:**
This PR moves the **PyMuPDF** PDF loader solution to be underneath
**PyPDF**. This is because it is the 2nd most popular PyPI package
after **PyPDF**.

Please refer to these numbers, at the time of writing as follows:

- PyPDF: 160 million (https://www.pepy.tech/projects/PyPDF2)
- PyMuPDF: 60 million (https://www.pepy.tech/projects/pymupdf)
- PDFPlumber: 23 million (https://www.pepy.tech/projects/pdfplumber)
- PDFMiner: 16 million (https://www.pepy.tech/projects/pdfminer)
- PyPDFium2: 8 million (https://www.pepy.tech/projects/pypdfium2)
- Unstructured: 8 million (https://www.pepy.tech/projects/unstructured)


Please note I am an active contributor to
https://github.com/pymupdf/PyMuPDF

Many thanks!

----

**Twitter handle:**
@artifex
2024-04-27 20:40:20 +00:00
Chouaieb Nemri
8097bec472 Added LogEntry, Any, Dict, List, Optional, TypedDict imports (#20970)
- **Description:** Updated docs: RAG streaming use-cases notebook with
LogEntry, Any, Dict, List, Optional, TypedDict imports
- **Twitter handle:** c_nemri

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-27 20:13:54 +00:00
ccurme
9ec7151317 fireworks: fix integration tests (#20973) 2024-04-27 19:49:46 +00:00
William FH
9fa9f05e5d Catch System Error in ast parse (#20961)
I can't seem to reproduce it, but I got this:

```
SystemError: AST constructor recursion depth mismatch (before=102, after=37)
```

The operation isn't critical for the actual forward pass, so it seems
preferable to expand the set of exceptions we catch.
2024-04-26 19:31:55 -07:00
YH
2aca7fcdcf core[patch]: Enhance link extraction with query parameters (#20259)
**Description**: This update enhances the `extract_sub_links` function
within the `langchain_core/utils/html.py` module to include query
parameters in the extracted URLs.

**Issue**: N/A

**Dependencies**: No additional dependencies required for this change.

**Twitter handle**: N/A
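
A small sketch of the behavior change; the keyword arguments are from memory and may differ slightly.

```python
from langchain_core.utils.html import extract_sub_links

html = '<a href="/docs/page?tab=usage">usage</a> <a href="https://other.site/x">x</a>'
links = extract_sub_links(
    html,
    url="https://example.com/docs/",
    prevent_outside=True,  # drop links pointing outside the base URL
)
print(links)  # with this change, the "?tab=usage" query string is preserved
```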

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 02:22:36 +00:00
CT
0e917e319b docs: Add langchainhub to pip install (#20185)
Added the langchainhub package to the pip install instructions, since it is
required for `from langchain import hub` to work.

Added sample code to add OpenAI key
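
A minimal sketch of the documented flow; the hub handle and key are placeholders.

```python
# pip install langchain langchainhub
import os
from langchain import hub

os.environ["OPENAI_API_KEY"] = "sk-..."  # placeholder, as in the added sample

prompt = hub.pull("rlm/rag-prompt")  # hub.pull requires the langchainhub package
print(prompt)
```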

Co-authored-by: Chi Yan Tang <100466443+poochiekittie@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 02:21:40 +00:00
Pamela Fox
45092a36a2 docs: Fix langgraph link (#20244)
Just a simple PR to fix a broken link. Apparently having backticks
outside a link makes it render as code.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 02:18:52 +00:00
Chip Davis
e818c75f8a infra: test directory loader multithreaded (#20281)
This is a unit test for #20230, which was a fix for using multithreaded
mode with the directory loader. @eyurtsev
2024-04-26 19:16:47 -07:00
Guilherme Zanotelli
f931a9ce60 community[patch]: Pass kwargs to SPARQLStore from RdfGraph (#20385)
This introduces `store_kwargs` which behaves similarly to `graph_kwargs`
on the `RdfGraph` object, which will enable users to pass `headers` and
other arguments to the underlying `SPARQLStore` object. I have also made
a [PR in `rdflib` to support passing
`default_graph`](https://github.com/RDFLib/rdflib/pull/2761).

Example usage:
```python
from langchain_community.graphs import RdfGraph

graph = RdfGraph(
    query_endpoint="http://localhost/sparql",
    standard="rdf",
    store_kwargs=dict(
        default_graph="http://example.com/mygraph"
    )
)
```

<!--If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.-->

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 01:38:29 +00:00
Chandre Van Der Westhuizen
e57cf73cf5 docs: Added MindsDB provider (#20322)
MindsDB integrates with LangChain, enabling users to deploy, serve, and
fine-tune models available via LangChain within MindsDB, making them
accessible to numerous data sources.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 01:36:08 +00:00
Jorge Piedrahita Ortiz
40b2e2916b community[minor]: Sambanova llm integration (#20955)
- **Description:** Added [Sambanova systems](https://sambanova.ai/)
integration, including sambaverse and sambastudio LLMs
- **Dependencies:**   sseclient-py  (optional)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 01:05:13 +00:00
Rahul Triptahi
955cf186d2 community[patch]: Ingest source, owner and full_path if present in Document's metadata. (#20949)
Description: The PebbloSafeLoader should first check for owner,
full_path and size in metadata before implementing its own logic.
Dependencies: None
Documentation: NA.

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-26 17:50:57 -07:00
Amine Djeghri
790ea75cf7 community[minor]: add exllamav2 library for GPTQ & EXL2 models (#17817)
Added 3 files : 
- Library : ExLlamaV2 
- Test integration
- Notebook

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-27 00:44:43 +00:00
Naveen Tatikonda
8bbdb4f6a0 community[patch]: Add OpenSearch as semantic cache (#20254)
### Description
Use OpenSearch vector store as Semantic Cache.
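
A minimal sketch, assuming the cache is exposed as `OpenSearchSemanticCache` with `opensearch_url` and `embedding` constructor arguments:

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import OpenSearchSemanticCache
from langchain_openai import OpenAIEmbeddings

# Cache LLM responses in OpenSearch, keyed by embedding similarity.
set_llm_cache(
    OpenSearchSemanticCache(
        opensearch_url="http://localhost:9200",
        embedding=OpenAIEmbeddings(),
    )
)
```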

### Twitter Handle
**@OpenSearchProj**

---------

Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
Co-authored-by: Harish Tatikonda <harishtatikonda@Harishs-MacBook-Air.local>
Co-authored-by: EC2 Default User <ec2-user@ip-172-31-31-155.ec2.internal>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 00:20:24 +00:00
Giacomo Berardi
61f14f00d7 docs: ElasticsearchCache in cache integrations documentation (#20790)
The package for LangChain integrations with Elasticsearch,
https://github.com/langchain-ai/langchain-elastic, is going to contain an
LLM cache integration in the next release (see
https://github.com/langchain-ai/langchain-elastic/pull/14). This is the
documentation contribution for the page dedicated to cache integrations.
2024-04-26 15:43:58 -07:00
Mayank Solanki
8c085fc697 community[patch]: Added a function from_existing_collection in Qdrant vector database. (#20779)
Issue: #20514 
The current implementation of `construct_instance` expects a `texts:
List[str]` argument that will be passed to the embedding function. This might
not be needed when we already have a client with a collection and a path and
don't want to add any text.

This PR adds a class method that returns a qdrant instance with an
existing client.
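
A rough sketch of the new class method; the argument names follow the PR description and may differ slightly in the merged version.

```python
from langchain_community.vectorstores import Qdrant
from langchain_openai import OpenAIEmbeddings

# Attach to a collection that already contains vectors; no text is embedded here.
qdrant = Qdrant.from_existing_collection(
    embedding=OpenAIEmbeddings(),
    path="./local_qdrant",          # on-disk storage created earlier
    collection_name="my_documents",
)
docs = qdrant.similarity_search("query about my documents", k=4)
```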

Here, every time
cb6e5e56c2/libs/community/langchain_community/vectorstores/qdrant.py (L1592)
`construct_instance` is called, this line sends some text for embedding
generation.

---------

Co-authored-by: Anush <anushshetty90@gmail.com>
2024-04-26 15:34:09 -07:00
Leonid Kuligin
893a924b90 core[minor], community[patch], langchain[patch]: move BaseChatLoader to core (#19607)
Thank you for contributing to LangChain!

- [ ] **PR title**: "core: move BaseChatLoader and BaseToolkit from
community"


- [ ] **PR message**: move BaseChatLoader and BaseToolkit

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-26 21:45:51 +00:00
Erick Friis
d4befd0cfb core: fix batch ordering test (#20952) 2024-04-26 21:17:26 +00:00
Eugene Yurtsev
8ed150b2fe cli[minor]: Fix bug to account for name changes (#20948)
* Fix bug to account for name changes / aliases
* Generate migration list from langchain to langchain_core
2024-04-26 15:45:11 -04:00
ccurme
989e4a92c2 (infra) pass input to test-release (#20947) 2024-04-26 15:17:40 -04:00
Eugene Yurtsev
2fa0ff1a2d cli[minor]: update code to generate migrations from langchain to community (#20946)
Updates code that generates migrations from langchain to community
2024-04-26 15:11:32 -04:00
Erick Friis
078c5d9bc6 infra: nonmaster release checkbox (#20945)
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-26 14:50:07 -04:00
Leonid Kuligin
d4aec8fc8f docs: adding langchain_google_community to the docs (#20665)
Thank you for contributing to LangChain!

- [ ] **PR title**: "docs: step1. adjusting langchain_community ->
langchain_google_community"


- [ ] 
- **Description:** step1. adjusting langchain_community ->
langchain_google_community
2024-04-26 18:49:03 +00:00
ccurme
bf16cefd18 langchain: deprecate create_structured_output_runnable (#20933) 2024-04-26 14:00:40 -04:00
Erick Friis
38eccab3ae upstage: release 0.1.3 (#20941) 2024-04-26 10:36:11 -07:00
Sean
e1c2e2fdfa upstage: Upstage Groundedness Check parameter update (#20914)
* Groundedness Check takes `str` or `list[Document]` as input.

* Deprecated `GroundednessCheck` due to its naming.
* Added `UpstageGroundednessCheck`.

* Hotfix for the Groundedness Check parameter:
  the name `query` was misleading, so it is now `answer` instead.
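
Roughly, the renamed tool is used as sketched below, assuming it accepts a dict with `context` and `answer` keys and returns a grounded/notGrounded/notSure label (and that `UPSTAGE_API_KEY` is set in the environment):

```python
from langchain_upstage import UpstageGroundednessCheck

groundedness_check = UpstageGroundednessCheck()

result = groundedness_check.invoke({
    "context": "Mauna Kea is an inactive volcano on the island of Hawai'i.",
    "answer": "Mauna Kea is 5,207.3 meters tall.",
})
print(result)  # e.g. "grounded", "notGrounded", or "notSure"
```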

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-26 17:34:05 +00:00
ccurme
84b8e67c9c mistral: release 0.1.4 (#20940) 2024-04-26 13:06:02 -04:00
ccurme
465fbaa30b openai: release 0.1.4 (#20939) 2024-04-26 09:56:49 -07:00
Eugene Yurtsev
12c906f6ce cli[minor]: Improve partner migrations (#20938)
This auto-generates partner migrations.

At the moment the migration is from community -> partner.

So one would need to run the migration script twice to go from langchain to partner.
2024-04-26 12:30:15 -04:00
Eugene Yurtsev
5653f36adc cli[minor]: Add script to generate migrations for partner packages (#20932)
Add script to help generate migrations.

This works well for partner packages. Migrations are generated based on run time rather than static analysis (much simpler to get the correct migrations implemented).

The script for generating migrations from langchain to community still needs work.
2024-04-26 11:17:20 -04:00
ccurme
fe1304afc4 openai: add unit test (#20931)
Test a helper function that was added earlier.
2024-04-26 15:02:19 +00:00
Eugene Yurtsev
6598757037 cli[minor]: Add first version of migrate (#20902)
Adds a first version of the migrate script.
2024-04-26 10:50:21 -04:00
Pengcheng Liu
d95e9fb67f docs: add tool calling example in Tongyi chat model integration. (#20925)
**Description:** add tool calling example in Tongyi chat model
integration.
  **Issue:** None
  **Dependencies:** None
2024-04-26 10:18:54 -04:00
Lei Zhang
9281841cfe community[patch]: fix integrated test case test_recursive_url_loader.py assertions (issue-20919) (#20920)
**Description:**
Fix the integration test case test_recursive_url_loader.py.

Local testing was successful:

```shell
(venv) lei@LeideMacBook-Pro community % poetry run pytest tests/integration_tests/document_loaders/test_recursive_url_loader.py
================================================================================ test session starts ================================================================================
platform darwin -- Python 3.11.4, pytest-7.4.4, pluggy-1.4.0 -- /Users/zhanglei/Work/github/langchain/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/zhanglei/Work/github/langchain/libs/community
configfile: pyproject.toml
plugins: syrupy-4.6.1, asyncio-0.20.3, cov-4.1.0, vcr-1.0.2, mock-3.12.0, anyio-3.7.1, dotenv-0.5.2, requests-mock-1.11.0, socket-0.6.0
asyncio: mode=Mode.AUTO
collected 6 items                                                                                                                                                                   

tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader PASSED                                                                 [ 16%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic PASSED                                                   [ 33%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader FAILED                                                                  [ 50%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent PASSED                                                                      [ 66%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_loading_invalid_url PASSED                                                                        [ 83%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties PASSED                                                   [100%]

===================================================================================== FAILURES ======================================================================================
__________________________________________________________________________ test_sync_recursive_url_loader ___________________________________________________________________________

    def test_sync_recursive_url_loader() -> None:
        url = "https://docs.python.org/3.9/"
        loader = RecursiveUrlLoader(
            url, extractor=lambda _: "placeholder", use_async=False, max_depth=2
        )
        docs = loader.load()
>       assert len(docs) == 23
E       AssertionError: assert 24 == 23
E        +  where 24 = len([Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/', 'content_type': 'text/html', 'title': '3.9.18 Documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/py-modindex.html', 'content_type': 'text/html', 'title': 'Python Module Index — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/download.html', 'content_type': 'text/html', 'title': 'Download — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/howto/index.html', 'content_type': 'text/html', 'title': 'Python HOWTOs — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/whatsnew/index.html', 'content_type': 'text/html', 'title': 'Whatâ\x80\x99s New in Python — Python 3.9.18 documentation', 'language': None}), Document(page_content='placeholder', metadata={'source': 'https://docs.python.org/3.9/c-api/index.html', 'content_type': 'text/html', 'title': 'Python/C API Reference Manual — Python 3.9.18 documentation', 'language': None}), ...])

tests/integration_tests/document_loaders/test_recursive_url_loader.py:38: AssertionError
================================================================================= warnings summary ==================================================================================
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
  /Users/zhanglei/.pyenv/versions/3.11.4/lib/python3.11/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor.
    k = self.parse_starttag(i)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================ slowest 5 durations ================================================================================
56.75s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
38.99s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
31.20s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
30.37s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
15.44s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
============================================================================== short test summary info ==============================================================================
FAILED tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader - AssertionError: assert 24 == 23
================================================================ 1 failed, 5 passed, 5 warnings in 172.97s (0:02:52) ================================================================
(venv) zhanglei@LeideMacBook-Pro community % poetry run pytest tests/integration_tests/document_loaders/test_recursive_url_loader.py
================================================================================ test session starts ================================================================================
platform darwin -- Python 3.11.4, pytest-7.4.4, pluggy-1.4.0 -- /Users/zhanglei/Work/github/langchain/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/zhanglei/Work/github/langchain/libs/community
configfile: pyproject.toml
plugins: syrupy-4.6.1, asyncio-0.20.3, cov-4.1.0, vcr-1.0.2, mock-3.12.0, anyio-3.7.1, dotenv-0.5.2, requests-mock-1.11.0, socket-0.6.0
asyncio: mode=Mode.AUTO
collected 6 items                                                                                                                                                                   

tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader PASSED                                                                 [ 16%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic PASSED                                                   [ 33%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader PASSED                                                                  [ 50%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent PASSED                                                                      [ 66%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_loading_invalid_url PASSED                                                                        [ 83%]
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties PASSED                                                   [100%]

================================================================================= warnings summary ==================================================================================
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
  /Users/zhanglei/.pyenv/versions/3.11.4/lib/python3.11/html/parser.py:170: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor.
    k = self.parse_starttag(i)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================ slowest 5 durations ================================================================================
46.99s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader_deterministic
32.43s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_async_recursive_url_loader
31.23s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_equivalent
30.75s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_async_metadata_necessary_properties
15.89s call     tests/integration_tests/document_loaders/test_recursive_url_loader.py::test_sync_recursive_url_loader
===================================================================== 6 passed, 5 warnings in 157.42s (0:02:37) =====================================================================
(venv) lei@LeideMacBook-Pro community % 
```

**Issue:** https://github.com/langchain-ai/langchain/issues/20919

**Twitter handle:** @coolbeevip
2024-04-26 10:00:08 -04:00
ccurme
7d8d0229fa remove placeholder error message (#20340) 2024-04-26 13:48:48 +00:00
William FH
4c437ebb9c Use lstv2 (#20747) 2024-04-25 16:51:42 -07:00
ccurme
891ae37437 langchain: support PineconeVectorStore in self query retriever (#20905)
`langchain_pinecone.Pinecone` is deprecated in favor of
`PineconeVectorStore`, and is currently a subclass of
`PineconeVectorStore`.
```python
@deprecated(since="0.0.3", removal="0.2.0", alternative="PineconeVectorStore")
class Pinecone(PineconeVectorStore):
    """Deprecated. Use PineconeVectorStore instead."""

    pass
```
2024-04-25 20:54:58 +00:00
Matt
28df4750ef community[patch]: Add initial tests for AzureSearch vector store (#17663)
**Description:** AzureSearch vector store has no tests. This PR adds
initial tests to validate the code can be imported and used.
**Issue:** N/A
**Dependencies:** azure-search-documents and azure-identity are added as
optional dependencies for testing

---------

Co-authored-by: Matt Gotteiner <[email protected]>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 20:42:01 +00:00
Dristy Srivastava
5f1d1666e3 community[patch]: Add support for pebblo server and client version (#20269)
**Description**:
_PebbloSafeLoader_: Add support for pebblo server and client version


**Documentation:** NA
**Unit test:** NA
**Issue:** NA
**Dependencies:**  None

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 20:39:17 +00:00
am-kinetica
b54b19ba1c community[minor]: Implemented Kinetica Document Loader and added notebooks (#20002)
**Kinetica Document Loader**: "community: a class to load
Documents from Kinetica"

- **Description:** implemented KineticaLoader in `kinetica_loader.py`
- **Dependencies:** install the Kinetica API using `pip install
gpudb==7.2.0.1`
2024-04-25 13:39:00 -07:00
Michael Schock
5e60d65917 experimental[patch]: return from HuggingGPT task executor task.run() exception (#20219)
**Description:** Fixes a bug in the HuggingGPT task execution logic
here:

      except Exception as e:
          self.status = "failed"
          self.message = str(e)
      self.status = "completed"
      self.save_product()

where a caught exception effectively just sets `self.message` and can
then throw an exception if, e.g., `self.product` is not defined.

**Issue:** None that I'm aware of.
**Dependencies:** None
**Twitter handle:** https://twitter.com/michaeljschock

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 20:16:39 +00:00
Anish Chakraborty
898362de81 core[patch]: improve comma separated list output parser to handle non-space separated list (#20434)
- **Description:** Changes
`langchain_core.output_parsers.CommaSeparatedListOutputParser` to handle
`,` as a delimiter alongside the previous implementation, which used `, `
as the delimiter.
- **Issue:** Started noticing that some results returned by LLMs were
not getting parsed correctly when the output contained `,` instead of `, `.
  - **Dependencies:** No
  - **Twitter handle:** not active on twitter.
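
A small sketch of the improved behavior:

```python
from langchain_core.output_parsers import CommaSeparatedListOutputParser

parser = CommaSeparatedListOutputParser()
print(parser.parse("apple,banana, cherry"))
# Previously only ", " was split on; now this yields roughly ['apple', 'banana', 'cherry']
```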


<!---
If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
-->
2024-04-25 20:10:56 +00:00
Michael Schock
63a07f52df experimental[patch]: remove \n from AutoGPT feedback_tool exit check (#20132) 2024-04-25 20:10:33 +00:00
Shengsheng Huang
fd1061e7bf community[patch]: add more data types support to ipex-llm llm integration (#20833)
- **Description**:
- **add support for more data types**: by default `IpexLLM` will load
the model in int4 format. This PR adds support for more data types such as
`sym_int5`, `sym_int8`, etc. Data formats like NF3, NF4, FP4 and FP8 are
only supported on GPU and will be added in a future PR.
    - Fix a small issue in saving/loading, update API docs
- **Dependencies**: `ipex-llm` library
- **Document**: In `docs/docs/integrations/llms/ipex_llm.ipynb`, added
instructions for saving/loading low-bit model.
- **Tests**: added new test cases to
`libs/community/tests/integration_tests/llms/test_ipex_llm.py`, added
config params.
- **Contribution maintainer**: @shane-huang
2024-04-25 12:58:18 -07:00
Rahul Triptahi
dc921f0823 community[patch]: Add semantic info to metadata, classified by pebblo-server. (#20468)
Description: Add support for semantic topics and entities. The
classification done by pebblo-server is now used to enhance the metadata of
Documents loaded by document loaders.
Dependencies: None
Documentation: Updated.

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-25 12:55:33 -07:00
Eugene Yurtsev
a5028b6356 cli[minor]: Add __version__ (#20903)
Add __version__ to cli
2024-04-25 15:51:33 -04:00
Jingpan Xiong
1202017c56 community[minor]: Add relyt vector database (#20316)
Co-authored-by: kaka <kaka@zbyte-inc.cloud>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: jingsi <jingsi@leadincloud.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 19:49:29 +00:00
davidefantiniIntel
f386f71bb3 community: fix tqdm import (#20263)
Description: Fix tqdm import in QuantizedBiEncoderEmbeddings
2024-04-25 19:44:53 +00:00
Andres Algaba
05ae8ca7d4 community[patch]: deprecate persist method in Chroma (#20855)
Thank you for contributing to LangChain!

- [x] **PR title**

- [x] **PR message**:
- **Description:** Deprecate the persist method in Chroma, as it no longer
exists in Chroma 0.4.x
    - **Issue:** #20851 
    - **Dependencies:** None
    - **Twitter handle:** AndresAlgaba1

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 19:42:03 +00:00
ccurme
fdabd3cdf5 mistral, openai: support custom tokenizers in chat models (#20901) 2024-04-25 15:23:29 -04:00
ccurme
6986e44959 docs: update chat model feature table (#20899) 2024-04-25 15:05:43 -04:00
ccurme
b8db73233c core, community: deprecate tool.__call__ (#20900)
Does not update docs.
2024-04-25 14:50:39 -04:00
merdan
52896258ee docs: hide model import in multiple_tools.ipynb (#20883)
**Description:** 
This PR removes an unnecessary code snippet from the documentation. The
snippet in question is not relevant to the content and does not
contribute to the overall understanding of the topic. It contained
redundant imports and unused code, potentially causing confusion for
readers.

**Issue:** 
There is no specific issue number associated with this change.

**Dependencies:** 
No additional dependencies are required for this change.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 18:47:22 +00:00
Tomaz Bratanic
520972fd0f community[patch]: Support passing graph object to Neo4j integrations (#20876)
To reuse the driver connection, we introduce passing the graph object to
Neo4j integrations.
2024-04-25 11:30:22 -07:00
Lei Zhang
748a6ae609 community[patch]: add HTTP response headers Content-Type to metadata of RecursiveUrlLoader document (#20875)
**Description:**
The RecursiveUrlLoader offers a link_regex parameter that can
filter out URLs. However, this filtering capability is limited, and if
the internal links of the website change, unexpected resources may be
loaded. These resources, such as font files, can cause problems in
subsequent embedding processing.

>
https://blog.langchain.dev/assets/fonts/source-sans-pro-v21-latin-ext_latin-regular.woff2?v=0312715cbf

We can add the Content-Type from the HTTP response headers to the document
metadata so developers can choose which resources to use.

For example, the following may be a good choice for text knowledge.

- text/plain - simple text file
- text/html - HTML web page
- text/xml - XML format file
- text/json - JSON format data
- application/pdf - PDF file
- application/msword - Word document

and ignore the following

- text/css - CSS stylesheet
- text/javascript - JavaScript script
- application/octet-stream - binary data
- image/jpeg - JPEG image
- image/png - PNG image
- image/gif - GIF image
- image/svg+xml - SVG image
- audio/mpeg - MPEG audio files
- video/mp4 - MP4 video file
- application/font-woff - WOFF font file
- application/font-ttf - TTF font file
- application/zip - ZIP compressed file

**Twitter handle:** @coolbeevip
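
A short sketch of how a developer might use the new metadata, assuming the header is surfaced under the `content_type` key (as in the test output further up this log):

```python
from langchain_community.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader("https://blog.langchain.dev/", max_depth=2)
docs = loader.load()

# Keep only textual resources, based on the new Content-Type metadata.
wanted = ("text/plain", "text/html", "application/pdf")
text_docs = [
    d for d in docs
    if d.metadata.get("content_type", "").startswith(wanted)
]
```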

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 11:29:41 -07:00
samanhappy
37cbbc00a9 docs: Fix broken link in agents.ipynb (#20872) 2024-04-25 10:42:06 -07:00
fzowl
a6b8ff23bd docs: Use voyage-law-2 in the examples (#20784)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


**Description:** Use the voyage-law-2 model in the VoyageAI text-embedding
examples.


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-04-25 10:41:36 -07:00
Erick Friis
eca3640af7 upstage: release 0.1.2 (#20898) 2024-04-25 10:41:19 -07:00
Pavlo Paliychuk
82b5bdc7a1 docs: Fix misplaced zep cloud example links (#20867)
Thank you for contributing to LangChain!

- [x] **PR title**: Fix misplaced zep cloud example links
- [x] **PR message**: 
- **Description:** Fixes misplaced links for vector store and memory zep
cloud examples

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-04-25 10:41:08 -07:00
Joan Fontanals
baefbfb14e community[minor]: add Jina Reranker in retrievers module (#19406)
- **Description:** Adapt JinaEmbeddings to run with the new Jina AI
Rerank API
- **Twitter handle:** https://twitter.com/JinaAI_


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 10:27:10 -07:00
Erick Friis
92969d49cb multiple: remove external repo mds (#20896)
api docs build doesn't tolerate them
2024-04-25 17:18:29 +00:00
Jason_Chen
53bb7dbd29 community[patch]: add BeautifulSoupTransformer remove_unwanted_classnames method (#20467)
Add the remove_unwanted_classnames method to the
BeautifulSoupTransformer class, which can filter more effectively.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 17:04:04 +00:00
YISH
ed26149a29 openai[patch]: Allow disablling safe_len_embeddings(OpenAIEmbeddings) (#19743)
An OpenAI API compatible server may not support `safe_len_embedding`;
use `disable_safe_len_embeddings=True` to disable it.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 09:45:52 -07:00
Bagatur
5b83130855 core[minor], langchain[patch], community[patch]: mv StructuredQuery (#20849)
mv StructuredQuery to core
2024-04-25 09:40:26 -07:00
Sean
540f384197 partner: Upstage quick documentation update (#20869)
* Updating the provider docs page. 
The RAG example was meant to be moved to cookbook, but was merged by
mistake.

* Fix bug in Groundedness Check

---------

Co-authored-by: JuHyung-Son <sonju0427@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-25 16:36:54 +00:00
Bagatur
ffad3985a1 core[patch]: Release 0.1.46 (#20891) 2024-04-25 15:40:17 +00:00
Mish Ushakov
6ccecf2363 community[minor]: added Browserbase loader (#20478) 2024-04-25 01:11:03 +00:00
aditya thomas
9e694963a4 docs: custom callback handlers page (#20494)
**Description:** Update to the Callbacks page on custom callback
handlers
**Issue:** #20493 
**Dependencies:** None

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 01:08:36 +00:00
Erick Friis
5da9dd1195 mistral: comment batching param (#20868)
Addresses #20523
2024-04-25 00:38:21 +00:00
Ivaylo Bratoev
7c5063ef60 infra: fix how Poetry is installed in the dev container (#20521)
Currently, when a new dev container is created, poetry does not work in
it with the error "No module named 'rapidfuzz'".

Install Poetry outside the project venv so that poetry and project
dependencies do not get mixed. Use pipx to install poetry securely in
its own isolated environment.

Issue: #12237

Twitter handle: https://twitter.com/ibratoev

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 17:33:25 -07:00
GustavoSept
c2d09a5186 experimental[patch]: Makes regex customizable in text_splitter.py (SemanticChunker class) (#20485)
- **Description:** Currently, the regex is static (`r"(?<=[.?!])\s+"`),
which is only useful for certain use cases. The current change only
moves this to be a parameter of split_text(), which adds flexibility
without making it more complex (as the default regex is still the same).
- **Issue:** Not applicable (I searched, no one seems to have created
this issue yet).
  - **Dependencies:** None.


_If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17._

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-25 00:32:40 +00:00
William FH
a936f696a6 [Core] Feat: update config CVar in tool.invoke (#20808) 2024-04-24 17:17:21 -07:00
Lei Zhang
2cd907ad7e text-splitters[patch]: fix MarkdownHeaderTextSplitter fails to parse headers with non-printable characters (#20645)
Description: MarkdownHeaderTextSplitter fails to parse headers with
non-printable characters. See #20643 for more.

The following is the official test case. Just replacing `# Foo\n\n` with
`\ufeff# Foo\n\n` will cause the test case to fail: the chunk metadata
comes back empty.

```python
def test_md_header_text_splitter_1() -> None:
    """Test markdown splitter by header: Case 1."""

    markdown_document = (
        "\ufeff# Foo\n\n"
        "    ## Bar\n\n"
        "Hi this is Jim\n\n"
        "Hi this is Joe\n\n"
        " ## Baz\n\n"
        " Hi this is Molly"
    )
    headers_to_split_on = [
        ("#", "Header 1"),
        ("##", "Header 2"),
    ]
    markdown_splitter = MarkdownHeaderTextSplitter(
        headers_to_split_on=headers_to_split_on,
    )
    output = markdown_splitter.split_text(markdown_document)
    expected_output = [
        Document(
            page_content="Hi this is Jim  \nHi this is Joe",
            metadata={"Header 1": "Foo", "Header 2": "Bar"},
        ),
        Document(
            page_content="Hi this is Molly",
            metadata={"Header 1": "Foo", "Header 2": "Baz"},
        ),
    ]
    assert output == expected_output
```

twitter: @coolbeevip

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-25 00:07:42 +00:00
jtanios
2968f20970 docs: git dependency name correction (#20662)
This PR corrects the name of the `git` python package to `GitPython`.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 23:43:44 +00:00
ccurme
481d3855dc patch: remove usage of llm, chat model __call__ (#20788)
- `llm(prompt)` -> `llm.invoke(prompt)`
- `llm(prompt=prompt` -> `llm.invoke(prompt)` (same with `messages=`)
- `llm(prompt, callbacks=callbacks)` -> `llm.invoke(prompt,
config={"callbacks": callbacks})`
- `llm(prompt, **kwargs)` -> `llm.invoke(prompt, **kwargs)`
2024-04-24 19:39:23 -04:00
Raghav Dixit
9b7fb381a4 community[patch]: LanceDB integration patch update (#20686)
Description:

- Added functionality: delete, index creation, using an existing
connection object, etc.
- Updated usage
- Added LanceDB Cloud / OSS support

`make lint_diff` and `make test` checks done
2024-04-24 16:27:43 -07:00
Nikita Pokidyshev
9e983c9500 langchain[patch]: fix agent_token_buffer_memory not working with openai tools (#20708)
- **Description:** fix a bug in the agent_token_buffer_memory
- **Issue:** agent_token_buffer_memory was not working with openai tools
- **Dependencies:** None
- **Twitter handle:** @pokidyshef
2024-04-24 15:51:58 -07:00
Salika Dave
6353991498 docs: [Retrieval > .. > PDF] update package installation instructions for Unstructured and PDFMiner (#20723)
**Description:** Adds the command to install packages required before
using _Unstructured_ and _PDFMiner_ from `langchain.community`
**Documentation Page Being Updated:** [LangChain > Retrieval > Document
loaders > PDF > Using
Unstructured](https://python.langchain.com/docs/modules/data_connection/document_loaders/pdf/#using-unstructured)
**Issue:** #20719 
**Dependencies:** no dependencies
**Twitter handle:** SalikaDave

<!--
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17. -->

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 22:24:11 +00:00
dpdjvhxm
a9e2e98708 docs: Update apache_age.ipynb (#20722)
typo
2024-04-24 22:18:59 +00:00
Erick Friis
1aef8116de upstage: release 0.1.1 (#20864) 2024-04-24 15:18:30 -07:00
junkeon
c8fd51e8c8 upstage: Add Upstage partner package LA and GC (#20651)
---------

Co-authored-by: Sean <chosh0615@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Sean Cho <sean@upstage.ai>
2024-04-24 15:17:20 -07:00
hsmtkk
5ecebf168c docs: imported List is not used (#20720)
# Description

Minor sample code fix

# Issue

Imported `List` is not used.

# Dependencies

N/A

# Twitter handle

N/A
2024-04-24 15:17:07 -07:00
Alex Lee
243ba71b28 langchain[patch]: add aprep_output method to langchain/chains/base.py (#20748)
## Description

Add `aprep_output` method to `langchain/chains/base.py`. Some downstream
`ChatMessageHistory` objects that use async connections require an async
way to append to the context.

It turned out that `ainvoke()` was calling `prep_output` which is
synchronous.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 22:16:25 +00:00
Harrison Chase
43c041cda5 support messages in messages out (#20862) 2024-04-24 14:58:58 -07:00
back2nix
a1614b88ac groq[patch]: groq proxy support (#20758)
# Proxy Fix for Groq Class 🐛 🚀

## Description
This PR fixes a bug related to proxy settings in the `Groq` class,
allowing users to connect to LangChain services via a proxy.

## Changes Made
- Fixed support for specifying proxy settings in the `Groq` class.
- Resolved the bug causing issues with proxy settings.
- Did not include unit tests or documentation updates.
- Did not run make format, make lint, and make test to ensure code
quality and functionality, because I couldn't get them to run (I don't
program in Python and couldn't run `ruff`).
- Ensured that the changes are backwards compatible.
- No additional dependencies were added to `pyproject.toml`.

### Error Before Fix
```python
Traceback (most recent call last):
  File "/home/bg/Documents/code/github.com/back2nix/test/groq/main.py", line 9, in <module>
    chat = ChatGroq(
           ^^^^^^^^^
  File "/home/bg/Documents/code/github.com/back2nix/test/groq/venv310/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "/home/bg/Documents/code/github.com/back2nix/test/groq/venv310/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ChatGroq
__root__
  Invalid `http_client` argument; Expected an instance of `httpx.AsyncClient` but got <class 'httpx.Client'> (type=type_error)
  ```
  
### Example usage after fix
  ```python3
import os

import httpx
from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq

chat = ChatGroq(
    temperature=0,
    groq_api_key=os.environ.get("GROQ_API_KEY"),
    model_name="mixtral-8x7b-32768",
    http_client=httpx.Client(
        proxies="socks5://127.0.0.1:1080",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
    http_async_client=httpx.AsyncClient(
        proxies="socks5://127.0.0.1:1080",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)

system = "You are a helpful assistant."
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("human", human)])

chain = prompt | chat
out = chain.invoke({"text": "Explain the importance of low latency LLMs"})

print(out)
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-24 21:58:03 +00:00
volodymyr-memsql
493afe4d8d community[patch]: add hybrid search to singlestoredb vectorstore (#20793)
Implemented the ability to enable full-text search within the
SingleStore vector store, offering users a versatile range of search
strategies. This enhancement allows users to seamlessly combine
full-text search with vector search, enabling the following search
strategies:

* Search solely by vector similarity.
* Conduct searches exclusively based on text similarity, utilizing
Lucene internally.
* Filter search results by text similarity score, with the option to
specify a threshold, followed by a search based on vector similarity.
* Filter results by vector similarity score before conducting a search
based on text similarity.
* Perform searches using a weighted sum of vector and text similarity
scores.

Additionally, integration tests have been added to comprehensively cover
all scenarios.
Updated notebook with examples.

CC: @baskaryan, @hwchase17

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-24 21:34:50 +00:00
Tomaz Bratanic
9efab3ed66 community[patch]: Add driver config param for neo4j graph (#20772)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-24 21:14:41 +00:00
Leonid Ganeline
13751c3297 community: tigergraph fixes (#20034)
- added a guard on the `pyTigerGraph` import
- added a missing example page in `docs/integrations/graphs/`
- formatted the `docs/integrations/providers/` page to the consistent
format and added links
2024-04-24 16:49:21 -04:00
Martin Kolb
0186e4e633 community[patch]: Advanced filtering for HANA Cloud Vector Engine (#20821)
- **Description:**
This PR adds support for advanced filtering to the integration of HANA
Vector Engine.
The newly supported filtering operators are: $eq, $ne, $gt, $gte, $lt,
$lte, $between, $in, $nin, $like, $and, $or

  - **Issue:** N/A
  - **Dependencies:** no new dependencies added

Added integration tests to:
`libs/community/tests/integration_tests/vectorstores/test_hanavector.py`

Description of the new capabilities in notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`
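
A hedged sketch of the new operators in use; the connection details and metadata field names are made up, and the exact shape of `$between` values should be checked against the notebook.

```python
from hdbcli import dbapi
from langchain_community.vectorstores.hanavector import HanaDB
from langchain_openai import OpenAIEmbeddings

connection = dbapi.connect(address="<host>", port=443, user="<user>", password="<password>")
db = HanaDB(connection=connection, embedding=OpenAIEmbeddings(), table_name="MY_DOCS")

# Combine several of the newly supported operators in one filter.
advanced_filter = {
    "$and": [
        {"quality": {"$in": ["good", "excellent"]}},
        {"year": {"$between": [2020, 2024]}},
        {"title": {"$like": "%vector%"}},
    ]
}
docs = db.similarity_search("best match", k=5, filter=advanced_filter)
```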
2024-04-24 13:47:27 -07:00
Alex Sherstinsky
12e5ec6de3 community: Support both Predibase SDK-v1 and SDK-v2 in Predibase-LangChain integration (#20859) 2024-04-24 13:31:01 -07:00
Erick Friis
8c95ac3145 docs, multiple: de-beta with_structured_output (#20850) 2024-04-24 19:34:57 +00:00
Nuno Campos
477eb1745c Better support for subgraphs in graph viz (#20840) 2024-04-24 12:32:52 -07:00
aditya thomas
a9c7d47c03 docs: update openai llm documentation (#20827)
**Description:** Bring OpenAI LLM page to the LCEL era
**Issue:** See discussion #20810
**Dependencies:** None
2024-04-24 12:26:57 -07:00
JeffKatzy
5ab3f9a995 community[patch]: standardize chat init args (#20844)
Thank you for contributing to LangChain!

community:perplexity[patch]: standardize init args

Updated pplx_api_key and request_timeout so that they are aliased to api_key
and timeout, respectively. Added a test that both continue to set the same
underlying attributes.
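
In other words (model name illustrative):

```python
from langchain_community.chat_models import ChatPerplexity

# New, standardized argument names...
llm = ChatPerplexity(model="sonar-small-chat", api_key="pplx-...", timeout=30)

# ...and the original names keep working, setting the same underlying attributes.
llm_legacy = ChatPerplexity(
    model="sonar-small-chat", pplx_api_key="pplx-...", request_timeout=30
)
```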

Related to
[20085](https://github.com/langchain-ai/langchain/issues/20085)

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 12:26:05 -07:00
Pavlo Paliychuk
70ae59bcfe docs: Update Zep Messaging, add links to Zep Cloud Docs (#20848)
Thank you for contributing to LangChain!

- [x] **PR title**: docs: Update Zep Messaging, add links to Zep Cloud
Docs

- [x] **PR message**: 
- **Description:** This PR updates Zep messaging in the docs + links to
Langchain Zep Cloud examples in our documentation
    - **Twitter handle:** @paulpaliychuk51


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-04-24 19:14:54 +00:00
Massimiliano Pronesti
8d1167b32f community[patch]: add support for similarity_score_threshold search in… (#20852)
See
https://github.com/langchain-ai/langchain/issues/20600#issuecomment-2075569338
for details.

@chrislrobert
2024-04-24 19:14:33 +00:00
Bagatur
87d31a3ec0 docs: contributing note (#20843) 2024-04-24 10:41:19 -07:00
Eugene Yurtsev
d8aa72f51d core[minor],langchain[patch]: Move base indexing interface and logic to core (#20667)
This PR moves the interface and the logic to core.

The following changes to namespaces:


`indexes` -> `indexing`
`indexes._api` -> `indexing.api`


Testing code is intentionally duplicated for now since it's testing
different
implementations of the record manager (in-memory vs. SQL).

Common logic will need to be pulled out into the test client.


A follow up PR will move the SQL based implementation outside of
LangChain.
2024-04-24 13:18:42 -04:00
ccurme
3bcfbcc871 groq: handle null queue_time (#20839) 2024-04-24 09:50:09 -07:00
Eugene Yurtsev
30e48c9878 core[patch],community[patch]: Move file chat history back to community (#20834)
Marking as patch since we haven't had releases in between. This just reverts part of a PR from yesterday.
2024-04-24 12:47:25 -04:00
ccurme
6debadaa70 groq: bump core (#20838) 2024-04-24 11:51:46 -04:00
Erick Friis
7984206c95 groq: release 0.1.3 (#20836)
Fixes #20811
2024-04-24 08:06:06 -07:00
Nestor Qin
9111d3a636 community[patch]: Fix message formatting for Anthropic models on Amazon Bedrock (#20801)
**Description:**
This PR fixes an issue in the message formatting function for Anthropic
models on Amazon Bedrock.

Currently, the LangChain BedrockChat model will crash if it uses Anthropic
models and the model returns a message of the following type:
- `AIMessageChunk`

Moreover, when using BedrockChat for building an Agent, the following
message types will trigger the same issue too:
- `HumanMessageChunk`
- `FunctionMessage`

**Issue:**
https://github.com/langchain-ai/langchain/issues/18831

**Dependencies:**
No.

**Testing:**
Manually tested. The following code was failing before the patch and
works after.

```
@tool
def square_root(x: str):
    "Useful when you need to calculate the square root of a number"
    return math.sqrt(int(x))

llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={ "temperature": 0.0 },
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", FUNCTION_CALL_PROMPT),
        ("human", "Question: {user_input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

tools = [square_root]
tools_string = format_tool_to_anthropic_function(square_root)

agent = (
        RunnablePassthrough.assign(
            user_input=lambda x: x['user_input'],
            agent_scratchpad=lambda x: format_to_openai_function_messages(
                x["intermediate_steps"]
            )
        )
        | prompt
        | llm
        | AnthropicFunctionsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, return_intermediate_steps=True)
output = agent_executor.invoke({
    "user_input": "What is the square root of 2?",
    "tools_string": tools_string,
})
```
List of messages returned from Bedrock:
```
<SystemMessage> content='You are a helpful assistant.'
<HumanMessage> content='Question: What is the square root of 2?'
<AIMessageChunk> content="Okay, let's calculate the square root of 2.<scratchpad>\nTo calculate the square root of a number, I can use the square_root tool:\n\n<function_calls>\n  <invoke>\n    <tool_name>square_root</tool_name>\n    <parameters>\n      <__arg1>2</__arg1>\n    </parameters>\n  </invoke>\n</function_calls>\n</scratchpad>\n\n<function_results>\n<search_result>\nThe square root of 2 is approximately 1.414213562373095\n</search_result>\n</function_results>\n\n<answer>\nThe square root of 2 is approximately 1.414213562373095\n</answer>" id='run-92363df7-eff6-4849-bbba-fa16a1b2988c'"
<FunctionMessage> content='1.4142135623730951' name='square_root'
```
2024-04-23 22:40:39 +00:00
ccurme
06b04b80b8 groq: fix warning filter for integration test (#20806) 2024-04-23 18:11:41 -04:00
ccurme
5a3c65a756 standard tests: add xfails (#20659) 2024-04-23 17:14:16 -04:00
Erick Friis
ddc2274aea standard-tests: split tool calling test (#20803)
just making it a bit easier to grok
2024-04-23 20:59:45 +00:00
ccurme
6622829c67 mistral: catch GatedRepoError, release 0.1.3 (#20802)
https://github.com/langchain-ai/langchain/issues/20618

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-23 20:56:42 +00:00
Eugene Yurtsev
a7c347ab35 langchain[patch]: Update evaluation logic that instantiates a default LLM (#20760)
Favor langchain_openai over langchain_community for evaluation logic.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-23 16:09:32 -04:00
Eugene Yurtsev
72f720fa38 langchain[major]: Remove default instantations of LLMs from VectorstoreToolkit (#20794)
Remove default instantiation from vectorstore toolkit.
2024-04-23 16:09:14 -04:00
ccurme
42de5168b1 langchain: deprecate LLMChain, RetrievalQA, and ConversationalRetrievalChain (#20751) 2024-04-23 15:55:34 -04:00
Erick Friis
30c7951505 core: use qualname in beta message (#20361) 2024-04-23 11:20:13 -07:00
Aliaksandr Kuzmik
5560cc448c community[patch]: fix CometTracer bug (#20796)
Hi! My name is Alex, I'm an SDK engineer from
[Comet](https://www.comet.com/site/)

This PR updates the `CometTracer` class.

Fixed an issue where `CometTracer` failed while logging data to Comet
because the data is not JSON-encodable.

The problem was in some of the `Run` attributes that could contain
non-default types inside; now these attributes are taken not from the
run instance, but from the `run.dict()` return value.
2024-04-23 13:24:41 -04:00
Eugene Yurtsev
1c89e45c14 langchain[major]: breaks some chains to remove hidden defaults (#20759)
Breaks some chains in langchain to remove hidden chat model / llm instantiation.
2024-04-23 11:11:40 -04:00
Eugene Yurtsev
ad6b5f84e5 community[patch],core[minor]: Move in memory cache implementation to core (#20753)
This PR moves the InMemoryCache implementation from community to core.
2024-04-23 11:10:11 -04:00
Stefano Ottolenghi
4f67ce485a docs: Fix typo to render list (#20774)
This _should_ fix the currently broken list in the [Neo4jVector
page](https://python.langchain.com/docs/integrations/vectorstores/neo4jvector/).

![Screenshot from 2024-04-23
08-40-37](https://github.com/langchain-ai/langchain/assets/114478074/ab5ad622-879e-4764-93db-5f502eae479b)
2024-04-23 14:46:58 +00:00
Eugene Yurtsev
a2cc9b55ba core[patch]: Remove autoupgrade to addable dict in Runnable/RunnableLambda/RunnablePassthrough transform (#20677)
Causes an issue for this code

```python
from langchain.chat_models.openai import ChatOpenAI
from langchain.output_parsers.openai_tools import JsonOutputToolsParser
from langchain.schema import SystemMessage

prompt = SystemMessage(content="You are a nice assistant.") + "{question}"

llm = ChatOpenAI(
    model_kwargs={
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "web_search",
                    "description": "Searches the web for the answer to the question.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "query": {
                                "type": "string",
                                "description": "The question to search for.",
                            },
                        },
                    },
                },
            }
        ],
    },
    streaming=True,
)

parser = JsonOutputToolsParser(first_tool_only=True)

llm_chain = prompt | llm | parser | (lambda x: x)


for chunk in llm_chain.stream({"question": "tell me more about turtles"}):
    print(chunk)

# message = llm_chain.invoke({"question": "tell me more about turtles"})

# print(message)
```

Instead, by definition, we'll assume that RunnableLambdas consume the
entire stream, and that if the stream isn't addable then the last
message of the stream is the one in the usable format.

---

If users want to use addable dicts, they can wrap the dict in an
AddableDict class.
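A brief sketch of that workaround (assuming `AddableDict` is importable from `langchain_core.runnables.utils`):

```python
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.utils import AddableDict

# Returning an AddableDict instead of a plain dict keeps the output
# addable, so downstream consumers can still accumulate streamed chunks.
step = RunnableLambda(lambda x: AddableDict(answer=x))

for chunk in step.stream("hello"):
    print(chunk)
```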

---

Likely, need to follow up with the same change for other places in the
code that do the upgrade
2024-04-23 10:35:06 -04:00
Oleksandr Yaremchuk
9428923bab experimental[minor]: upgrade the prompt injection model (#20783)
- **Description:** In January, Laiyer.ai became part of ProtectAI, which
means the model is now owned by ProtectAI. In addition to that,
yesterday, we released a new version of the model addressing issues the
LangChain community and others mentioned to us about false positives.
The new model has better accuracy compared to the previous version,
and we thought the LangChain community would benefit from using the
[latest version of the
model](https://huggingface.co/protectai/deberta-v3-base-prompt-injection-v2).
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** @alex_yaremchuk
2024-04-23 10:23:39 -04:00
Eugene Yurtsev
645b1e142e core[minor],langchain[patch],community[patch]: Move InMemory and File implementations of Chat History to core (#20752)
This PR moves the implementations for chat history to core, so it's
easier to determine which dependencies need to be broken and where to add
deprecation warnings.
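A small sketch, assuming the in-memory implementation is exposed as `InMemoryChatMessageHistory` in `langchain_core.chat_history`:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory

history = InMemoryChatMessageHistory()
history.add_user_message("hi!")
history.add_ai_message("hello, how can I help?")
print(history.messages)  # [HumanMessage(...), AIMessage(...)]
```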
2024-04-23 10:22:11 -04:00
ccurme
7a922f3e48 core, openai: support custom token encoders (#20762) 2024-04-23 13:57:05 +00:00
Chen94yue
b481b73805 Update custom_retriever.ipynb (#20776)
Fixed an error in the sample code to ensure that the code can run
directly.

2024-04-23 13:47:08 +00:00
Bagatur
ed980601e1 docs: update examples in api ref (#20768) 2024-04-23 00:47:52 +00:00
Bagatur
be51cd3bc9 docs: fix api ref link autogeneration (#20766) 2024-04-22 17:36:41 -07:00
monke111
c807f0a6dd Update google_drive.ipynb (#20731)
The langchain_community.document_loaders version is deprecated; updated to
the new langchain_google_community package.

2024-04-22 23:30:46 +00:00
Katarina Supe
dc61e23886 docs: update Memgraph docs (#20736)
- **Description:** Memgraph Platform is being run differently now so I
updated this (I am DX engineer from Memgraph).
2024-04-22 19:27:12 -04:00
Tabish Mir
6a0d44d632 docs: Fix link for partition_pdf in Semi_Structured_RAG.ipynb cookbook (#20763)

- **Description:** Fix incorrect link to unstructured-io `partition_pdf`
section
2024-04-22 23:22:55 +00:00
Bagatur
fa4d6f9f8b docs: install partner pkgs vercel (#20761) 2024-04-22 23:08:02 +00:00
Christophe Bornet
0ae5027d98 community[patch]: Remove usage of deprecated StoredBlobHistory in CassandraChatMessageHistory (#20666) 2024-04-22 17:11:05 -04:00
Bagatur
eb18f4e155 infra: rm sep repo partner dirs (#20756)
so you can `poetry run pip install -e libs/partners/*/` to your heart's
content
2024-04-22 14:05:39 -07:00
Bagatur
2a11a30572 docs: automatically add api ref links (#20755)
![Screenshot 2024-04-22 at 1 51 13
PM](https://github.com/langchain-ai/langchain/assets/22008038/b8b09fec-3800-4b97-bd26-5571b8308f4a)
2024-04-22 14:05:29 -07:00
Eugene Yurtsev
936c6cc74a langchain[patch]: Add missing deprecation for openai adapters (#20668)
Add missing deprecation for openai adapters
2024-04-22 14:05:55 -04:00
Eugene Yurtsev
38adbfdf34 community[patch],core[minor]: Move BaseToolKit to core.tools (#20669) 2024-04-22 14:04:30 -04:00
Mark Needham
ce23f8293a Community patch clickhouse make it possible to not specify index (#20460)
Vector indexes in ClickHouse are experimental at the moment and can
sometimes break/change behaviour. So this PR makes it possible to say
that you don't want to specify an index type.

Any queries against the embedding column will be brute force/linear
scan, but that gives reasonable performance for small-medium dataset
sizes.
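A hedged sketch, assuming the option is surfaced through the `index_type` setting on `ClickhouseSettings` (names may differ slightly from the final implementation):

```python
from langchain_community.vectorstores.clickhouse import Clickhouse, ClickhouseSettings

# index_type=None -> no vector index is created; similarity search falls back
# to a brute-force scan over the embedding column.
settings = ClickhouseSettings(index_type=None)
# vectorstore = Clickhouse(embedding=my_embeddings, config=settings)
```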

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-22 10:46:37 -07:00
ccurme
c010ec8b71 patch: deprecate (a)get_relevant_documents (#20477)
- `.get_relevant_documents(query)` -> `.invoke(query)`
- `.get_relevant_documents(query=query)` -> `.invoke(query)`
- `.get_relevant_documents(query, callbacks=callbacks)` ->
`.invoke(query, config={"callbacks": callbacks})`
- `.get_relevant_documents(query, **kwargs)` -> `.invoke(query,
**kwargs)`
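A hedged before/after sketch of the mapping above, using a toy retriever (`ToyRetriever` is hypothetical and exists only to make the snippet runnable):

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToyRetriever(BaseRetriever):
    """Hypothetical retriever used only to illustrate the migration."""

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [Document(page_content=f"stub result for: {query}")]


retriever = ToyRetriever()

# Deprecated:
# docs = retriever.get_relevant_documents("turtles", callbacks=[])
# Preferred:
docs = retriever.invoke("turtles", config={"callbacks": []})
print(docs)
```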

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-22 11:14:53 -04:00
A Noor
939d113d10 docs: Fixed grammar mistake (#20697)
Description: Changed "You are" to "You are a". Grammar issue.
Dependencies: None

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-22 02:55:05 +00:00
Matheus Henrique Raymundo
bb69819267 community: Fix the stop sequence key name for Mistral in Bedrock (#20709)
Fixing the wrong stop sequence key name that causes an error on AWS
Bedrock.
You can check the MistralAI bedrock parameters
[here](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-mistral.html)
This change fixes this
[issue](https://github.com/langchain-ai/langchain/issues/20095)
2024-04-21 20:06:06 -04:00
Bagatur
1c7b3c75a7 community[patch], experimental[patch]: support tool-calling sql and p… (#20639)
d agents
2024-04-21 15:43:09 -07:00
Bagatur
d0cee65cdc langchain[patch]: langchain-pinecone self query support (#20702) 2024-04-21 15:42:39 -07:00
Leonid Kuligin
5ae738c4fe docs: on google-genai vs google-vertexai (#20713)
- **Description:** added a description of the differences between
langchain_google_genai and langchain_google_vertexai
2024-04-21 12:53:19 -07:00
shumway743
cb6e5e56c2 community[minor]: add graph store implementation for apache age (#20582)
**Description:** implemented GraphStore class for Apache Age graph db

**Dependencies:** depends on psycopg2

Unit and integration tests included. Formatting and linting have been
run.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-20 14:31:04 -07:00
Christophe Bornet
c909ae0152 community[minor]: Add async methods to CassandraVectorStore (#20602)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-20 02:09:58 +00:00
Leonid Ganeline
06d18c106d langchain[patch]: example_selector import fix (#20676)
Cleaned up updated imports
2024-04-19 21:42:18 -04:00
Leonid Ganeline
d6470aab60 langchain: docstore import fix (#20678)
Cleaned up imports
2024-04-19 21:41:36 -04:00
Leonid Ganeline
3a750e130c templates: utilities import fix (#20679)
Updated imports from `from langchain.utilities` to `from
langchain_community.utilities`
2024-04-19 21:41:15 -04:00
Dmitry Tyumentsev
f111efeb6e community[patch]: YandexGPT API add ability to disable request logging (#20670)
Closes (#20622)

Added the ability to [disable logging of requests to
YandexGPT](https://yandex.cloud/en/docs/foundation-models/operations/yandexgpt/disable-logging).
2024-04-19 21:40:37 -04:00
Erick Friis
e5f5d9ff56 docs: aws listing (#20674) 2024-04-19 21:27:35 +00:00
Mateusz Szewczyk
75ffe51bbe ibm: Add support for Embedding Models (#20647)
---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-19 20:56:24 +00:00
Erick Friis
73809817ff community: release 0.0.34 (#20672) 2024-04-19 12:44:41 -07:00
Tomaz Bratanic
e4b38e2822 Update neo4j cypher templates to the function callback (#20515)
Update Neo4j Cypher templates to use function callback to pass context
instead of passing it in user prompt.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-19 18:33:32 +00:00
Tomaz Bratanic
3d9b26fc28 Update neo4j vector documentation (#20455)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-19 18:32:13 +00:00
Tomaz Bratanic
8c08cf4619 community: Add support for relationship indexes in neo4j vector (#20657)
Neo4j has added relationship vector indexes.
We can't populate them, but we can use existing indexes for retrieval
2024-04-19 11:22:42 -07:00
Erick Friis
940242c1ec core: release 0.1.45 (#20664) 2024-04-19 09:55:02 -07:00
Saurabh Chalke
3dd6266bcc docs: Remove Duplicate --quiet Flag in Installation Command in LangSmith Docs (#20121)
**Description:** This pull request removes a duplicated `--quiet` flag
in the pip install command found in the LangSmith Walkthrough section of
the documentation.

**Issue:** N/A

**Dependencies:** None
2024-04-19 11:16:44 -04:00
Aditya
6a97448928 Updated Tutorials for Vertex Vector Search (#20376)
- **Description:** Updated tutorials for Vertex Vector Search
- **Issue:** NA
- **Dependencies:** NA

@lkuligin for review

---------

Co-authored-by: adityarane@google.com <adityarane@google.com>
Co-authored-by: Leonid Kuligin <lkuligin@yandex.ru>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-19 10:38:00 -04:00
Boris Djurdjevic
c5aab9afe3 docs: Fix minor typo in data_connection/document_loaders/custom (#20648)
**Description:**
Minor documentation typo fix in
`data_connection/document_loaders/custom`: `thta's` -> `that's`
2024-04-19 14:17:00 +00:00
Souls-R
36084e7500 docs: fix variable name typo in example code (#20658)
This pull request corrects a mistake in the variable name within the
example code. The variable doc_schema has been changed to dog_schema to
fix the error.
2024-04-19 14:08:25 +00:00
Leonid Ganeline
beebd73f95 docs: integrations/retrievers cleanup (#20357)
Fixed format inconsistencies; added descriptions, links.
2024-04-19 10:02:41 -04:00
Leonid Ganeline
0b99e9201d docs: providers alibaba update (#20560)
Added missed integrations to the Alibaba Cloud provider page
2024-04-18 23:11:17 -07:00
Leonid Ganeline
27a4682415 docs: imports update (#20625)
Updated imports in docs

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-18 23:04:07 -07:00
Ethan Yang
53ae77b13e docs: Update openvino example documents links (#20638) 2024-04-18 22:57:28 -07:00
Sivaudha
baedc3ec0a langchain[minor]: Databricks vector search self query integration (#20627)
- Enable self querying feature for databricks vector search

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-19 03:44:38 +00:00
ccurme
6d530481c1 openai: fix allowed block types (#20636) 2024-04-18 22:12:57 -04:00
Erick Friis
764871f97d infra: add test-doc-imports to ci failure (#20637) 2024-04-19 02:06:57 +00:00
Erick Friis
5c216ad08f upstage[patch]: un-xfail tool calling test, release 0.1.0 (#20635) 2024-04-19 02:02:21 +00:00
Nuno Campos
48307e46a3 core[patch]: Fix runnable map ser/de (#20631) 2024-04-18 18:52:33 -07:00
Charlie Holtz
1cbab0ebda community: update Replicate to work with official models (#20633)
Description: you don't need to pass a version for Replicate official
models. That was broken on LangChain until now!

You can now run: 

```
llm = Replicate(
    model="meta/meta-llama-3-8b-instruct",
    model_kwargs={"temperature": 0.75, "max_length": 500, "top_p": 1},
)
prompt = """
User: Answer the following yes/no question by reasoning step by step. Can a dog drive a car?
Assistant:
"""
llm(prompt)
```

I've updated the replicate.ipynb to reflect that.

twitter: @charliebholtz

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-19 01:43:40 +00:00
Congyu
dd5139e304 community[patch]: truncate zhipuai temperature and top_p parameters to [0.01, 0.99] (#20261)
ZhipuAI API only accepts `temperature` parameter between `(0, 1)` open
interval, and if `0` is passed, it responds with status code `400`.

However, 0 and 1 are often accepted by other APIs; for example, OpenAI
allows the closed range `[0, 2]` for temperature.

This PR truncates the temperature parameter passed to `[0.01, 0.99]` to
improve compatibility between langchain's ecosystem and ZhipuAI
(e.g., ragas `evaluate` often generates temperature 0, which results in
a lot of 400 invalid responses). The PR also truncates the `top_p` parameter
since it has the same restriction.
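A minimal sketch of the clamping described above (illustrative only, not the exact code in the integration):

```python
def _clamp_zhipuai_param(value: float, low: float = 0.01, high: float = 0.99) -> float:
    """Clamp temperature/top_p into ZhipuAI's accepted open interval (0, 1)."""
    return min(max(value, low), high)


assert _clamp_zhipuai_param(0.0) == 0.01  # e.g. ragas evaluate passing temperature 0
assert _clamp_zhipuai_param(1.0) == 0.99
assert _clamp_zhipuai_param(0.7) == 0.7
```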

Reference: [glm-4 doc](https://open.bigmodel.cn/dev/api#glm-4) (which
unfortunately is in Chinese though).

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-19 01:31:30 +00:00
Lance Martin
d5c22b80a5 community[patch]: Fix Ollama for LLaMA3 (#20624)
We see verbose generations w/ LLaMA3 and Ollama - 

https://smith.langchain.com/public/88c4cd21-3d57-4229-96fe-53443398ca99/r

--- 

Fix here implies that when stop was being set to an empty list, the
stream had no conditions under which to stop, which could lead to
excessive or unintended output.

Test LLaMA2 - 

https://smith.langchain.com/public/57dfc64a-591b-46fa-a1cd-8783acaefea2/r

Test LLaMA3 - 

https://smith.langchain.com/public/76ff5f47-ac89-4772-a7d2-5caa907d3fd6/r

https://smith.langchain.com/public/a31d2fad-9094-4c93-949a-964b27630ccb/r

Test Mistral -

https://smith.langchain.com/public/a4fe7114-c308-4317-b9fd-6c86d31f1c5b/r

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-19 00:20:32 +00:00
Erick Friis
726234eee5 infra: fix doc imports ci (#20629) 2024-04-18 23:42:03 +00:00
Erick Friis
3425988de7 core: deprecation default to qualname (#20578) 2024-04-18 15:35:17 -07:00
hulitaitai
7d0a008744 community[minor]: Add audio-parser "faster-whisper" in audio.py (#20012)
faster-whisper is a reimplementation of OpenAI's Whisper model using
CTranslate2, which is up to 4 times faster than openai/whisper for the
same accuracy while using less memory. The efficiency can be further
improved with 8-bit quantization on both CPU and GPU.

It can automatically detect the following 14 languages and transcribe
the text into their respective languages: en, zh, fr, de, ja, ko, ru,
es, th, it, pt, vi, ar, tr.

The GitHub repository for faster-whisper is:
    https://github.com/SYSTRAN/faster-whisper

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-18 20:50:59 +00:00
Guangdong Liu
e3c2431c5b community[patch]: Fix error in Apache Doris insert (#19989)
- **Issue:** #19886
2024-04-18 16:34:32 -04:00
naaive
6f0d4f3f09 docs: Update body_func to hybrid_query in ElasticsearchRetriever (#20498) 2024-04-18 20:19:02 +00:00
Tomaz Bratanic
27370b679e community[patch]: Ignore null and invalid embedding values for neo4j metadata filtering (#20558) 2024-04-18 16:15:45 -04:00
Eugene Yurtsev
718c9cbe3a mistral[patch]: Support both model and model_name (#20557) 2024-04-18 16:12:33 -04:00
Eugene Yurtsev
e3bd521654 docs: Remove example vsdx data (#20620)
VSDX data contains EMF files. Some of these apparently can contain
exploits with some Adobe tools.

This is likely a false positive from antivirus software, but we
can remove it nonetheless.
2024-04-18 16:10:40 -04:00
Dhruv Chawla
c0548eb632 docs: Update uptrain.ipynb to show outputs (#20551)
Hey @eyurtsev, I noticed that the notebook isn't displaying the outputs
properly. I've gone ahead and rerun the cells to ensure that readers can
easily understand the functionality without having to run the code
themselves.
2024-04-18 16:10:23 -04:00
Leonid Ganeline
95dc90609e experimental[patch]: prompts import fix (#20534)
Replaced `from langchain.prompts` with `from langchain_core.prompts`
where it is appropriate.
Most of the changes go to `langchain_experimental`
Similar to #20348
2024-04-18 16:09:11 -04:00
Massimiliano Pronesti
2542a09abc community[patch]: AzureSearch incorrectly converted to retriever (#20601)
Closes #20600.

Please see the issue for more details.
2024-04-18 16:06:47 -04:00
Leonid Ganeline
520ef24fb9 docs: import update (#20610)
Updated imports
2024-04-18 16:05:17 -04:00
Christophe Bornet
8f0b5687a3 community[minor]: Add hybrid search to Cassandra VectorStore (#20286)
Only supported by Astra DB at the moment.
**Twitter handle:** cbornet_
2024-04-18 15:58:43 -04:00
Christophe Bornet
d2d01370bc community[minor]: Add async methods to CassandraLoader (#20609)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-18 19:45:20 +00:00
Eugene Yurtsev
8c29b7bf35 mistralai[patch]: Use public attribute for eventsource.response (#20580)
Minor change, use the public attribute instead of the protected one.
2024-04-18 14:12:12 -04:00
Erick Friis
66fb0b1f35 core: fix fireworks mapping (#20613) 2024-04-18 18:08:40 +00:00
balloonio
e786da7774 community[patch]: Invoke callback prior to yielding token fix [HuggingFaceTextGenInference] (#20426)
- **Description:** Invoke callback prior to yielding token in the stream
method of HuggingFaceTextGenInference
- **Issue:** https://github.com/langchain-ai/langchain/issues/16913
- **Dependencies:** None
- **Twitter handle:** @bolun_zhang


---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-18 14:25:20 +00:00
Ethan Yang
2d6d796040 community: Add save_model function for openvino reranker and embedding (#19896) 2024-04-18 10:20:33 -04:00
zR
9c1d7f2405 update zhipuai notebook (#20595)
Fix timeout issue and fix the ZhipuAI use case notebook.

2024-04-18 10:12:12 -04:00
MajorDouble
9c175bc618 Update README.md -- broken hyperlink (#20422)
fixed broken `LangGraph` hyperlink

2024-04-18 14:07:52 +00:00
Ikko Eltociear Ashimine
7a884eb416 Update RAPTOR.ipynb (#20586)
Langauge -> Language
2024-04-18 09:47:17 -04:00
Justsosostar
697d98cac9 fix typo in langchain/docs/docs/integrations/tools/nuclia.ipynb (#20591)
2024-04-18 13:46:45 +00:00
ccurme
c897264b9b community: (milvus) check for num_shards (#20603)
@rgupta2508 I believe this change is necessary following
https://github.com/langchain-ai/langchain/pull/20318 because of how
Milvus handles defaults:


59bf5e811a/pymilvus/client/prepare.py (L82-L85)
```python
num_shards = kwargs[next(iter(same_key))]
if not isinstance(num_shards, int):
    msg = f"invalid num_shards type, got {type(num_shards)}, expected int"
    raise ParamError(message=msg)
req.shards_num = num_shards
```
this way lets Milvus control the default value (instead of maintaining a
separate default in Langchain).

Let me know if I've got this wrong or you feel it's unnecessary. Thanks.
2024-04-18 09:44:56 -04:00
Rohit Gupta
25c4c24e89 Support to create shards_num in milvus vectorstores (#20318)
Add support for specifying the number of shards for the collection to create
in Milvus vector stores.

2024-04-18 08:58:00 -04:00
aditya thomas
8bad536c6c docs[callbacks]: update to the FileCallbackHandler documentation (#20496)
**Description:** Update to the `FileCallbackHandler` documentation
**Issue:** #20493 
**Dependencies:** None
2024-04-17 22:32:21 -04:00
aditya thomas
cea379e7c7 community, core[callbacks]: move FileCallbackHandler from community to core (#20495)
**Description:** Move `FileCallbackHandler` from community to core
**Issue:** #20493 
**Dependencies:** None

(imo) `FileCallbackHandler` is a built-in LangChain callback handler
like `StdOutCallbackHandler` and should properly be in in core.
2024-04-17 22:29:30 -04:00
Erick Friis
084bedd16e docs: nits (#20577) 2024-04-18 00:20:44 +00:00
Erick Friis
e7e94b37f1 upstage: fix core dep (#20576) 2024-04-17 16:33:09 -07:00
Erick Friis
e395115807 docs: aws docs updates (#20571) 2024-04-17 23:32:00 +00:00
Erick Friis
f09bd0b75b upstage: init package (#20574)
Co-authored-by: Sean Cho <sean@upstage.ai>
Co-authored-by: JuHyung-Son <sonju0427@gmail.com>
2024-04-17 23:25:36 +00:00
Marco Perini
11c9ed3362 community[patch]: exposing headless flag parameter to AsyncChromiumLoader class (#20424)
- **Description:** added the headless parameter as an optional argument to
the langchain_community.document_loaders AsyncChromiumLoader class (see the
sketch below)
  - **Dependencies:** None
  - **Twitter handle:** @perinim_98
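A minimal usage sketch of the new flag (parameter name taken from the description above; running it requires `playwright` to be installed):

```python
from langchain_community.document_loaders import AsyncChromiumLoader

# headless=False launches a visible Chromium window instead of a headless one.
loader = AsyncChromiumLoader(["https://python.langchain.com"], headless=False)
docs = loader.load()
print(docs[0].page_content[:100])
```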


---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-17 16:00:28 -07:00
Bagatur
54e9271504 anthropic[patch]: fix msg mutation (#20572) 2024-04-17 15:47:19 -07:00
Nuno Campos
719da8746e core: fix attributeerror in runnablelambda.deps (#20569)
- would happen when the user's code tries to access an attribute that doesn't
exist; we prefer to let this crash in the user's code rather than here
- also catch more cases where a runnable is invoked/streamed inside a
lambda; before, we weren't seeing these as deps
2024-04-17 15:38:39 -07:00
Jacob Lee
8b09e81496 Lock low level dep to fix Vercel docs build (#20573)
@baskaryan @efriis 

TODO: Figure out why our lockfile isn't being respected here
2024-04-17 15:21:28 -07:00
Christophe Bornet
a22da4315b community[patch]: Replace function in CassandraVectorStore with simpler lambda (#20323) 2024-04-17 17:13:13 -04:00
Christophe Bornet
75733c5cc1 community[minor]: Improve CassandraVectorStore from_texts (#20284) 2024-04-17 17:12:28 -04:00
Tomer Cagan
463160c3f6 community: fix DirectoryLoader progress bar (#19821)
**Description:** currently, the `DirectoryLoader` progress-bar maximum value is based on an incorrect number of files to process

In langchain_community/document_loaders/directory.py:127:

```python
        paths = p.rglob(self.glob) if self.recursive else p.glob(self.glob)
        items = [
            path
            for path in paths
            if not (self.exclude and any(path.match(glob) for glob in self.exclude))
        ]
```

`paths` returns both files and directories. `items` is later used to determine the maximum value of the progress-bar which gives an incorrect progress indication.
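A sketch of the corrected filtering, assuming the fix simply restricts the count to actual files:

```python
from pathlib import Path

p = Path("./docs")
glob, recursive, exclude = "**/[!.]*", True, []

paths = p.rglob(glob) if recursive else p.glob(glob)
items = [
    path
    for path in paths
    # rglob/glob also yield directories, so keep only files when counting
    if path.is_file()
    and not (exclude and any(path.match(g) for g in exclude))
]
print(len(items))  # correct maximum for the progress bar
```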
2024-04-17 21:12:16 +00:00
Bagatur
984e7e36c2 anthropic[patch]: Release 0.1.10 (#20568) 2024-04-17 14:05:42 -07:00
Pengcheng Liu
ecd19a9e58 community[patch]: Add function call support in Tongyi chat model. (#20119)
- **Description:** This PR adds function calling support to the Tongyi chat
model.
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-17 20:42:23 +00:00
kaijietti
80679ab906 zep[patch]: implement add_messages and aadd_messages (#20099)
This PR implements `add_messages` and `aadd_messages` to avoid
unnecessary round-trips.
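A hedged sketch of the batched write (constructor arguments are illustrative; check the Zep docs for the exact connection parameters):

```python
from langchain_community.chat_message_histories import ZepChatMessageHistory
from langchain_core.messages import AIMessage, HumanMessage

history = ZepChatMessageHistory(session_id="session-123", url="http://localhost:8000")

# One call instead of one round-trip per message.
history.add_messages(
    [
        HumanMessage(content="hi!"),
        AIMessage(content="hello, how can I help?"),
    ]
)
```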
2024-04-17 13:40:24 -07:00
Guangdong Liu
55dd349472 docs: Get rid of ZeroShotAgent and use create_react_agent instead (#20154)
- **Issue:** close #20122
 - @baskaryan, @eyurtsev.
2024-04-17 13:35:14 -07:00
Guangdong Liu
1e3b07aae2 docs: Get rid of ZeroShotAgent and use create_react_agent instead (#20155)
- **Issue:** #20122
- @baskaryan,@eyurtsev
2024-04-17 13:34:57 -07:00
ccurme
2238490069 mistral, openai: allow anthropic-style messages in message histories (#20565) 2024-04-17 15:55:45 -04:00
Eugene Yurtsev
7a7851aa06 anthropic[patch]: Handle empty text block (#20566)
Handle empty text block
2024-04-17 15:37:04 -04:00
Bagatur
7917e2c418 core[patch]: Release 0.1.44 (#20564) 2024-04-17 18:34:44 +00:00
ccurme
4a17951900 mistral: read tool calls from AIMessage (#20554)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-17 13:38:24 -04:00
Eugene Yurtsev
f257909699 mistralai[patch]: Surface http errors (#20555)
Do not swallow errors when streaming with httpx.

Update affected code if this PR gets merged to httpx:
https://github.com/florimondmanca/httpx-sse/pull/25/files
2024-04-17 10:47:56 -04:00
Sevin F. Varoglu
3f156e0ece community[minor]: add ChatOctoAI (#20059)
This PR adds ChatOctoAI, a chat model integration for OctoAI.
2024-04-17 03:20:56 -07:00
Eun Hye Kim
b34f1086fe community[patch]: Add streaming logic in ChatHuggingFace (#18784)
- Add functions (_stream, _astream)
- Connect to _generate and _agenerate

- **Issue:** #18782
- **Dependencies:** none
- **Twitter handle:** @lunara_x
2024-04-16 19:17:03 -07:00
Bagatur
c05c379b26 docs: add structred output to feat table (#20539) 2024-04-16 19:14:26 -07:00
pjb157
479be3cc91 community[minor]: Unify Titan Takeoff Integrations and Adding Embedding Support (#18775)
**Community: Unify Titan Takeoff Integrations and Adding Embedding
Support**

 **Description:** 
Titan Takeoff is no longer reflected by either of the integrations in the
community folder. The two integrations (TitanTakeoffPro and
TitanTakeoff) were causing confusion with clients, so the code has been moved
into one place and an alias created for backwards compatibility. The
Takeoff Client python package was added to do the bulk of the request work,
because that package is actively updated with new versions of Takeoff, so
this integration will be far more robust and will not degrade as badly over
time.

**Issue:**
Fixes bugs in the old Titan integrations and unifies the code, with added
unit test coverage to avoid future problems.

**Dependencies:**
Added optional dependency takeoff-client. All imports, including the Titan
Takeoff classes, still work without the dependency but will fail on
initialisation if takeoff-client is not pip installed.

**Twitter**
@MeryemArik9

Thanks all :)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-17 01:43:35 +00:00
Rahul Triptahi
2cbfc94bcb community[patch]: Add support for authorized identities in PebbloSafeLoader. (#20055)
Description: Add support for authorized identities in PebbloSafeLoader.
Now with this change, PebbloSafeLoader will extract
authorized_identities from metadata and send it to pebblo server
Dependencies: None
Documentation: None

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-16 18:34:06 -07:00
Rahul Triptahi
475892ca0e docs: Add Documentation to enable authorized access identities in GoogleDriveLoader. (#20065)
Description: Document update.

GoogleDriveLoader: Added documentation for `load_auth` a new argument in
document_loaders/GoogleDriveLoader.

Dependencies: None
Documentation:
https://python.langchain.com/docs/integrations/document_loaders/google_drive/

Associated PR: https://github.com/langchain-ai/langchain-google/pull/110

Twitter handle: @rahul_tripathi2

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-16 18:33:10 -07:00
Guangdong Liu
b78ede2f96 community[patch]: standardize init args (#20166)
Related to https://github.com/langchain-ai/langchain/issues/20085

@baskaryan
2024-04-16 18:30:26 -07:00
Guangdong Liu
3729bec1a2 community[patch]: standardize init args (#20210)
Related to https://github.com/langchain-ai/langchain/issues/20085

@baskaryan
2024-04-16 18:29:57 -07:00
sdan
a7c5e41443 community[minor]: Added VLite as VectorStore (#20245)
Support [VLite](https://github.com/sdan/vlite) as a new VectorStore
type.

**Description**:
vlite is a simple and blazing fast vector database (vdb) made with numpy.
It abstracts a lot of the functionality around using a vdb in the
retrieval augmented generation (RAG) pipeline, such as embeddings
generation, chunking, and file processing, while still giving developers
the flexibility to change how they're made/stored.

**Before submitting**:
Added tests
[here](c09c2ebd5c/libs/community/tests/integration_tests/vectorstores/test_vlite.py)
Added ipython notebook
[here](c09c2ebd5c/docs/docs/integrations/vectorstores/vlite.ipynb)
Added simple docs on how to use
[here](c09c2ebd5c/docs/docs/integrations/providers/vlite.mdx)

**Profiles**

Maintainers: @sdan
Twitter handles: [@sdand](https://x.com/sdand)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-17 01:24:38 +00:00
Hyeongchan Kim
7824291252 community[patch]: Fix not to cast to str type when file_path is None (#20057)
Since `langchain_community 0.0.30`, there's a bug where a file-like object
cannot be sent via the `file` parameter instead of a file path, because
`file_path` is cast to str type even if `file_path` is None.

When calling `partition_via_api()`, exactly one of `filename` and `file`
must be specified, per the error message below.

However, since `langchain_community 0.0.30`, `file_path` is cast to `str`
even when `file_path` is None in `get_elements_from_api()`, which raises
an error at `exactly_one(filename=filename, file=file)`.

here's an error message
```
---> 51     exactly_one(filename=filename, file=file)
     53     if metadata_filename and file_filename:
     54         raise ValueError(
     55             "Only one of metadata_filename and file_filename is specified. "
     56             "metadata_filename is preferred. file_filename is marked for deprecation.",
     57         )

File /opt/homebrew/lib/python3.11/site-packages/unstructured/partition/common.py:441, in exactly_one(**kwargs)
    439 else:
    440     message = f"{names[0]} must be specified."
--> 441 raise ValueError(message)

ValueError: Exactly one of filename and file must be specified.
```

So, I simply made a change to cast to str type only when `file_path` is
not None.

I use `UnstructuredAPIFileLoader` like below.

```
from langchain_community.document_loaders.unstructured import UnstructuredAPIFileLoader

documents: list = UnstructuredAPIFileLoader(
    file_path=None,
    file=file,  # file-like object, io.BytesIO type
    mode='elements',
    url='http://127.0.0.1:8000/general/v0/general',
    content_type='application/pdf',
    metadata_filename='asdf.pdf',
).load_and_split()
```
2024-04-16 18:06:21 -07:00
Prashanth Rao
295b9b704b community[patch]: Improve Kuzu Cypher generation prompt (#20481)
- **Description:** Improves the Kùzu Cypher generation prompt to be more
robust to open-source LLM outputs
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** @kuzudb

No new tests (non-breaking change).
2024-04-16 18:01:36 -07:00
MacanPN
bce69ae43d community[patch]: Changes to base_o365 and sharepoint document loaders (#20373)
## Description:
The PR introduces 3 changes:
1. added `recursive` property to `O365BaseLoader`. (To keep the behavior
unchanged, by default is set to `False`). When `recursive=True`,
`_load_from_folder()` also recursively loads all nested folders.
2. added `folder_id` to SharePointLoader (similar to [this
PR](https://github.com/langchain-ai/langchain/pull/10780)). This
provides an alternative to `folder_path`, which doesn't seem to work
reliably.
3. when none of `document_ids`, `folder_id`, `folder_path` is provided,
the loader fetches documents from the root folder. Combined with
`recursive=True`, this provides an easy way of loading all compatible
documents from SharePoint (see the sketch below).
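A usage sketch of the new options (the ID values are placeholders; `folder_id` and `recursive` are the parameters introduced by this PR):

```python
from langchain_community.document_loaders.sharepoint import SharePointLoader

loader = SharePointLoader(
    document_library_id="YOUR_LIBRARY_ID",  # placeholder
    folder_id="YOUR_FOLDER_ID",             # alternative to folder_path
    recursive=True,                         # also load nested folders
)
docs = loader.load()
```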

The PR contains the same logic as [this stale
PR](https://github.com/langchain-ai/langchain/pull/10780) by
@WaleedAlfaris. I'd like to ask his blessing for moving forward with
this one.

## Issue:
- As described in https://github.com/langchain-ai/langchain/issues/19938
and https://github.com/langchain-ai/langchain/pull/10780 the sharepoint
loader often does not seem to work with folder_path.
- Recursive loading of subfolders is a missing functionality

## Dependencies: None

Twitter handle:
@martintriska1 @WRhetoric

This is my first PR here, please be gentle :-)
Please review @baskaryan

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-17 00:36:15 +00:00
Sevin F. Varoglu
54d388d898 community[patch]: update OctoAI endpoint to subclass BaseOpenAI (#19757)
This PR updates OctoAIEndpoint LLM to subclass BaseOpenAI as OctoAI is
an OpenAI-compatible service. The documentation and tests have also been
updated.
2024-04-16 17:32:20 -07:00
Erick Friis
0c95ddbcd8 docs: add snowflake provider page (#20538) 2024-04-17 00:31:27 +00:00
Benito Geordie
57b226532d community[minor]: Added integrations for ThirdAI's NeuralDB as a Retriever (#17334)
**Description:** Adds ThirdAI NeuralDB retriever integration. NeuralDB
is a CPU-friendly and fine-tunable text retrieval engine. We previously
added a vector store integration but we think that it will be easier for
our customers if they can also find us under under
langchain-community/retrievers.

---------

Co-authored-by: kartikTAI <129414343+kartikTAI@users.noreply.github.com>
Co-authored-by: Kartik Sarangmath <kartik@thirdai.com>
2024-04-16 16:36:55 -07:00
WeichenXu
e9fc87aab1 community[patch]: Make ChatDatabricks model support streaming response (#19912)
**Description:** Make ChatDatabricks model support streaming
**Issue:** N/A
**Dependencies:** MLflow nightly build version (we will release next
MLflow version soon)
**Twitter handle:** N/A

Manually test:

(Before testing, please install `pip install
git+https://github.com/mlflow/mlflow.git`)

```python
# Test Databricks Foundation LLM model
from langchain.chat_models import ChatDatabricks

chat_model = ChatDatabricks(
    endpoint="databricks-llama-2-70b-chat",
    max_tokens=500
)
from langchain_core.messages import AIMessageChunk

for chunk in chat_model.stream("What is mlflow?"):
  print(chunk.content, end="|")
```


---------

Signed-off-by: Weichen Xu <weichen.xu@databricks.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-16 23:34:49 +00:00
ccurme
a892f985d3 standardized-tests[patch]: test tool call messages (#20519)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-16 23:25:50 +00:00
Erick Friis
e7fe5f7d3f anthropic[patch]: serialization in partner package (#18828) 2024-04-16 16:05:58 -07:00
Bagatur
f74d5d642e anthropic[patch]: bump to core 0.1.43 (#20537) 2024-04-16 22:47:07 +00:00
Bagatur
96d8769eae anthropic[patch]: release 0.1.9, use tool calls if content is empty (#20535) 2024-04-16 15:27:29 -07:00
Erick Friis
6adca37eb7 core: default chat/llm _identifying_params to lc_attributes (#20232) 2024-04-16 14:55:47 -07:00
ccurme
22da9f5f3f update scheduled tests (#20526)
repurpose scheduled tests to test over provider packages
2024-04-16 16:49:46 -04:00
Nuno Campos
806a54908c Runnable graph viz improvements (#20529)
- Add conditional: bool property to json representation of the graphs
- Add option to generate mermaid graph stripped of styles (useful as a
text representation of graph)
2024-04-16 20:17:47 +00:00
Nuno Campos
f3aa26d6bf Fix getattr in runnable binding for cases where config is passed in as arg too (#20528)
2024-04-16 13:10:29 -07:00
Dhruv Chawla
d6d559d50d community[minor]: add UpTrainCallbackHandler (#19956)
- **Description:** 
This PR adds a callback handler for UpTrain. It performs evaluations in
the RAG pipeline to check the quality of retrieved documents, generated
queries and responses.

- **Dependencies:** 
    - The UpTrainCallbackHandler requires the uptrain package

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-04-16 19:32:03 +00:00
Bagatur
07f23bd4ff docs: response metadata (#20527) 2024-04-16 12:17:27 -07:00
Leonid Ganeline
45d045b2c5 core[minor], langchain[patch]: tools dependencies refactoring (#18759)
The `langchain.tools`
[namespace](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.tools)
can be completely eliminated by moving one class and 3 functions into
`core`. It makes sense since the class and functions are very core.
2024-04-16 14:15:09 -04:00
Erick Friis
77eba10f47 standard-tests: fix default fixtures (#20520) 2024-04-16 16:12:36 +00:00
Ravindu Somawansa
5acc7ba622 community[minor]: Add glue catalog loader (#20220)
Add Glue Catalog loader
2024-04-16 11:39:23 -04:00
Dawson Bauer
aab075345e core[patch]: Fix imports defined in messages sub-package (#20500)
2024-04-16 14:19:51 +00:00
Fayfox
9fd36efdb5 anthropic[patch]: env ANTHROPIC_API_URL not work (#20507)
The environment variable ANTHROPIC_API_URL will not work if anthropic_api_url
has a default value.

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-04-16 10:16:51 -04:00
Martín Gotelli Ferenaz
b48add4353 community[patch]: Fix pgvector deprecated filter clause usage with OR and AND conditions (#20446)
**Description**: Support filter by OR and AND for deprecated PGVector
version
**Issue**: #20445 
**Dependencies**: N/A
**Twitter** handle: @martinferenaz
2024-04-16 14:08:07 +00:00
Eugene Yurtsev
c50099161b community[patch]: Use uuid4 not uuid1 (#20487)
Using UUID1 is incorrect since it's time-dependent, which makes it easy
to generate the exact same UUID.
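A tiny sketch of the preferred ID generation (illustrative only):

```python
from uuid import uuid4

# uuid4 is random, so IDs generated at the same instant do not collide the
# way time-based uuid1 values can.
ids = [str(uuid4()) for _ in range(3)]
print(ids)
```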
2024-04-16 09:40:44 -04:00
Bagatur
f7667c614b docs: update tool use case (#20404) 2024-04-16 04:27:27 +00:00
Erick Friis
86cf1d3ee1 community: release 0.0.33 (#20490) 2024-04-16 00:30:05 +00:00
Erick Friis
90184255f8 core: release 0.1.43 (#20489) 2024-04-15 22:48:34 +00:00
Erick Friis
7997f3b7f8 core: forward config params to default (#20402)
nuno's fault not mine

---------

Co-authored-by: Nuno Campos <nuno@boringbits.io>
Co-authored-by: Nuno Campos <nuno@langchain.dev>
2024-04-15 15:42:39 -07:00
Nuno Campos
97b2191e99 core: Add concept of conditional edge to graph rendering (#20480)
- implement for mermaid, graphviz and ascii
- this is to be used in langgraph
2024-04-15 13:49:06 -07:00
Averi Kitsch
30b00090ef docs: Add Google Firestore Vectorstore doc (#20078)
- **Description:**Add Google Firestore Vector store docs
    - **Issue:** NA
    - **Dependencies:** NA

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-15 20:09:32 +00:00
Leonid Kuligin
cc3c343673 docs: changed model's name in google-vertex-ai integration to a publicly available model (#20482)
2024-04-15 15:18:27 -04:00
Leonid Ganeline
7ea80bcb22 docs: tutorials update (#20483)
Added the `freeCodeCamp` tutorials link
2024-04-15 15:17:32 -04:00
Ángel Igareta
60c7a17781 Remove logic to exclude intermediate nodes from rendering time (#20459)
Description: For simplicity, migrate the logic of excluding intermediate
nodes in the .get_graph() of the langgraph package
(https://github.com/langchain-ai/langgraph/pull/310) so that it runs at graph
creation time instead of graph rendering time.

Note: #20381 needs to be approved first

---------

Co-authored-by: Angel Igareta <angel.igareta@klarna.com>
Co-authored-by: Nuno Campos <nuno@langchain.dev>
Co-authored-by: Nuno Campos <nuno@boringbits.io>
2024-04-15 16:40:51 +00:00
Mohammed Noumaan Ahamed
4dd05791a2 docs: quickstart retrieval chain for Cohere(API) (#20475)

Description: fixes LangChainDeprecationWarning: The class
`langchain_community.embeddings.cohere.CohereEmbeddings` was deprecated
in langchain-community 0.0.30 and will be removed in 0.2.0. An updated
version of the class exists in the langchain-cohere package and should
be used instead. To use it run `pip install -U langchain-cohere` and
import as `from langchain_cohere import CohereEmbeddings`.

![Screenshot 2024-04-15
200948](https://github.com/langchain-ai/langchain/assets/93511919/085b967d-a6fd-42c6-9404-faab8c5630ec)



Dependencies : langchain_cohere

Twitter handle: @Mo_Noumaan
2024-04-15 11:28:39 -04:00
Ángel Igareta
d55a365c6c Fix CDN URL in mermaid graph renderer (#20381)
Description of features on mermaid graph renderer:
- Fixing CDN to use official Mermaid JS CDN:
https://www.jsdelivr.com/package/npm/mermaid?tab=files
- Add device_scale_factor to allow increasing quality of resulting PNG.
2024-04-15 08:01:35 -07:00
Eugene Yurtsev
3cbc4693f5 docs: Add integration doc for postgres vectorstore (#20473)
Adds a postgres vectorstore via langchain-postgres.
2024-04-15 14:20:27 +00:00
Leonid Kuligin
676c68d318 community[patch]: deprecating remaining google_community integrations (#20471)
Deprecating remaining google community integrations
2024-04-15 09:57:12 -04:00
balloonio
b66a4f48fa community[patch]: Invoke callback prior to yielding token fix [DeepInfra] (#20427)
- **Description:** Invoke callback prior to yielding token in the stream
method of DeepInfra
- **Issue:** https://github.com/langchain-ai/langchain/issues/16913
- **Dependencies:** None
- **Twitter handle:** @bolun_zhang

2024-04-14 14:32:52 -04:00
Juan Carlos José Camacho
450c458f8f community[minor]: Add Dataherald tool (#19680)
**Description:** Integrate the [dataherald](https://www.dataherald.com)
tool. It is a natural-language-to-SQL tool.
**Dependencies:** Install dataherald sdk to use it,
```
pip install dataherald
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Christophe Bornet <cbornet@hotmail.com>
2024-04-13 23:27:16 +00:00
Alexander Smirnov
ece008f117 docs: Refine RunnablePassthrough docstring (#19812)
Description: This update refines the documentation for
`RunnablePassthrough` by removing an unnecessary import and correcting a
minor syntactical error in the example provided. This change enhances
the clarity and correctness of the documentation, ensuring that users
have a more accurate guide to follow.

Issue: N/A

Dependencies: None

This PR focuses solely on documentation improvements, specifically
targeting the `RunnablePassthrough` class within the `langchain_core`
module. By clarifying the example provided in the docstring, users are
offered a more straightforward and error-free guide to utilizing the
`RunnablePassthrough` class effectively.

As this is a documentation update, it does not include changes that
require new integrations, tests, or modifications to dependencies. It
adheres to the guidelines of minimal package interference and backward
compatibility, ensuring that the overall integrity and functionality of
the LangChain package remain unaffected.

Thank you for considering this documentation refinement for inclusion in
the LangChain project.
2024-04-13 16:23:32 -07:00
Egor Krasheninnikov
c8391d4ff1 community[patch]: Fix YandexGPT embeddings (#19720)
Fix of YandexGPT embeddings. 

The current version uses a single `model_name` for queries and
documents, essentially making the `embed_documents` and `embed_query`
methods the same. Yandex has a different endpoint (`model_uri`) for
encoding documents, see
[this](https://yandex.cloud/en/docs/yandexgpt/concepts/embeddings). The
bug may impact retrievers built with `YandexGPTEmbeddings` (for instance
FAISS database as retriever) since they use both `embed_documents` and
`embed_query`.

A simple snippet to test the behaviour:
```python
from langchain_community.embeddings.yandex import YandexGPTEmbeddings
embeddings = YandexGPTEmbeddings()
q_emb = embeddings.embed_query('hello world')
doc_emb = embeddings.embed_documents(['hello world', 'hello world'])
q_emb == doc_emb[0]
```
The response is `True` with the current version and `False` with the
changes I made.


Twitter: @egor_krash

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-13 16:23:01 -07:00
Guangdong Liu
4be7ca7b4c community[patch]:sparkllm standardize init args (#20194)
Related to https://github.com/langchain-ai/langchain/issues/20085
@baskaryan
2024-04-13 16:03:19 -07:00
Rohit Agarwal
7d7a08e458 docs: Update Portkey provider integration (#20412)
**Description:** Updates the documentation for Portkey and Langchain.
Also updates the notebook. The current documentation is fairly old and
is non-functional.
**Twitter handle:** @portkeyai

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-13 23:01:48 +00:00
Yuki Oshima
0758da8940 community[patch]: Set default value for _ListSQLDatabaseToolInput tool_input (#20409)
**Description:**

`_ListSQLDatabaseToolInput` raises an error if the model returns `{}`.
For example, gpt-4-turbo returns `{}` with SQL Agent initialized by
`create_sql_agent`.

So I set a default value of `""` for the `_ListSQLDatabaseToolInput` tool_input.

This is actually a gpt-4-turbo issue, not a LangChain issue, but I
thought it would be helpful to set a default value `""`.

This problem is discussed in detail in the following Issue.

**Issue:** https://github.com/langchain-ai/langchain/issues/20405

**Dependencies:** none

Sorry, I did not add or change the test code, as tests for this
component did not exist.

However, I have tested the following code based on the [SQL Agent
Document](https://python.langchain.com/docs/use_cases/sql/agents/), to
make sure it works.

```
from langchain_community.agent_toolkits.sql.base import create_sql_agent
from langchain_community.utilities.sql_database import SQLDatabase
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
llm = ChatOpenAI(model="gpt-4-turbo", temperature=0)
agent_executor = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)
result = agent_executor.invoke("List the total sales per country. Which country's customers spent the most?")
print(result["output"])
```
2024-04-13 15:58:47 -07:00
Kenneth Choe
b507cd222b docs: changed the link to more helpful source (#20411)
docs: changed a link to a better source

[Previous
link](https://www.philschmid.de/custom-inference-huggingface-sagemaker)
is about how to upload embeddings model.
[New
link](https://huggingface.co/blog/kchoe/deploy-any-huggingface-model-to-sagemaker)
is about how to upload cross encoder model, which directly addresses
what is needed here. For full disclosure, I wrote this article and the
sample `inference.py` is the result of this new article.

Co-authored-by: Kenny Choe <kchoe@amazon.com>
2024-04-13 15:54:33 -07:00
saberuster
160bcaeb93 text-splitters[minor]: Add lua code splitting (#20421)
- **Description:** Complete the support for Lua code in
langchain.text_splitter module.
- **Dependencies:** No
- **Twitter handle:** @saberuster

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-13 22:42:51 +00:00
ccurme
4b6b0a87b6 groq[patch]: Make stream robust to ToolMessage (#20417)
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_groq import ChatGroq


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)

model = ChatGroq(model_name="mixtral-8x7b-32768", temperature=0)

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]


agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```
```
> Entering new AgentExecutor chain...

Invoking: `magic_function` with `{'input': 3}`


5The value of magic\_function(3) is 5.

> Finished chain.
{'input': 'what is the value of magic_function(3)?',
 'output': 'The value of magic\\_function(3) is 5.'}
```
2024-04-13 15:40:55 -07:00
Leonid Ganeline
6dc4f592ba docs: tutorials update (#20401)
Added 3 new `LangChain.ai` playlists
2024-04-12 21:56:14 -04:00
ccurme
38faa74c23 community[patch]: update use of deprecated llm methods (#20393)
.predict and .predict_messages for BaseLanguageModel and BaseChatModel
2024-04-12 17:28:23 -04:00
Corey Zumar
3a068b26f3 community[patch]: Databricks - fix scope of dangerous deserialization error in Databricks LLM connector (#20368)
fix scope of dangerous deserialization error in Databricks LLM connector

---------

Signed-off-by: dbczumar <corey.zumar@databricks.com>
2024-04-12 17:27:26 -04:00
Bagatur
f1248f8d9a core[patch]: configurable init params (#20070)
Proposed fix for #20061. Needs testing.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-12 21:18:43 +00:00
Eugene Yurtsev
4808441d29 Docs: Add guide for implementing custom retriever (#20350)
Add longer guide for implementing custom retriever.
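A minimal sketch of the pattern the guide covers (subclass `BaseRetriever` and implement `_get_relevant_documents`; the toy retriever below is illustrative only):

```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToyRetriever(BaseRetriever):
    """Toy retriever returning documents that contain the query string."""

    documents: List[Document]
    k: int = 3

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        matches = [
            doc for doc in self.documents if query.lower() in doc.page_content.lower()
        ]
        return matches[: self.k]


retriever = ToyRetriever(documents=[Document(page_content="hello world")])
print(retriever.invoke("hello"))
```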

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-12 17:18:35 -04:00
aditya thomas
4f75b230ed partner[ai21]: masking of the api key for ai21 models (#20257)
**Description:** Masking of the API key for AI21 models
**Issue:** Fixes #12165 for AI21
**Dependencies:** None

Note: This fix came in originally through #12418 but was possibly missed
in the refactor to the AI21 partner package


---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-12 20:19:31 +00:00
Leonid Ganeline
e512d3c6a6 langchain: callbacks imports fix (#20348)
Replaced all `from langchain.callbacks` into `from
langchain_core.callbacks` .
Changes in the `langchain` and `langchain_experimental`

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-12 20:13:14 +00:00
Erick Friis
d83b720c40 templates: readme langsmith not private beta (#20173) 2024-04-12 13:08:10 -07:00
michael
525226fb0b docs: fix extraction/quickstart.ipynb example code (#20397)
- **Description**: The pydantic schema fields are supposed to be
optional, but the use of `...` makes them required. This causes a
`ValidationError` when running the example code. I replaced `...` with
`default=None` to make the fields optional as intended (see the sketch
after this list). I also standardized the format for all fields.
- **Issue**: n/a
- **Dependencies**: none
- **Twitter handle**: https://twitter.com/m_atoms
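A minimal sketch of the corrected pattern (the field names below are illustrative, not necessarily those in the notebook):

```python
from typing import Optional

from langchain_core.pydantic_v1 import BaseModel, Field


class Person(BaseModel):
    """Information about a person (illustrative schema)."""

    # default=None makes each field optional; a bare `...` would make it
    # required and trigger a ValidationError when the model omits a value.
    name: Optional[str] = Field(default=None, description="The name of the person")
    hair_color: Optional[str] = Field(default=None, description="The person's hair color")
    height_in_meters: Optional[str] = Field(default=None, description="Height in meters")
```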

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-12 19:59:32 +00:00
balloonio
e7b1a44c5b community[patch]: Invoke callback prior to yielding token fix for Llamafile (#20365)
- [x] **PR title**: community[patch]: Invoke callback prior to yielding
token fix for Llamafile


- [x] **PR message**: 
- **Description:** Invoke callback prior to yielding token in stream
method in community llamafile.py
    - **Issue:** https://github.com/langchain-ai/langchain/issues/16913
    - **Dependencies:** None
    - **Twitter handle:** @bolun_zhang

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-04-12 19:26:12 +00:00
milind
1b272fa2f4 Update index.mdx (#20395)
spelling error fixed

2024-04-12 19:22:08 +00:00
balloonio
93caa568f9 community[patch]: Invoke callback prior to yielding token fix for HuggingFaceEndpoint (#20366)
- [x] **PR title**: community[patch]: Invoke callback prior to yielding
token fix for HuggingFaceEndpoint


- [x] **PR message**: 
- **Description:** Invoke callback prior to yielding token in stream
method in community HuggingFaceEndpoint
    - **Issue:** https://github.com/langchain-ai/langchain/issues/16913
    - **Dependencies:** None
    - **Twitter handle:** @bolun_zhang

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-12 19:16:34 +00:00
Nicolas
ad04585e30 community[minor]: Firecrawl.dev integration (#20364)
Added the [FireCrawl](https://firecrawl.dev) document loader. Firecrawl
crawls and converts any website into LLM-ready data. It crawls all
accessible subpages and gives you clean markdown for each.

    - **Description:** Adds FireCrawl data loader
    - **Dependencies:** firecrawl-py
    - **Twitter handle:** @mendableai 

ccing contributors: (@ericciarla @nickscamara)
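A minimal usage sketch, assuming the loader is exposed as `FireCrawlLoader` (parameter names may differ slightly):

```python
# Requires: pip install firecrawl-py
from langchain_community.document_loaders import FireCrawlLoader

loader = FireCrawlLoader(
    api_key="YOUR_FIRECRAWL_API_KEY",  # assumption: can also come from an env var
    url="https://firecrawl.dev",
    mode="crawl",  # "crawl" fetches accessible subpages; "scrape" fetches one URL
)
docs = loader.load()
print(docs[0].page_content[:200])
```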

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-12 19:13:48 +00:00
Tomaz Bratanic
a1b105ac00 experimental[patch]: Skip pydantic validation for llm graph transformer and fix JSON response where possible (#19915)
LLMs might sometimes return an invalid response for the LLM graph
transformer. Instead of failing due to pydantic validation, we skip it and
manually check and optionally fix errors where we can, so that more
information gets extracted
2024-04-12 11:29:25 -07:00
Erick Friis
20f5cd7c95 docs: langchain-chroma package (#20394) 2024-04-12 11:17:05 -07:00
Haris Ali
6786fa9186 docs: Adding api documentation link at the end of each output parser class description page. (#20391)
- **Description:** Added cross-links for easy access of api
documentation of each output parser class from it's description page.
  - **Issue:** related to issue #19969

Co-authored-by: Haris Ali <haris.ali@formulatrix.com>
2024-04-12 17:58:18 +00:00
P. Taylor Goetz
9317df7f16 community[patch]: Add "model" attribute to the payload sent to Ollama in ChatOllama (#20354)
Example Ollama API calls:

Request without "model":
```
curl --location 'http://localhost:11434/api/chat' \
--header 'Content-Type: application/json' \
--data '{
  "messages": [
    {
      "role": "user",
      "content": "What is the capitol of PA?"
    }
  ],
  "stream": false
}'
```
Response:
```
{"error":"model is required"}
```

Request with "model":
```
curl --location 'http://localhost:11434/api/chat' \
--header 'Content-Type: application/json' \
--data '{
  "model": "openchat",
  "messages": [
    {
      "role": "user",
      "content": "What is the capitol of PA?"
    }
  ],
  "stream": false
}'
```

Response:
```
{
  "eval_duration" : 733248000,
  "created_at" : "2024-04-11T23:04:08.735766843Z",
  "model" : "openchat",
  "message" : {
    "content" : " The capital city of Pennsylvania is Harrisburg.",
    "role" : "assistant"
  },
  "total_duration" : 3138731168,
  "prompt_eval_count" : 25,
  "load_duration" : 466562959,
  "done" : true,
  "prompt_eval_duration" : 1938495000,
  "eval_count" : 10
}
```
2024-04-12 13:32:53 -04:00
Bagatur
57bb940c17 docs: vertexai tool call update (#20362) 2024-04-12 10:09:54 -07:00
Alex Sherstinsky
fad0962643 community: for Predibase -- enable both Predibase-hosted and HuggingFace-hosted fine-tuned adapter repositories (#20370) 2024-04-12 08:32:00 -07:00
ccurme
5395c409cb docs: add Cohere to ChatModelTabs (#20386) 2024-04-12 10:35:10 -04:00
Eugene Yurtsev
6470b30173 langchain[patch]: Add deprecation warning to extraction chains (#20224)
Add deprecation warnings to extraction chains
2024-04-12 10:24:32 -04:00
Eugene Yurtsev
b65a1d4cfd langchain[patch]: Add another unit test for indexing code (#20387)
Add another unit test for indexing
2024-04-12 10:19:18 -04:00
Erick Friis
29282371db core: bind_tools interface on basechatmodel (#20360) 2024-04-12 01:32:19 +00:00
Erick Friis
e6806a08d4 multiple: standard chat model tests (#20359) 2024-04-11 18:23:13 -07:00
Bagatur
f78564d75c docs: show tool msg in tool call docs (#20358) 2024-04-11 16:42:04 -07:00
Isak Nyberg
bac9fb9a7c community: add gpt-4 pricing in callback (#20292)
Added the pricing for `gpt-4-turbo` and `gpt-4-turbo-2024-04-09` in the
callback method.
related to issue #17173 

https://openai.com/pricing#language-models
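For reference, a minimal sketch of tracking cost for these models via the OpenAI callback (the model name shown is just an example):

```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4-turbo")

with get_openai_callback() as cb:
    llm.invoke("Tell me a joke")
    # cb reflects the newly added gpt-4-turbo pricing.
    print(cb.total_tokens, cb.total_cost)
```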
2024-04-11 18:02:39 -04:00
Ikko Eltociear Ashimine
cb29b42285 docs: Update ibm_watsonx.ipynb (#20329)
avaliable -> available


    - **Description:** fixed typo
2024-04-11 17:59:23 -04:00
Jack Wotherspoon
204a16addc docs: add Cloud SQL for MySQL vector store integration docs (#20278)
Adding docs page for `Google Cloud SQL for MySQL` vector store
integration. This was recently released as part of the Cloud SQL for
MySQL LangChain package
([release](https://github.com/googleapis/langchain-google-cloud-sql-mysql-python/releases/tag/v0.2.0))

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-11 21:57:46 +00:00
Leonid Ganeline
7cf2d2759d community[patch]: docstrings update (#20301)
Added missed docstrings. Format docstings to the consistent form.
2024-04-11 16:23:27 -04:00
Eugene Yurtsev
2900720cd3 core[patch]: Update documentation for base retriever (#20345)
Updating in code documentation for base retriever to direct folks toward
the .invoke and .ainvoke methods + explain how to implement
2024-04-11 16:20:14 -04:00
Bagatur
d2f4153fe6 docs: tool call nits (#20356) 2024-04-11 12:56:36 -07:00
Bagatur
eafd8c580b docs: tool agent nit (#20353) 2024-04-11 19:41:31 +00:00
Erick Friis
ec0273fc92 chroma: release 0.1.0 (#20355) 2024-04-11 12:39:52 -07:00
Bagatur
a889cd14f3 docs: use vertexai in chat model tabs (#20352) 2024-04-11 12:34:19 -07:00
Bagatur
9d302c1b57 docs: update anthropic tool call (#20344) 2024-04-11 11:38:26 -07:00
Erick Friis
da707d0755 chroma: remove relevance score int test (#20346)
deprecating feature in #20302
2024-04-11 11:29:33 -07:00
Eugene Yurtsev
de938a4451 docs: Update chat model providers include package information (#20336)
Include package information
2024-04-11 13:29:42 -04:00
Bagatur
56fe4ab382 docs: update tool-calling table (#20338) 2024-04-11 09:50:20 -07:00
Bagatur
43a98592c1 docs: tool agent nit (#20337) 2024-04-11 09:43:12 -07:00
Bagatur
562b546bcc docs: update chat openai (#20331) 2024-04-11 09:29:46 -07:00
Bagatur
2c4741b5ed docs: add tool-calling agent (#20328) 2024-04-11 09:29:40 -07:00
ccurme
f02e55aaf7 docs: add component page for tool calls (#20282)
Note: includes links to API reference pages for ToolCall and other
objects that currently don't exist (e.g.,
https://api.python.langchain.com/en/latest/messages/langchain_core.messages.tool.ToolCall.html#langchain_core.messages.tool.ToolCall).
2024-04-11 09:29:25 -07:00
Bagatur
6608089030 langchain[patch]: Release 0.1.16 (#20335) 2024-04-11 09:28:37 -07:00
Eugene Yurtsev
0e74fb4ec1 docs: Update list of chat models tool calling providers (#20330)
Will follow up with a few missing providers
2024-04-11 12:22:49 -04:00
Eugene Yurtsev
653489a1a9 docs: Update documentation for custom LLMs (#19972)
Update documentation for customizing LLMs
2024-04-11 12:21:27 -04:00
Bagatur
799714c629 release anthropic, fireworks, openai, groq, mistral (#20333) 2024-04-11 09:19:52 -07:00
Bagatur
e72330aacc core[patch]: Release 0.1.42 (#20332) 2024-04-11 09:10:27 -07:00
ccurme
795c728f71 mistral[patch]: add IDs to tool calls (#20299)
Mistral gives us one ID per response, no individual IDs for tool calls.

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_mistralai import ChatMistralAI


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
model = ChatMistralAI(model="mistral-large-latest", temperature=0)

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-11 11:09:30 -04:00
Eugene Yurtsev
22fd844e8a community[patch]: Add deprecation warnings to postgres implementation (#20222)
Add deprecation warnings to postgres implementation that are in langchain-postgres.
2024-04-11 10:33:22 -04:00
Eugene Yurtsev
f02f708f52 core[patch]: For now remove user warning (#20321)
Remove warning since it creates a lot of noise.
2024-04-11 10:33:01 -04:00
Mayank Solanki
f709ab4cdf docs: added backtick on RunnablePassthrough (#20310)
added backtick on RunnablePassthrough
Issue: #20094
2024-04-11 08:39:10 -04:00
Bagatur
c706689413 openai[patch]: use tool_calls in request (#20272) 2024-04-11 03:55:52 -07:00
Bagatur
e936fba428 langchain[patch]: agents check prompt partial vars (#20303) 2024-04-11 03:55:09 -07:00
Bagatur
cb25fa0d55 core[patch]: fix ChatGeneration.text with content blocks (#20294) 2024-04-10 15:54:06 -07:00
Bagatur
03b247cca1 core[patch]: include tool_calls in ai msg chunk serialization (#20291) 2024-04-10 22:27:40 +00:00
Erick Friis
0fa551c278 chroma: bump rc, keep optional (#20298) 2024-04-10 14:22:56 -07:00
Erick Friis
16f8fff14f chroma: add required fastapi dep to restrict to <1 (#20297) 2024-04-10 14:16:13 -07:00
Erick Friis
991fd82532 chroma: add optional fastapi dep to restrict to <1 (#20295) 2024-04-10 12:49:44 -07:00
killind-dev
f8a54d1d73 chroma: Add chroma partner package (#19292)
**Description:** Adds chroma to the partners package. Tests & code
mirror those in the community package.
**Dependencies:** None
**Twitter handle:** @akiradev0x
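A minimal usage sketch of the new partner package (the embeddings class is just an example):

```python
# pip install langchain-chroma
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vectorstore = Chroma.from_texts(
    texts=["harrison worked at kensho"],
    embedding=OpenAIEmbeddings(),
    collection_name="example",
)
print(vectorstore.similarity_search("where did harrison work?", k=1))
```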

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-10 19:33:45 +00:00
Yuki Watanabe
eef19954f3 core[patch]: fix duplicated kwargs in _load_sql_databse_chain (#19908)
`kwargs` is specified twice in [this
line](3218463f6a/libs/langchain/langchain/chains/loading.py (L386)),
causing runtime error when passing any keyword arguments.
2024-04-10 12:20:28 -07:00
ccurme
39471a9c87 docs: update tool calling cookbook (#20290)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-10 15:06:33 -04:00
Nuno Campos
15271ac832 core: mustache prompt templates (#19980)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-10 11:25:32 -07:00
Leonid Ganeline
4cb5f4c353 community[patch]: import flattening fix (#20110)
This PR should make it easier for linters to do type checking and for IDEs to jump to definition of code.

See #20050 as a template for this PR.
- As a byproduct: added 3 missed `test_imports`.
- Added the missed `SolarChat` to `__init__.py` and added it to the
`test_imports` unit test.
- Added `# type: ignore` to fix linting. It is not clear why linting
errors appear after the above changes.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-10 13:01:19 -04:00
Yuki Oshima
12190ad728 openai[patch]: Fix langchain-openai unknown parameter error with gpt-4-turbo (#20271)
**Description:** 

I fixed langchain-openai unknown parameter error with gpt-4-turbo.

It seems that the behavior of the Chat Completions API implicitly
changed when using the latest gpt-4-turbo model, differing from previous
models. It now appears to reject parameters that are not listed in the
[API
Reference](https://platform.openai.com/docs/api-reference/chat/create).
So I found some errors and fixed them.

**Issue:** https://github.com/langchain-ai/langchain/issues/20264

**Dependencies:** none

**Twitter handle:** https://twitter.com/oshima_123
2024-04-10 09:51:38 -07:00
ccurme
21c1ce0bc1 update agents to use tool call messages (#20074)
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
model = ChatAnthropic(model="claude-3-opus-20240229")

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```
```
> Entering new AgentExecutor chain...

Invoking: `magic_function` with `{'input': 3}`
responded: [{'text': '<thinking>\nThe user has asked for the value of magic_function applied to the input 3. Looking at the available tools, magic_function is the relevant one to use here, as it takes an integer input and returns an integer output.\n\nThe magic_function has one required parameter:\n- input (integer)\n\nThe user has directly provided the value 3 for the input parameter. Since the required parameter is present, we can proceed with calling the function.\n</thinking>', 'type': 'text'}, {'id': 'toolu_01HsTheJPA5mcipuFDBbJ1CW', 'input': {'input': 3}, 'name': 'magic_function', 'type': 'tool_use'}]

5
Therefore, the value of magic_function(3) is 5.

> Finished chain.
{'input': 'what is the value of magic_function(3)?',
 'output': 'Therefore, the value of magic_function(3) is 5.'}
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-10 11:54:51 -04:00
Erick Friis
9eb6f538f0 infra, multiple: rc release versions (#20252) 2024-04-09 17:54:58 -07:00
Bagatur
0d0458d1a7 mistralai[patch]: Pre-release 0.1.2-rc.1 (#20251) 2024-04-10 00:25:38 +00:00
Bagatur
e4046939d0 anthropic[patch]: Pre-release 0.1.8-rc.1 (#20250) 2024-04-10 00:23:10 +00:00
Bagatur
a8eb0f5b1b openai[patch]: pre-release 0.1.3-rc.1 (#20249) 2024-04-10 00:22:08 +00:00
Bagatur
a43b9e4f33 core[patch]: Pre-release 0.1.42-rc.1 (#20248) 2024-04-09 19:10:38 -05:00
Bagatur
9514bc4d67 core[minor], ...: add tool calls message (#18947)
core[minor], langchain[patch], openai[minor], anthropic[minor], fireworks[minor], groq[minor], mistralai[minor]

```python
class ToolCall(TypedDict):
    name: str
    args: Dict[str, Any]
    id: Optional[str]

class InvalidToolCall(TypedDict):
    name: Optional[str]
    args: Optional[str]
    id: Optional[str]
    error: Optional[str]

class ToolCallChunk(TypedDict):
    name: Optional[str]
    args: Optional[str]
    id: Optional[str]
    index: Optional[int]


class AIMessage(BaseMessage):
    ...
    tool_calls: List[ToolCall] = []
    invalid_tool_calls: List[InvalidToolCall] = []
    ...


class AIMessageChunk(AIMessage, BaseMessageChunk):
    ...
    tool_call_chunks: Optional[List[ToolCallChunk]] = None
    ...
```
Important considerations:
- Parsing logic occurs within different providers;
- ~Changing output type is a breaking change for anyone doing explicit
type checking;~
- ~Langsmith rendering will need to be updated:
https://github.com/langchain-ai/langchainplus/pull/3561~
- ~Langserve will need to be updated~
- Adding chunks:
- ~AIMessage + ToolCallsMessage = ToolCallsMessage if either has
non-null .tool_calls.~
- Tool call chunks are appended, merging when having equal values of
`index`.
  - additional_kwargs accumulate the normal way.
- During streaming:
- ~Messages can change types (e.g., from AIMessageChunk to
AIToolCallsMessageChunk)~
- Output parsers parse additional_kwargs (during .invoke they read off
tool calls).

Packages outside of `partners/`:
- https://github.com/langchain-ai/langchain-cohere/pull/7
- https://github.com/langchain-ai/langchain-google/pull/123/files

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-09 18:41:42 -05:00
Erick Friis
00552918ac groq: xfail tool_choice tests (#20247) 2024-04-09 23:29:59 +00:00
Bagatur
2d83505be9 experimental[patch]: Release 0.0.57 (#20243) 2024-04-09 17:08:01 -05:00
Bagatur
f06cb59ab9 groq[patch]: Release 0.1.1 (#20242) 2024-04-09 21:59:58 +00:00
Erick Friis
ad3f1a9e85 docs: fix external repo partner docs (#20238) 2024-04-09 21:58:04 +00:00
Bagatur
0b2f0307d7 openai[patch]: Release 0.1.2 (#20241) 2024-04-09 21:55:19 +00:00
Bagatur
4b84c9b28c anthropic[patch]: Release 0.1.7 (#20240) 2024-04-09 21:53:16 +00:00
Bagatur
74d04a4e80 mistralai[patch]: Release 0.1.1 (#20239) 2024-04-09 21:53:01 +00:00
Bagatur
e5913c8758 langchain[patch]: Release 0.1.15 (#20237) 2024-04-09 21:50:32 +00:00
Bagatur
e39fdfddf1 community[patch]: Release 0.0.32 (#20236) 2024-04-09 21:37:10 +00:00
Bagatur
a07238d14e core[patch]: Release 0.1.41 (#20233) 2024-04-09 21:11:37 +00:00
Chip Davis
806d4ae48f community[patch]: fixed multithreading returning List[List[Documents]] instead of List[Documents] (#20230)
Description: When multithreading is set to True and the DirectoryLoader is
used, there was a bug that caused the return type to be a doubly nested
list. This resulted in other places upstream not being able to utilize the
from_documents method, as it was no longer a `List[Documents]` but a
`List[List[Documents]]`. The change made was to just loop through the
`future.result()` and yield every item.
Issue: #20093
Dependencies: N/A
Twitter handle: N/A
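A minimal sketch of the affected path; after the fix the result is a flat list of Documents even with multithreading enabled:

```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader

loader = DirectoryLoader(
    "./docs", glob="**/*.md", loader_cls=TextLoader, use_multithreading=True
)
docs = loader.load()
# With the fix, docs is List[Document] rather than List[List[Document]],
# so it can be passed straight to e.g. a vectorstore's from_documents().
print(type(docs[0]))
```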
2024-04-09 17:06:37 -04:00
Sholto Armstrong
230376f183 docs: Fix typo in citations example (#20218)
Small typo in the citations notebook "ojbects" changed to "objects"
2024-04-09 21:05:33 +00:00
Eugene Yurtsev
fe35e13083 langchain[patch]: Update unit test (#20228)
This unit test likely fails validation by the openai client.

The newer openai library seems to do more validation, so the existing
test fails since http_client needs to be an httpx instance
2024-04-09 16:44:23 -04:00
Casper da Costa-Luis
b972f394c8 langchain[patch]: make BooleanOutputParser check words not substrings (#20064)
- **Description**: fixes BooleanOutputParser detecting sub-words ("NOW
this is likely (YES)" -> `True`, not `AmbiguousError`)
- **Issue(s)**: fixes #11408 (follow-up to #17810)
- **Dependencies**: None
- **GitHub handle**: @casperdcl

<!-- if unreviewed after a few days, @-mention one of baskaryan, efriis,
eyurtsev, hwchase17 -->

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-09 20:43:31 +00:00
seray
add31f46d0 community[patch]: OpenLLM Async Client Fixes and Timeout Parameter (#20007)
Same changes as this merged
[PR](https://github.com/langchain-ai/langchain/pull/17478)
(https://github.com/langchain-ai/langchain/pull/17478), but for the
async client, as the same issues persist.

- Replaced the 'responses' attribute of OpenLLM's GenerationOutput schema
with 'outputs'.
reference:
66de54eae7/openllm-core/src/openllm_core/_schemas.py (L135)

- Added timeout parameter for the async client.

---------

Co-authored-by: Seray Arslan <seray.arslan@knime.com>
2024-04-09 16:34:56 -04:00
Erick Friis
37a9e23c05 community: switch to falkordb python client (#20229) 2024-04-09 20:19:44 +00:00
Christophe Bornet
f43b48aebc core[minor]: Implement aformat_messages for _StringImageMessagePromptTemplate (#20036) 2024-04-09 15:59:39 -04:00
Christophe Bornet
19001e6cb9 core[minor]: Implement aformat for FewShotPromptWithTemplates (#20039) 2024-04-09 15:58:41 -04:00
Erick Friis
855ba46f80 standard-tests: a standard unit and integration test set (#20182)
just chat models for now
2024-04-09 12:43:00 -07:00
Erick Friis
9b5cae045c together: release 0.1.0 (#20225)
Resolved #20217
2024-04-09 12:23:52 -07:00
Eugene Yurtsev
7cfb643a1c langchain-postgres: Remove remaining README.md file (#20221)
Repository has moved to langchain-ai/langchain-postgres
2024-04-09 14:02:15 -04:00
Eugene Yurtsev
2fa7266ebb Remove postgres package (#20207)
Package moved
2024-04-09 13:51:17 -04:00
Simon Kelly
a682f0d12b openai[patch]: wrap stream code in context manager blocks (#18013)
**Description:**
Use the `Stream` context managers in `ChatOpenAi` `stream` and `astream`
method.

Using the context manager returned by the OpenAI client makes it
possible to terminate the stream early, since the response connection
will be closed when the context manager exits.

**Issue:** #5340
**Twitter handle:** @snopoke
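A minimal sketch of what this enables: breaking out of the stream early now closes the underlying response connection.

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()

for i, chunk in enumerate(llm.stream("Write a long story about a dragon")):
    print(chunk.content, end="", flush=True)
    if i >= 10:
        # Stopping early is now safe: the Stream context manager closes the
        # HTTP response when iteration is abandoned.
        break
```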

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-09 17:40:16 +00:00
Shotaro Sano
6c11c8dac6 docs: Add documentation of ElasticsearchStore.BM25RetrievalStrategy (#20098)
This pull request follows up on
https://github.com/langchain-ai/langchain/pull/19314 and
https://github.com/langchain-ai/langchain-elastic/pull/6, adding
documentation for the `ElasticsearchStore.BM25RetrievalStrategy`.

Like other retrieval strategies, we are now introducing
BM25RetrievalStrategy.

### Background
- The `BM25RetrievalStrategy` has been introduced to `langchain-elastic`
via the pull request
https://github.com/langchain-ai/langchain-elastic/pull/6.
- This PR was initially created in the main `langchain` repository but
was moved to `langchain-elastic` during the review process due to the
migration of the partner package.
- The original PR can be found at
https://github.com/langchain-ai/langchain/pull/19314.
- As
[commented](https://github.com/langchain-ai/langchain/pull/19314#issuecomment-2023202401)
by @joemcelroy, documenting the new retrieval strategy is part of the
requirements for its introduction.

Although the `BM25RetrievalStrategy` has been merged into
`langchain-elastic`, its documentation is still to be maintained in the
main `langchain` repository. Therefore, this pull request adds the
documentation portion of `BM25RetrievalStrategy`.

The content of the documentation remains the same as that included in
the original PR, https://github.com/langchain-ai/langchain/pull/19314.

---------

Co-authored-by: Max Jakob <max.jakob@elastic.co>
2024-04-09 12:37:15 -05:00
David Lee
0394c6e126 community[minor]: add allow_dangerous_requests for OpenAPI toolkits (#19493)
**OpenAPI allow_dangerous_requests**: community: add
allow_dangerous_requests for OpenAPI toolkits

**Description:** a description of the change

Due to BaseRequestsTool changes, we need to pass
allow_dangerous_requests manually.


b617085af0/libs/community/langchain_community/tools/requests/tool.py (L26-L46)

However, the OpenAPI toolkits didn't pass it in the arguments.


b617085af0/libs/community/langchain_community/agent_toolkits/openapi/planner.py (L262-L269)


**Issue:** the issue # it fixes, if applicable

https://github.com/langchain-ai/langchain/issues/19440

Without passing allow_dangerous_requests, the toolkit won't be able to
make requests.

**Dependencies:** any dependencies required for this change

Not much

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-09 17:14:02 +00:00
Guangdong Liu
301dc3dfd2 docs: Get rid of ZeroShotAgent and use create_react_agent instead (#20157)
- **Issue:** #20122
 -  @baskaryan, @eyurtsev.
2024-04-09 12:00:29 -05:00
Timothy
0c848a25ad community[patch]: GCSDirectoryLoader bugfix (#20005)
- **Description:** Bug fix. Removed extra line in `GCSDirectoryLoader`
to allow catching Exceptions. Now also logs the file path if Exception
is raised for easier debugging.
- **Issue:** #20198 Bug since langchain-community==0.0.31
- **Dependencies:** No change
- **Twitter handle:** timothywong731

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-09 16:57:00 +00:00
jeff kit
ac42e96e4c community[patch], langchain[minor]: Enhance Tencent Cloud VectorDB, langchain: make Tencent Cloud VectorDB self query retrieve compatible (#19651)
- make Tencent Cloud VectorDB support metadata filtering.
- implement delete function for Tencent Cloud VectorDB.
- support both Langchain Embedding model and Tencent Cloud VDB embedding
model.
- Tencent Cloud VectorDB support filter search keyword, compatible with
langchain filtering syntax.
- add Tencent Cloud VectorDB TranslationVisitor, now work with self
query retriever.
- more documentations.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-09 16:50:48 +00:00
Bagatur
1a34c65e01 community[patch]: pass through sql agent kwargs (#19962)
Fix #19961
2024-04-09 16:47:32 +00:00
Haris Ali
1b480914b4 docs: Fix the class links in openai_tools and openai_functions description in output parser documentations (#20197)
- **Description:** In this PR I fixed the links which points to the API
docs for classes in OpenAI functions and OpenAI tools section of output
parsers.
  - **Issue:** It fixed the issue #19969

Co-authored-by: Haris Ali <haris.ali@formulatrix.com>
2024-04-09 16:07:19 +00:00
Guangdong Liu
97d91ec17c community[patch]: standardize baichuan init args (#20209)
Related to https://github.com/langchain-ai/langchain/issues/20085

@baskaryan
2024-04-09 11:00:40 -05:00
Piyush Jain
cd7abc495a community[minor]: add neptune analytics graph (#20047)
Replacement for PR
[#19772](https://github.com/langchain-ai/langchain/pull/19772).

---------

Co-authored-by: Dave Bechberger <dbechbe@amazon.com>
Co-authored-by: bechbd <bechbd@users.noreply.github.com>
2024-04-09 09:20:59 -05:00
Shuqian
ad9750403b community[minor]: add bedrock anthropic callback for token usage counting (#19864)
**Description:** add a Bedrock Anthropic callback for token usage
counting, modeled on the OpenAI callback.
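A minimal usage sketch, assuming the helper is exposed as `get_bedrock_anthropic_callback` alongside the OpenAI callback (the exact name and import path are assumptions):

```python
# Names below follow the OpenAI callback convention and are assumptions.
from langchain_community.callbacks.manager import get_bedrock_anthropic_callback
from langchain_community.chat_models import BedrockChat

llm = BedrockChat(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

with get_bedrock_anthropic_callback() as cb:
    llm.invoke("Hello!")
    print(cb.total_tokens, cb.total_cost)
```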

---------

Co-authored-by: Massimiliano Pronesti <massimiliano.pronesti@gmail.com>
2024-04-09 09:18:48 -05:00
Prince Canuma
1f9f4d8742 community[minor]: Add support for MLX models (chat & llm) (#18152)
**Description:** This PR adds support for MLX models, both chat (i.e.,
instruct) and llm (i.e., pretrained) types.
**Dependencies:** mlx, mlx_lm, transformers
**Twitter handle:** @Prince_Canuma

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-09 14:17:07 +00:00
aditya thomas
6baeaf4802 docs: TogetherAI as a drop-in replacement for OpenAI (#19900)
**Description:** TogetherAI as a drop-in replacement for OpenAI
**Issue:** None
**Dependencies:** None

@baskaryan apropos #20032
2024-04-09 09:12:52 -05:00
Leonid Ganeline
2f8dd1a161 community[patch]: cross_encoders flatten namespaces (#20183)
Issue: `langchain_community.cross_encoders` didn't have namespace-flattening
code in its __init__.py file.
Changes:
- added code to flatten namespaces (used #20050 as a template)
- added a unit test for the change
- added missed `test_imports` for the `chat_loaders` and
`chat_message_histories` modules
2024-04-08 20:50:23 -04:00
Bagatur
1af7133828 docs: add vertexai to structured output (#20171) 2024-04-08 16:09:49 -05:00
kaijietti
a812839f0c community: add request_timeout and max_retries to ChatAnthropic (#19402)
This PR make `request_timeout` and `max_retries` configurable for
ChatAnthropic.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-08 21:04:17 +00:00
Richmond Alake
c769421aa4 cookbook: MongoDB Cookbook for Chat history and semantic cache (#19998)
Thank you for contributing to LangChain!

- [ ] **PR title**: "community: Add semantic caching and memory using
MongoDB"


- [ ] **PR message**: 
- **Description:** This PR introduces functionality for adding semantic
caching and chat message history using MongoDB in RAG applications. By
leveraging the MongoDBCache and MongoDBChatMessageHistory classes,
developers can now enhance their retrieval-augmented generation
applications with efficient semantic caching mechanisms and persistent
conversation histories, improving response times and consistency across
chat sessions.
    - **Issue:** N/A
- **Dependencies:** Requires `datasets`, `langchain`,
`langchain-mongodb`, `langchain-openai`, `pymongo`, and `pandas` for
implementation. MongoDB Atlas is used for database services, and the
OpenAI API for model access.
    - **Twitter handle:** @richmondalake

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-08 20:21:24 +00:00
Erick Friis
391e8f2050 pinecone[patch]: fix core min version (#20177) 2024-04-08 20:06:59 +00:00
Harry Jiang
1ee208541c langchain: fix pinecone upsert when async_req is set to False (#19793)
Issue: 
When async_req has the default value True, the pinecone client returns a
multiprocessing AsyncResult object.
When async_req is set to False, the pinecone client returns the result
directly, e.g. `[{'upserted_count': 1}]`. Calling the get() method throws an
error in this case.
2024-04-08 12:55:59 -07:00
Alex Sherstinsky
5f563e040a community: extend Predibase integration to support fine-tuned LLM adapters (#19979)
- **Description:** Langchain-Predibase integration was failing, because
it was not current with the Predibase SDK; in addition, Predibase
integration tests were instantiating the Langchain Community `Predibase`
class with one required argument (`model`) missing. This change updates
the Predibase SDK usage and fixes the integration tests.
    - **Twitter handle:** `@alexsherstinsky`


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-08 18:54:29 +00:00
Bagatur
a27d88f12a anthropic[patch]: standardize init args (#20161)
Related to #20085
2024-04-08 12:09:06 -05:00
Bagatur
3490d70238 mistralai[patch]: standardize model params (#20163)
Related to #20085
2024-04-08 11:48:38 -05:00
Bagatur
17182406f3 docs: standardize fireworks params (#20162)
Related to #20085
2024-04-08 10:57:56 -05:00
Bagatur
5ae0e687b3 docs: use standard openai params (#20160)
Part of #20085
2024-04-08 10:56:53 -05:00
david02871
e1a24d09c5 community: Add PHP language parser to document_loaders (#19850)
**Description:**
Added a PHP language parser to document_loaders
**Issue:** N/A
**Dependencies:** N/A
**Twitter handle:** N/A
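A minimal usage sketch via the generic loader's language parser (the `language="php"` option is what this PR adds):

```python
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import LanguageParser

loader = GenericLoader.from_filesystem(
    "./src",
    glob="**/*",
    suffixes=[".php"],
    parser=LanguageParser(language="php"),
)
docs = loader.load()
print(len(docs))
```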

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-08 11:30:28 -04:00
Marlene
2f03bc397e Community: Updating Azure Retriever and Docs to be Azure AI Search instead of Azure Cognitive Search (#19925)
Last year Microsoft [changed the
name](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search)
of Azure Cognitive Search to Azure AI Search. This PR updates the
Langchain Azure Retriever API and it's associated docs to reflect this
change. It may be confusing for users to see the name Cognitive here and
AI in the Microsoft documentation which is why this is needed. I've also
added a more detailed example to the Azure retriever doc page.

There are more places that need a similar update but I'm breaking it up
so the PRs are not too big 😄 Fixing my errors from the previous PR.

Twitter: @marlene_zw

Two new tests added to test backward compatibility in
`libs/community/tests/integration_tests/retrievers/test_azure_cognitive_search.py`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-08 11:12:41 -04:00
Rahul Triptahi
820b713086 community[minor]: Add support for Pebblo cloud_api_key in PebbloSafeLoader (#19855)
**Description**:
_PebbloSafeLoader_: Add support for pebblo's cloud api-key in
PebbloSafeLoader

- This Pull request enables PebbloSafeLoader to accept pebblo's cloud
api-key and send the semantic classification data to pebblo cloud.

**Documentation**: Updated 
**Unit test**: Added
**Issue**: NA
**Dependencies**: - None
**Twitter handle**: @rahul_tripathi2

Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-04-08 11:10:04 -04:00
Eugene Yurtsev
34a24d4df6 postgres[minor]: Add pgvector community as is (#20096)
This moves langchain pgvector community as is

The only modification is support for psycopg3 rather than psycopg2!
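A rough usage sketch of the migrated class; constructor parameter names below are assumptions based on the community version, and the connection string uses the psycopg3 driver:

```python
# pip install langchain-postgres "psycopg[binary]"
# Parameter names are assumptions; check the langchain-postgres README.
from langchain_openai import OpenAIEmbeddings
from langchain_postgres.vectorstores import PGVector

connection = "postgresql+psycopg://user:pass@localhost:5432/dbname"  # psycopg3

store = PGVector(
    embeddings=OpenAIEmbeddings(),
    collection_name="my_docs",
    connection=connection,
)
store.add_texts(["hello world"])
print(store.similarity_search("hello", k=1))
```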
2024-04-08 09:34:10 -04:00
Eugene Yurtsev
ba9e0d76c1 postgres[minor]: add postgres checkpoint implementation (#20025)
Adds checkpoint implementation using psycopg
2024-04-08 09:27:15 -04:00
William FH
039b7a472d [core] fix: manually specifying run_id for chat models.invoke() and .ainvoke() (#20082) 2024-04-06 16:57:32 -07:00
Chris Germann
ba602dc562 Documentation: Fixed the typo of Discord -> Telegram (#20008)
Description: Just fixed one string
Issues: None
Dependencies: None
Twitter handle: @epu9byj

Co-authored-by: gere <gere@kapo.zh.ch>
2024-04-06 20:00:03 +00:00
Erick Friis
96dc0ea49d pinecone[patch]: release 0.1.0 (#20109) 2024-04-06 18:41:28 +00:00
donbr
de496062b3 templates: migrate to langchain_anthropic package to support Claude 3 models (#19393)
- **Description:** update langchain anthropic templates to support
Claude 3 (iterative search, chain of note, summarization, and XML
response)
- **Issue:** issue # N/A. Stability issues and errors encountered when
trying to use older langchain and anthropic libraries.
- **Dependencies:**
  - langchain_anthropic version 0.1.4\
- anthropic package version in the range ">=0.17.0,<1" to support
langchain_anthropic.
- **Twitter handle:** @d_w_b7


- [ x]**Add tests and docs**: If you're adding a new integration, please
include
  1. used instructions in the README for testing

- [ x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/


If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-06 00:33:59 +00:00
Maxime Perrin
5ac0d1f67b partners[anthropic]: fix anthropic chat model message type lookup keys (#19034)
- **Description:** Fixing a message formatting issue in the ChatAnthropic
model by adding dictionary keys for `AIMessageChunk` and
`HumanMessageChunk`
  - **Issue:** #19025 
  - **Twitter handle:** @maximeperrin_

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-06 00:22:14 +00:00
Krista Pratico
d64bd32b20 templates: add rag azure search template (#18143)
- **Description:** Adds a template for performing RAG with the
AzureSearch vectorstore.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** N/A

---------

Co-authored-by: Erick Friis <erickfriis@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-06 00:20:40 +00:00
Bagatur
46f580d42d docs: anthropic tool docstring (#20091) 2024-04-05 21:50:40 +00:00
Erick Friis
28dfde2cb2 cohere: move package to external repo (#20081) 2024-04-05 14:29:15 -07:00
Jacob Lee
58a2123ca0 docs[patch]: Add missing redirects (#20076) 2024-04-05 12:54:00 -07:00
Eugene Yurtsev
520ff50adc community[patch]: Improve import callbacks to make it IDE friendly (#20050)
* declares __all__ as a list of strings (instead of dynamically
computing it)
* import type definitions when TYPE_CHECKING is true
2024-04-05 15:17:51 -04:00
Guangdong Liu
5a76087965 langchain-core[minor]: Allow passing local cache to language models (#19331)
After this PR it will be possible to pass a cache instance directly to a
language model. This is useful to allow different language models to use
different caches if needed.

- **Issue:** close #19276
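A minimal sketch of what this enables (the model class is just an example):

```python
from langchain_core.caches import InMemoryCache
from langchain_openai import ChatOpenAI

# Each model can now get its own cache instance instead of relying on the
# single global cache configured via set_llm_cache().
llm_a = ChatOpenAI(cache=InMemoryCache())
llm_b = ChatOpenAI(cache=InMemoryCache())

llm_a.invoke("hi")  # cached in llm_a's cache only
```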

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-05 11:19:54 -04:00
Eugene Yurtsev
e4fc0e7502 core[patch]: Document BaseCache abstraction in code (#20046)
Document the base cache abstraction in the cache.
2024-04-05 10:56:57 -04:00
Christophe Bornet
4d8a6a27a3 core[minor]: Implement aformat_prompt and ainvoke in BasePromptTemplate (#20035) 2024-04-05 10:36:43 -04:00
Christophe Bornet
7e5c1905b1 core[minor]: Add async aformat_document method (#20037) 2024-04-05 10:29:53 -04:00
Christophe Bornet
927793d088 Merge pull request #20038
* Implement aformat_messages for ChatMessagePromptTemplate
2024-04-05 10:25:27 -04:00
Erick Friis
ebd24bb5d6 docs: fix title cap (#20048) 2024-04-05 02:36:33 +00:00
Eugene Yurtsev
1ee8cf7b20 Docs: Update custom chat model (#19967)
* Clean up in the existing tutorial
* Add model_name to identifying params
* Add table to summarize messages
2024-04-04 22:36:03 -04:00
Erick Friis
5fc7bb01e9 docs: weaviate docs (#20042) 2024-04-04 19:01:02 -07:00
Bagatur
38fb1429fe docs: fix together model tab (#20032) 2024-04-04 15:33:43 -07:00
Jacob Lee
b69af26717 docs[patch]: Fix Model I/O quickstart (#20031)
@baskaryan
2024-04-04 15:28:58 -07:00
Usama Ahmed
94ac42c573 docs: fixing typo in argument name (#20028)
it's "mode" instead of "model", I fixed it
2024-04-04 22:28:28 +00:00
Bagatur
07eeeb84f3 docs: hide experimental anthropic (#20030) 2024-04-04 15:27:52 -07:00
Lance Martin
e76b9210dd Update example cookbook for Anthropic tool use (#20029) 2024-04-04 14:53:18 -07:00
Leonid Ganeline
3856dedff4 docs: integrations/providers update 9 (#19941)
- Added missed providers
- Added links, descriptions in related examples
- Formatted in a consistent format

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-04 21:37:48 +00:00
Bagatur
644ff46100 docs: mark anthropic tools wrapper as deprecated (#20024) 2024-04-04 21:33:55 +00:00
Leonid Ganeline
69bf6262aa docs: integrations/providers/unstructured update (#19892)
Updated a page with existing document loaders with links to examples.
Fixed formatting of one example.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-04 21:31:27 +00:00
Bagatur
1b7ed6071a anthropic[patch]: Release 0.1.6 (#20026) 2024-04-04 14:29:50 -07:00
Bagatur
6860450e48 anthropic[patch]: use anthropic 0.23 (#20022) 2024-04-04 14:23:53 -07:00
Leonid Ganeline
4c969286fe docs integrations/providers update 10 (#19970)
Fixed broken links. Formatted to get consistent forms. Added missed
imports in the example code
2024-04-04 14:22:45 -07:00
Leonid Ganeline
82f0198be2 docs: graphs update (#19675)
Issue: The `graph` code was moved into the `community` package a long
time ago. But the related documentation is still in the
[use_cases](https://python.langchain.com/docs/use_cases/graph/integrations/diffbot_graphtransformer)
section and not in the `integrations`.
Changes:
- moved the `use_cases/graph/integrations` notebooks into the
`integrations/graphs`
- renamed files and changed titles to follow the consistent format
- redirected old page URLs to new URLs in `vercel.json` and in several
other pages
- added descriptions and links when necessary
- formatted into the consistent format
2024-04-04 14:13:22 -07:00
Bagatur
be3dd62de4 anthropic[patch]: fix experimental tests (#20021) 2024-04-04 13:37:43 -07:00
Lance Martin
a6926772f0 Add cookbook for Anthropic .with_structured_output() (#20017) 2024-04-04 13:30:44 -07:00
Bagatur
86fdb79454 anthropic[patch]: bump core dep (#20019)
]
2024-04-04 13:28:23 -07:00
Bagatur
209de0a561 anthropic[minor]: tool use (#20016) 2024-04-04 13:22:48 -07:00
Leonid Ganeline
3aacd11846 community[minor]: added missed class to __all__ (#19888)
Added the missed `UnstructuredCHMLoader` class to the
document_loaders `__init__.py` `__all__`
2024-04-04 16:16:51 -04:00
Jacob Lee
7f0cb3bfba docs[patch]: Make Docusaurus and Vercel add trailing slashes when navigating by default (#20014)
Should hopefully avoid weird broken link edge cases.

Relative links now trip up the Docusaurus broken link checker, so this
PR also removes them.

Also snuck in a small addition about asyncio
2024-04-04 12:49:15 -07:00
Chris Papademetrious
a954dedb77 langchain[minor]: enhance LocalFileStore to allow directory/file permissions to be specified (#18857)
**Description:**
The `LocalFileStore` class can be used to create an on-disk
`CacheBackedEmbeddings` cache. However, the default `umask` settings
gives file/directory write permissions only to the original user. Once
the cache directory is created by the first user, other users cannot
write their own cache entries into the directory.

To make the cache usable by multiple users, this pull request updates
the `LocalFileStore` constructor to allow the permissions for newly
created directories and files to be specified. The specified permissions
override the default `umask` values.

For example, when configured as follows:

```python
file_store = LocalFileStore(temp_dir, chmod_dir=0o770, chmod_file=0o660)
```

then "user" and "group" (but not "other") have permissions to access the
store, which means:

* Anyone in our group could contribute embeddings to the cache.
* If we implement cache cleanup/eviction in the future, anyone in our
group could perform the cleanup.

The default values for the `chmod_dir` and `chmod_file` parameters is
`None`, which retains the original behavior of using the default `umask`
settings.

**Issue:**
Implements enhancement #18075.

**Testing:**
I updated the `LocalFileStore` unit tests to test the permissions.

---------

Signed-off-by: chrispy <chrispy@synopsys.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-04 16:40:16 +00:00
Tomaz Bratanic
df25829f33 community[minor]: Add metadata filtering support for neo4j vector (#20001) 2024-04-04 11:37:06 -04:00
Ben Mitchell
b52b78478f community[minor]: Implement Async OpenSearch afrom_texts & afrom_embeddings (#20009)
- **Description:** Adds async variants of afrom_texts and
afrom_embeddings into `OpenSearchVectorSearch`, which allows for
`afrom_documents` to be called.
- **Issue:** I implemented this because my use case involves an async
scraper generating documents as and when they're ready to be ingested by
Embedding/OpenSearch
- **Dependencies:** None that I'm aware

Co-authored-by: Ben Mitchell <b.mitchell@reply.com>
2024-04-04 15:36:14 +00:00
Christophe Bornet
02152d3909 [docs][minor]: Fix typo in Custom Document Loader doc (#20003) 2024-04-04 10:59:33 -04:00
Jan Nissen
31e3ecc728 core[minor]: support pydantic V2 for JSONOutputParser, allow for other sources of JSON schemas (#19716)
This PR supports using Pydantic v2 objects to generate the schema for
the JSONOutputParser (#19441). This also adds a `json_schema` parameter
to allow users to pass any JSON schema to validate with, not just
pydantic.
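A minimal sketch with a Pydantic v2 model passed as `pydantic_object` (the schema below is illustrative):

```python
from langchain_core.output_parsers import JsonOutputParser
from pydantic import BaseModel, Field  # Pydantic v2


class Joke(BaseModel):
    setup: str = Field(description="the setup of the joke")
    punchline: str = Field(description="the punchline")


parser = JsonOutputParser(pydantic_object=Joke)
print(parser.get_format_instructions())
print(parser.parse('{"setup": "Why?", "punchline": "Because."}'))
```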
2024-04-04 10:57:47 -04:00
Christophe Bornet
f97de4e275 core[minor]: Add aformat to FewShotPromptTemplate (#19652) 2024-04-04 10:24:55 -04:00
Utkarsha Gupte
b27f81c51c core[patch]: mypy ignore fixes #17048 (#19931)
core/langchain_core/_api[Patch]: mypy ignore fixes #17048
Related to #17048

Applied mypy fixes to below two files:
libs/core/langchain_core/_api/deprecation.py
libs/core/langchain_core/_api/beta_decorator.py

Summary of Fixes:
**Issue 1**
class _deprecated_property(type(obj)): # type: ignore
error: Unsupported dynamic base class "type"  [misc]
Fix: 
1. Added an __init__ method to _deprecated_property to initialize the
fget, fset, fdel, and __doc__ attributes.
2. In the __get__, __set__, and __delete__ methods, we now use the
self.fget, self.fset, and self.fdel attributes to call the original
methods after emitting the warning.

3. The finalize function now creates an instance of _deprecated_property
with the fget, fset, fdel, and doc attributes from the original obj
property.

**Issue 2**

 def finalize(  # type: ignore
                wrapper: Callable[..., Any], new_doc: str
            ) -> T:

error: All conditional function variants must have identical signatures

Fix: Ensured that both definitions of the finalize function have the same
signature

Twitter Handle -
https://x.com/gupteutkarsha?s=11&t=uwHe4C3PPpGRvoO5Qpm1aA
2024-04-04 10:22:38 -04:00
harry-cohere
e103492eb8 cohere: Add citations to agent, flexibility to tool parsing, fix SDK issue (#19965)
**Description:** Citations are the main addition in this PR. We now emit
them from the multihop agent! Additionally the agent is now more
flexible with observations (`Any` is now accepted), and the Cohere SDK
version is bumped to fix an issue with the most recent version of
pydantic v1 (1.10.15)
2024-04-04 07:02:30 -07:00
Jacob Lee
605c3f23e1 docs: reorg and visual refresh (#19765)
- put use cases in main sidebar
- move modules to own sidebar, rename components
- cleanup lcel section
- cleanup guides
- update font, cell highlighting

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-04 00:58:36 -07:00
Erick Friis
51bdfe04e9 groq: handle streaming tool call case (#19978) 2024-04-03 15:22:59 -07:00
Erick Friis
5acb564d6f groq: fix core version (#19976) 2024-04-03 14:49:57 -07:00
Erick Friis
9e60159043 groq: release 0.1.0 (#19975) 2024-04-03 14:41:48 -07:00
Graden Rea
88cf8a2905 groq: Add tool calling support (#19971)
**Description:** Add with_structured_output to groq chat models
**Issue:** 
**Dependencies:** N/A
**Twitter handle:** N/A
2024-04-03 14:40:20 -07:00
Eugene Yurtsev
6f20f140ca cli[minor]: Add disable sockets in unit tests (#19877) 2024-04-03 17:17:50 -04:00
Eugene Yurtsev
ea276d6547 docs: Custom Document Loaders (#19935)
Add information that shows how to create custom document loaders
2024-04-03 15:34:01 -04:00
Erick Friis
83f62fdacf core: fix try_load_from_hub for older langchain versions load_chain (#19964) 2024-04-03 17:00:25 +00:00
Tomaz Bratanic
09a0ecd000 langchain[minor]: Tests update metadata filtering examples of documents (#19963)
Removing metadata properties that are dicts, as some databases don't
support that, and those properties aren't used in tests anyway.
2024-04-03 12:44:14 -04:00
happy-go-lucky
c6432abdbe community[patch]: Implement delete method and all async methods in opensearch_vector_search (#17321)
- **Description:** In order to use index and aindex in
libs/langchain/langchain/indexes/_api.py, I implemented the delete method
and all async methods in opensearch_vector_search
- **Dependencies:** No changes
2024-04-03 09:40:49 -07:00
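A rough sketch of how the new methods slot into the standard `VectorStore` interface; the connection settings and embeddings below are illustrative assumptions:
```python
import asyncio

from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import OpenSearchVectorSearch

store = OpenSearchVectorSearch(
    opensearch_url="http://localhost:9200",  # illustrative endpoint
    index_name="demo-index",
    embedding_function=FakeEmbeddings(size=16),
)

ids = store.add_texts(["hello", "world"])
store.delete(ids=ids)  # newly implemented sync delete


async def main() -> None:
    more_ids = await store.aadd_texts(["hello again"])
    await store.adelete(ids=more_ids)  # newly implemented async delete


asyncio.run(main())
```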
Cheng, Penghui
cc407e8a1b community[minor]: weight only quantization with intel-extension-for-transformers. (#14504)
Support weight only quantization with intel-extension-for-transformers.
[Intel® Extension for
Transformers](https://github.com/intel/intel-extension-for-transformers)
is an innovative toolkit to accelerate Transformer-based models on Intel
platforms, particularly effective on 4th Gen Intel Xeon Scalable processors
(codenamed [Sapphire
Rapids](https://www.intel.com/content/www/us/en/products/docs/processors/xeon-accelerated/4th-gen-xeon-scalable-processors.html)).
The toolkit provides the following key
features:

* Seamless user experience of model compressions on Transformer-based
models by extending [Hugging Face
transformers](https://github.com/huggingface/transformers) APIs and
leveraging [Intel® Neural
Compressor](https://github.com/intel/neural-compressor)
* Advanced software optimizations and unique compression-aware runtime.
* Optimized Transformer-based model packages.
*
[NeuralChat](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/neural_chat),
a customizable chatbot framework to create your own chatbot within
minutes by leveraging a rich set of plugins and SOTA optimizations.
*
[Inference](https://github.com/intel/intel-extension-for-transformers/blob/main/intel_extension_for_transformers/llm/runtime/graph)
of Large Language Model (LLM) in pure C/C++ with weight-only
quantization kernels.
This PR is an integration of weight only quantization feature with
intel-extension-for-transformers.

Unit test is in
lib/langchain/tests/integration_tests/llm/test_weight_only_quantization.py
The notebook is in
docs/docs/integrations/llms/weight_only_quantization.ipynb.
The document is in
docs/docs/integrations/providers/weight_only_quantization.mdx.

---------

Signed-off-by: Cheng, Penghui <penghui.cheng@intel.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-03 16:21:34 +00:00
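Going by the notebook and docs referenced above, usage presumably looks roughly like the sketch below; the class name `WeightOnlyQuantPipeline`, the `from_model_id` signature, and the model id are assumptions and should be checked against the shipped integration:
```python
# Assumed import path and API, modeled on HuggingFacePipeline; unverified.
from langchain_community.llms.weight_only_quantization import WeightOnlyQuantPipeline

llm = WeightOnlyQuantPipeline.from_model_id(
    model_id="google/flan-t5-large",   # illustrative model
    task="text2text-generation",
    load_in_4bit=True,                 # weight-only 4-bit quantization
    pipeline_kwargs={"max_new_tokens": 64},
)

print(llm.invoke("What is weight-only quantization?"))
```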
Eugene Yurtsev
d6d843ec24 langchain-postgres: Initial package with postgres chat history implementation (#19884)
- [x] Add in code examples for the chat message history class
- [ ] ~Add docs with notebook examples~ (can this be done later?)
- [x] Update README.md
2024-04-03 10:57:21 -04:00
Eugene Yurtsev
d293431e10 core[minor]: Add aload to document loader (#19936)
Add aload to document loader
2024-04-03 10:46:47 -04:00
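A short sketch of the new async entry point; the loader and file path are placeholders:
```python
import asyncio

from langchain_community.document_loaders import TextLoader


async def main() -> None:
    loader = TextLoader("example.txt")  # any BaseLoader subclass
    docs = await loader.aload()         # new async counterpart of load()
    print(len(docs))


asyncio.run(main())
```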
Ángel Igareta
31a641a155 core: fix return of draw_mermaid_png and change to not save image by default (#19950)
- **Description:** Improvement for #19599: fixing missing return of
graph.draw_mermaid_png and improving it to make saving the rendered
image optional

Co-authored-by: Angel Igareta <angel.igareta@klarna.com>
2024-04-03 06:20:35 -07:00
Bagatur
4328c54aab core[patch]: Release 0.1.39 (#19940) 2024-04-03 00:25:56 +00:00
Nuno Campos
f4568fe0c6 core: BaseChatModel modify chat message before passing to run_manager (#19939)
2024-04-02 16:40:27 -07:00
aditya thomas
73ebe78249 docs: update cohere documentation (#19700)
**Description:** Update of Cohere documentation (main provider page)
**Issue:** After addition of the Cohere partner package, the
documentation was out of date
**Dependencies:** None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-02 18:16:48 -04:00
Leonid Kuligin
eb0521064e deprecating integrations moved to langchain_google_community (#19841)
Thank you for contributing to LangChain!

- [ ] **PR title**: "community: deprecating integrations moved to
langchain_google_community"

- [ ] **PR message**: deprecating integrations moved to
langchain_google_community

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-02 17:06:07 -04:00
Erick Friis
f0d5b59962 core[patch]: remove requests (#19891)
Removes required usage of `requests` from `langchain-core`, all of which
has been deprecated.

- removes Tracer V1 implementations
- removes old `try_load_from_hub` github-based hub implementations

Removal done in a way where imports will still succeed, and usage will
fail with a `RuntimeError`.
2024-04-02 20:28:10 +00:00
Erick Friis
d5a2ff58e9 pinecone[patch]: source tag (#19739) 2024-04-02 19:53:59 +00:00
Wang Guan
8638029a37 docs: mention caveats with CacheBackedEmbeddings.embed_query (#19926)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**:
- **Description:** mention not-caching methods in CacheBackedEmbeddings
  - **Issue:** n/a I almost created one until I read the code 
  - **Dependencies:** n/a
  - **Twitter handle:** `tarsylia`


2024-04-02 19:19:29 +00:00
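The caveat being documented is that `CacheBackedEmbeddings` caches `embed_documents` but not `embed_query` by default. A minimal sketch, with an illustrative cache path and embedding model:
```python
from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain_openai import OpenAIEmbeddings

underlying = OpenAIEmbeddings()
store = LocalFileStore("./embedding_cache/")  # illustrative cache location

cached = CacheBackedEmbeddings.from_bytes_store(
    underlying, store, namespace=underlying.model
)

cached.embed_documents(["this result is cached on repeat calls"])
cached.embed_query("this call hits the underlying model every time")  # the caveat
```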
harry-cohere
beab9adffb cohere: Improve integration test stability, fix documents bug (#19929)
**Description**: Improves the stability of all Cohere partner package
integration tests. Fixes a bug with document parsing (both dicts and
Documents are handled).
2024-04-02 11:22:30 -07:00
harry-cohere
37fc1c525a cohere: simplify integration test (#19928)
**Description**: This PR simplifies an integration test within the
Cohere partner package:
 * It no longer relies on exact model answers
 * It no longer relies on a third party tool
2024-04-02 10:57:25 -07:00
billytrend-cohere
de6c0cf248 cohere, docs: update imports and installs to langchain_cohere (#19918)
cohere: update imports and installs to langchain_cohere

---------

Co-authored-by: Harry M <127103098+harry-cohere@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-02 09:47:58 -07:00
Erick Friis
146d1a6347 cohere[patch]: release 0.1.0rc2 (#19924) 2024-04-02 16:24:23 +00:00
harry-cohere
e2b83c87b1 cohere[patch]: Add multihop tool agent (#19919)
**Description**: Adds an agent that uses Cohere with multiple hops and
multiple tools.

This PR is a continuation of
https://github.com/langchain-ai/langchain/pull/19650 - which was
previously approved. Conceptually nothing has changed, but this PR has
extra fixes, documentation and testing.

---------

Co-authored-by: BeatrixCohere <128378696+BeatrixCohere@users.noreply.github.com>
Co-authored-by: Erick Friis <erickfriis@gmail.com>
2024-04-02 09:18:50 -07:00
Max Jakob
22dbcc9441 langchain[patch]: fix ElasticsearchStore reference for self query (#19907)
Initializing self query with an ElasticsearchStore from the partners
packages failed previously, see
https://github.com/langchain-ai/langchain/discussions/18976.
2024-04-02 08:39:12 -07:00
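A minimal sketch of the self-query setup with the partner-package `ElasticsearchStore` that this fix unblocks; the connection details, metadata schema, and LLM are illustrative assumptions (self query also needs the `lark` package installed):
```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain_elasticsearch import ElasticsearchStore
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

vectorstore = ElasticsearchStore(
    es_url="http://localhost:9200",  # illustrative endpoint
    index_name="movies",
    embedding=OpenAIEmbeddings(),
)

metadata_field_info = [
    AttributeInfo(name="year", description="Year the movie was released", type="integer"),
]

retriever = SelfQueryRetriever.from_llm(
    llm=ChatOpenAI(temperature=0),
    vectorstore=vectorstore,
    document_contents="Brief summary of a movie",
    metadata_field_info=metadata_field_info,
)

docs = retriever.invoke("movies released after 2000")
```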
Bagatur
3218463f6a core[patch]: Release 0.1.38 (#19895) 2024-04-01 22:47:46 -07:00
Mohammad Mohtashim
9ae2df36fc Core[major]: Base Tracer to propagate raw output from tool for on_tool_end (#18932)
This PR completes work for PR #18798 to expose raw tool output in
on_tool_end.

Affected APIs:
* astream_log
* astream_events
* callbacks sent to langsmith via langsmith-sdk
* Any other code that relies on BaseTracer!

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-02 01:24:46 +00:00
Nuno Campos
2ae6dcdf01 core: Assign missing message ids in BaseChatModel (#19863)
- This ensures ids are stable across streamed chunks
- Multiple messages in batch call get separate ids
- Also fix ids being dropped when combining message chunks

2024-04-02 01:18:36 +00:00
Peter Vandenabeele
e830a4e731 community[patch]: Add remove_comments option (default True): do not extract html comments (#13259)
- **Description:** add `remove_comments` option (default: True): do not
extract html _comments_,
  - **Issue:** None,
  - **Dependencies:** None,
  - **Tag maintainer:** @nfcampos ,
  - **Twitter handle:** peter_v

I ran `make format`, `make lint` and `make test`.

Discussion: In my use case, I prefer not to have the comments in the
extracted text:
* e.g. from a Google tag that is added in the html as comment
* e.g. content that the authors have temporarily hidden to make it non
visible to the regular reader

Removing the comments makes the extracted text closer to the text the
reader is intended to see.


**Choice to make:** do we prefer to make the default for this
`remove_comments` option to be True or False?
I have changed it to True in a second commit, since that is how I would
prefer to use it by default: the cleaned text (without technical Google
tags etc.) is closer to the actually visible and intended content.
I am not sure what is best aligned with the conventions of langchain in
general ...


INITIAL VERSION (new version above):
~**Choice to make:** do we prefer to make the default for this
`ignore_comments` option to be True or False?
I have set it to False now to be backwards compatible. On the other
hand, I would use it mostly with True.
I am not sure what is best aligned with the conventions of langchain in
general ...~

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-02 00:19:12 +00:00
Jamsheed Mistri
4f70bc119d community[minor]: add Layerup Security integration (#19787)
**Description:** adds integration with [Layerup
Security](https://uselayerup.com). Docs can be found
[here](https://docs.uselayerup.com). Integrates directly with our Python
SDK.

**Dependencies:**
[LayerupSecurity](https://pypi.org/project/LayerupSecurity/)

**Note**: all methods for our product require a paid API key, so I only
included 1 test which checks for an invalid API key response. I have
tested extensively locally.

**Twitter handle**: [@layerup_](https://twitter.com/layerup_)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-01 23:49:00 +00:00
Brace Sproul
22f78c37c8 docs[patch]: Hide google from function calling docs (#19887) 2024-04-01 14:26:31 -07:00
Massimiliano Pronesti
06dac394a6 cohere[patch]: support request timeout in BaseCohere (#19641)
As in #19346, this PR exposes `request_timeout` in `BaseCohere`, while
`max_retries` is no longer a parameter of the underlying client
(`cohere.Client`) and is already configured in
`langchain_cohere.llms.Cohere`.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-01 14:16:32 -07:00
Mayank Solanki
d5c412b0a9 core: Add docs for RunnableConfigurableFields (#19849)
- [x] **docs**: core: Add docs for `RunnableConfigurableFields`

- **Description:** Added incode docs for `RunnableConfigurableFields`
with example
    - **Issue:** #18803 
    - **Dependencies:** NA
    - **Twitter handle:** NA

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-01 20:40:10 +00:00
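A minimal sketch of the `configurable_fields` pattern the new docs cover; the model and field id are illustrative:
```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0).configurable_fields(
    temperature=ConfigurableField(
        id="llm_temperature",
        name="LLM Temperature",
        description="Sampling temperature for the model",
    )
)

model.invoke("pick a random number")  # runs with temperature=0
model.with_config(configurable={"llm_temperature": 0.9}).invoke("pick a random number")
```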
Mahdi Setayesh
c28efb878c text-splitters[minor]: Adding a new section aware splitter to langchain (#16526)
- **Description:** the layout of HTML pages can vary depending on the CSS
framework (e.g. Bootstrap) or the page's styles, so we need a splitter
that normalizes the HTML tags into a consistent layout and then splits
the content into sections based on a provided list of tags. We use the
BS4 library along with an XSLT transform to split the HTML content using
a section-aware approach.
  - **Dependencies:** No new dependencies
  - **Twitter handle:** @m_setayesh

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-01 20:32:26 +00:00
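Assuming the splitter landed as `HTMLSectionSplitter` in `langchain_text_splitters` with the usual `headers_to_split_on` argument (both are assumptions based on the description above), usage would look roughly like:
```python
from langchain_text_splitters import HTMLSectionSplitter

html = """
<html><body>
  <h1>Intro</h1><p>Some introduction text.</p>
  <h2>Details</h2><p>Some detailed text.</p>
</body></html>
"""

headers_to_split_on = [("h1", "Header 1"), ("h2", "Header 2")]
splitter = HTMLSectionSplitter(headers_to_split_on=headers_to_split_on)

for doc in splitter.split_text(html):
    print(doc.metadata, doc.page_content[:40])
```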
Eugene Yurtsev
356a139b0a cli[minor]: Add __version__ to integration package template (#19876)
Packages should export __version__
2024-04-01 15:34:38 -04:00
northern-64bit
dfbc10c943 docs: Fix link in Unstructured notebook (#19851)
**Description:** This PR fixes the link to the Unstructured
documentation in the docs.
2024-04-01 15:26:48 -04:00
Brace Sproul
7538c4de19 docs[patch]: Revert quarto update (#19880) 2024-04-01 12:11:27 -07:00
Anıl Berk Altuner
4384fa8e49 community[minor]: Add Dria retriever (#17098)
[Dria](https://dria.co/) is a hub of public RAG models for developers to
both contribute and utilize a shared embedding lake. This PR adds a
retriever that can retrieve documents from Dria.
2024-04-01 12:04:19 -07:00
Erick Friis
0b0a55192f robocorp[patch]: fix core min version (#19879) 2024-04-01 11:34:14 -07:00
Mikko Korpela
3f06cef60c robocorp[patch]: Fix nested arguments descriptors and tool names (#19707)
Thank you for contributing to LangChain!

- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**:
- **Description:** Fix argument translation from OpenAPI spec to OpenAI
function call (and similar)
- **Issue:** OpenGPTs failures with calling Action Server based actions.
    - **Dependencies:** None
    - **Twitter handle:** mikkorpela


- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
~2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.~


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.
2024-04-01 11:29:39 -07:00
Ethan Yang
48f84e253e community[minor]: Add OpenVINO rerank model support (#19791)
@eaidova @AlexKoff88 Could you help to review, thanks

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-01 18:27:23 +00:00
Erick Friis
4fbdc2a7ee openai[patch]: remove openai chunk size validation (#19878) 2024-04-01 18:26:06 +00:00
Chenhui Zhang
a1f3e9f537 community[minor]: Update ChatZhipuAI to support GLM-4 model (#16695)
Description: Update `ChatZhipuAI` to support the latest `glm-4` model.
Issue: N/A
Dependencies: httpx, httpx-sse, PyJWT

The previous `ChatZhipuAI` implementation requires the `zhipuai`
package, and cannot call the latest GLM model. This is because
- The old version `zhipuai==1.*` doesn't support the latest model.
- `zhipuai==2.*` requires `pydantic V2`, which is incompatible with
'langchain-community'.

This re-implementation invokes the GLM model by sending HTTP requests to
[open.bigmodel.cn](https://open.bigmodel.cn/dev/api) via the `httpx`
package, and uses the `httpx-sse` package to handle stream events.

---------

Co-authored-by: zR <2448370773@qq.com>
2024-04-01 18:11:21 +00:00
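A rough sketch of calling the re-implemented class with `glm-4`; the constructor parameter names (`model`, `api_key`) and the environment variable are assumptions based on the description:
```python
import os

from langchain_community.chat_models import ChatZhipuAI

# Parameter names assumed; check the shipped class for the exact signature.
chat = ChatZhipuAI(model="glm-4", api_key=os.environ["ZHIPUAI_API_KEY"])

print(chat.invoke("Hello, what can GLM-4 do?").content)

# Streaming is served over SSE via httpx-sse under the hood.
for chunk in chat.stream("Tell me a short story"):
    print(chunk.content, end="", flush=True)
```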
Bagatur
d25b5b6f25 community[patch]: Release 0.0.31 (#19873) 2024-04-01 10:50:22 -07:00
Erick Friis
e3ed6a7c28 ai21[patch]: fix core dep (#19874) 2024-04-01 10:48:16 -07:00
Nuno Campos
aa5797d908 openai[patch]: Partially Revert Update openai chat model to new base class interface (#19871)
Partially Reverts langchain-ai/langchain#19729

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-01 10:31:06 -07:00
Erick Friis
be92cf57ca openai[patch]: fix azure embedding length check (#19870) 2024-04-01 10:26:15 -07:00
Bagatur
d62e84c4f5 community[patch]: Revert " Fix the bug that Chroma does not specify `e… (#19866)
…mbedding_function` (#19277)"

This reverts commit 7042934b5f.

Fixes #19848
2024-04-01 10:10:44 -07:00
Jacob Lee
f06229bbf1 👥 Update LangChain people data (#19858)
👥 Update LangChain people data

Co-authored-by: github-actions <github-actions@github.com>
2024-04-01 09:57:31 -07:00
Erick Friis
7376e4dbe9 ai21[patch]: release 0.1.3 (#19867) 2024-04-01 09:56:23 -07:00
Ángel Igareta
c2ccf22dfd core: generate mermaid syntax and render visual graph (#19599)
- **Description:** Add functionality to generate Mermaid syntax and
render flowcharts from graph data. This includes support for custom node
colors and edge curve styles, as well as the ability to export the
generated graphs to PNG images using either the Mermaid.INK API or
Pyppeteer for local rendering.
- **Dependencies:** Optional dependencies are `pyppeteer` if rendering
wants to be done using Pypeteer and Javascript code.

---------

Co-authored-by: Angel Igareta <angel.igareta@klarna.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-01 08:14:46 -07:00
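A minimal sketch of the new helpers on a runnable's graph; the output path is illustrative, and PNG rendering uses the Mermaid.INK API by default (or `pyppeteer` for local rendering, as noted above):
```python
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
graph = chain.get_graph()

print(graph.draw_mermaid())           # Mermaid flowchart syntax as a string

png_bytes = graph.draw_mermaid_png()  # rendered via Mermaid.INK by default
with open("chain.png", "wb") as f:    # illustrative output path
    f.write(png_bytes)
```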
Ikko Eltociear Ashimine
8711a05a51 Update cross_encoder_reranker.ipynb (#19846)
HuggingFace -> Hugging Face
2024-04-01 10:49:54 -04:00
Vardhaman
039f314f20 docs: remove unnecessary args from the pip install (#19823)
**Description:** The instructions for installing the pip packages for the
MediaWiki Dump Document loader included an extra `U` argument, which
caused the installation to fail. Removing the argument fixes the install
command.

**Issue:** #19820 
**Dependencies:** No dependency change required
**Twitter handle:** [@vardhaman722](https://twitter.com/vardhaman722)
2024-04-01 10:47:26 -04:00
Bagatur
003c98e5b4 experimental[patch]: Release 0.0.56 (#19840) 2024-03-31 22:00:59 -07:00
Bagatur
c4eb841c37 langchain[patch]: Release 0.1.14 (#19839) 2024-03-31 21:44:01 -07:00
Bagatur
0242bce38c community[patch]: Release 0.0.30 (#19838) 2024-03-31 21:26:30 -07:00
Bagatur
08c10bd66a core[patch]: Release 0.1.37 (#19831) 2024-03-31 14:50:39 -07:00
Giannis
8cf1d75d08 cohere[patch]: Fix retriever (#19771)
* Replace `source_documents` with `documents`
* Pass `documents` as a named arg vs keyword
* Make `parsed_docs` more robust
* Fix edge case of doc page_content being `None`
2024-03-31 14:47:03 -07:00
Guangdong Liu
b6ebddbacc langchain[patch]: Upgrade openai's sdk and solve some interface adaptation problems. #19548 (#19785)
- #19548
- @baskaryan @eyurtsev PTAL

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-31 21:35:38 +00:00
Yash Mathur
c42ec58578 together[minor]: Update endpoint to non deprecated version (#19649)
- **Updating Together.ai Endpoint**: "langchain_together: Updated
Deprecated endpoint for partner package"

- Description: The Together inference API is deprecated, so it was
replaced with the completions endpoint and the corresponding changes were made.
- Twitter handle: @dev_yashmathur

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-31 21:21:46 +00:00
hsuyuming
5ab6b39098 community[patch]: add attribution_token within GoogleVertexAISearchRetriever (#18520)
- **Description:** Add attribution_token within
GoogleVertexAISearchRetriever so users can provide this information to
the Google support or product team during a debug session.
    
Reference:
https://cloud.google.com/generative-ai-app-builder/docs/view-analytics#user-events

Attribution tokens. Attribution tokens are unique IDs generated by
Vertex AI Search and returned with each search request. Make sure to
include that attribution token as UserEvent.attributionToken with any
user events resulting from a search. This is needed to identify if a
search is served by the API. Only user events with a Google-generated
attribution token are used to compute metrics.
    
    - **Issue:** No
    - **Dependencies:** No
    - **Twitter handle:** abehsu1992626
---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-31 13:54:56 -07:00
Kenneth Choe
f98d7f7494 langchain[minor], community[minor]: add CrossEncoderReranker with HuggingFaceCrossEncoder and SagemakerEndpointCrossEncoder (#13687)
- **Description:** Support reranking based on cross encoder models
available from HuggingFace.
      - Added `CrossEncoder` schema
- Implemented `HuggingFaceCrossEncoder` and
`SagemakerEndpointCrossEncoder`
- Implemented `CrossEncoderReranker` that performs similar functionality
to `CohereRerank`
- Added `cross-encoder-reranker.ipynb` to demonstrate how to use it.
Please let me know if anything else needs to be done to make it visible
on the table-of-contents navigation bar on the left, or on the card list
on [retrievers documentation
page](https://python.langchain.com/docs/integrations/retrievers).
  - **Issue:** N/A
  - **Dependencies:** None other than the existing ones.

---------

Co-authored-by: Kenny Choe <kchoe@amazon.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-31 20:51:31 +00:00
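A minimal sketch combining the new pieces with a contextual compression retriever; the vector store, embedding model, cross-encoder model name, and `top_n` are illustrative assumptions:
```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CrossEncoderReranker
from langchain_community.cross_encoders import HuggingFaceCrossEncoder
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

vectorstore = FAISS.from_texts(
    ["LangChain is a framework", "Cross encoders score query/document pairs"],
    HuggingFaceEmbeddings(),
)
base_retriever = vectorstore.as_retriever(search_kwargs={"k": 10})

model = HuggingFaceCrossEncoder(model_name="BAAI/bge-reranker-base")  # illustrative
compressor = CrossEncoderReranker(model=model, top_n=3)

retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=base_retriever
)
docs = retriever.invoke("What does a cross encoder do?")
```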
cxumol
3f7da03dd8 docs: fix a dead link (#19814)
**Description**

Google Colab returned 404 when trying to click an "Open In Colab" button
from document. This PR corrected the link.
2024-03-31 10:28:51 -04:00
aditya thomas
b8271bbc4a docs: (minor) updates to voyage ai documentation (#19819)
**Description:** Updates to Voyage AI documentation
**Issue:** Not Applicable
**Dependencies:** None
2024-03-31 10:27:19 -04:00
Tomaz Bratanic
ed49cca191 templates: Update neo4j templates (#19789) 2024-03-30 14:40:05 +00:00
aditya thomas
765d6762bc docs[minor]: include tab info for togetherai (#19796)
**Description:** Included information for the TogetherAI tab
**Issue:** The tab for TogetherAI information was not correct
**Dependencies:** None
2024-03-30 09:23:45 -04:00
LunarECL
b7d180a70d experimental[minor]: Create Closed Captioning Chain for .mp4 videos (#14059)
Description: Video imagery to text (Closed Captioning)
This pull request introduces the VideoCaptioningChain, a tool for
automated video captioning. It processes audio and video to generate
subtitles and closed captions, merging them into a single SRT output.

Issue: https://github.com/langchain-ai/langchain/issues/11770
Dependencies: opencv-python, ffmpeg-python, assemblyai, transformers,
pillow, torch, openai
Tag maintainer:
@baskaryan
@hwchase17


Hello!

We are a group of students from the University of Toronto
(@LunarECL, @TomSadan, @nicoledroi1, @A2113S) that want to make a
contribution to the LangChain community! We have run make format, make
lint and make test locally before submitting the PR. To our knowledge,
our changes do not introduce any new errors.

Thank you for taking the time to review our PR!

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-30 01:57:53 +00:00
Harrison Chase
56525f2ac1 dont mutate metadata/tags (#19742) 2024-03-29 17:55:27 -07:00
Kamal Zhang
368e35c3b1 community[patch]: introduce convert_to_secret() to bananadev llm (#14283)
- **Description:** Per #12165, this PR adds the function
convert_to_secret_str() to BananaLLM during environment variable validation.
- **Issue:** #12165
- **Tag maintainer:** @eyurtsev
- **Twitter handle:** @treewatcha75751

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-30 00:52:25 +00:00
DrKroll
c4da8d0813 langchain[patch]: load ReadFileTool (#14301)
---------

Co-authored-by: Dr. Simon Kroll <krolls@fida.de>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-30 00:46:24 +00:00
anshaneel
0884e5de7f community[minor]: Add Alpha Vantage API Tool (#14332)
### Description
This implementation adds functionality from the AlphaVantage API,
renowned for its comprehensive financial data. The class encapsulates
various methods, each dedicated to fetching specific types of financial
information from the API.

### Implemented Functions

- **`search_symbols`**: 
- Searches the AlphaVantage API for financial symbols using the provided
keywords.

- **`_get_market_news_sentiment`**: 
- Retrieves market news sentiment for a specified stock symbol from the
AlphaVantage API.

- **`_get_time_series_daily`**: 
- Fetches daily time series data for a specific symbol from the
AlphaVantage API.

- **`_get_quote_endpoint`**: 
- Obtains the latest price and volume information for a given symbol
from the AlphaVantage API.

- **`_get_time_series_weekly`**: 
- Gathers weekly time series data for a particular symbol from the
AlphaVantage API.

- **`_get_top_gainers_losers`**: 
- Provides details on top gainers, losers, and most actively traded
tickers in the US market from the AlphaVantage API.

  ### Issue: 
  - #11994 
  
### Dependencies: 
  - 'requests' library for HTTP requests. (import requests)
  - 'pytest' library for testing. (import pytest)

---------

Co-authored-by: Adam Badar <94140103+adam-badar@users.noreply.github.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-30 00:44:01 +00:00
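Assuming the functions above were added to the existing `AlphaVantageAPIWrapper` utility (wrapper name and env var are assumptions), a minimal sketch:
```python
from langchain_community.utilities.alpha_vantage import AlphaVantageAPIWrapper

# Assumes ALPHAVANTAGE_API_KEY is set in the environment.
alpha = AlphaVantageAPIWrapper()

print(alpha.search_symbols("apple"))       # symbol search, per the list above
print(alpha._get_quote_endpoint("AAPL"))   # latest price and volume
```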
Alex Sherstinsky
a9bc212bf2 community[minor]: fix failing Predibase integration (#19776)
- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** Langchain-Predibase integration was failing, because
it was not current with the Predibase SDK; in addition, Predibase
integration tests were instantiating the Langchain Community `Predibase`
class with one required argument (`model`) missing. This change updates
the Predibase SDK usage and fixes the integration tests.
    - **Twitter handle:** `@alexsherstinsky`


---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-30 00:38:13 +00:00
ethynic
e9caa22d47 community[patch]: Update minimax.py (#14384)
The MiniMaxChat class's _generate method should return a ChatResult
object, not a str.

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 23:57:06 +00:00
Ahmed Moubtahij
f5d4ce840f langchain[patch]: Simplify ensemble retriever (#14427)
- **Description:** code simplification to improve readability and remove
unnecessary memory allocations.
  - **Tag maintainer**: @baskaryan, @eyurtsev, @hwchase17.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 16:49:49 -07:00
Snehil Kumar
b36f4147b0 docs: Google Drive Loader always set the env var (#14791)
- **Description:** Code written by following the official documentation
of [Google Drive
Loader](https://python.langchain.com/docs/integrations/document_loaders/google_drive)
gives errors. I have opened an issue regarding this, see #14725. This is
a pull request modifying the documentation to use an approach that
makes the code work. Basically, the change is that we need to always set
the GOOGLE_APPLICATION_CREDENTIALS env var to an empty string, rather
than only in the case of a RefreshError. Also rewrote 2 paragraphs to make
the instructions clearer.
- **Issue:** See this related [issue #
14725](https://github.com/langchain-ai/langchain/issues/14725)
  - **Dependencies:** NA
  - **Tag maintainer:** @baskaryan
  - **Twitter handle:** NA

Co-authored-by: Snehil <snehil@example.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-29 23:19:37 +00:00
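The documented workaround in a nutshell; the folder id and credentials path are placeholders:
```python
import os

from langchain_community.document_loaders import GoogleDriveLoader

# Per the updated docs: always set this to an empty string, not only after a
# RefreshError, so that the service-account credentials file is picked up.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = ""

loader = GoogleDriveLoader(
    folder_id="<your-folder-id>",          # placeholder
    credentials_path="credentials.json",   # placeholder
    recursive=False,
)
docs = loader.load()
```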
M.Abdulrahman Alnaseer
ba54f1577f community[minor]: add support for llmsherpa (#19741)
Thank you for contributing to LangChain!

- [x] **PR title**: "community: added support for llmsherpa library"

- [x] **Add tests and docs**: 
1. Integration test:
'docs/docs/integrations/document_loaders/test_llmsherpa.py'.
2. an example notebook:
`docs/docs/integrations/document_loaders/llmsherpa.ipynb`.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 16:04:57 -07:00
Naveenkhasyap
a99bd098ac docs: fix for #16702 and #16703 (#16705)
- **Description:** Quickstart Documentation updates for missing
dependency installation steps.
- **Issue:** the issue # it prompts users to install required
dependency.
  - **Dependencies:** no,
  - **Twitter handle:** @naveenkashyap_

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 15:57:51 -07:00
Brace Sproul
6d93a03bef docs[patch]: Fix or remove broken mdx links (#19777)
this pr also drops the community added action for checking broken links
in mdx. It does not work well for our use case, throwing errors for
local paths, plus the rest of the errors our in house solution had.
2024-03-29 15:25:08 -07:00
Bagatur
2f5606a318 mistralai[patch]: correct integration_test (#19774) 2024-03-29 21:47:35 +00:00
Pierre Véron
ace7b66261 mistralai[patch]: add missing _combine_llm_outputs implementation in ChatMistralAI (#18603)
# Description
Implementing `_combine_llm_outputs` to `ChatMistralAI` to override the
default implementation in `BaseChatModel` returning `{}`. The
implementation is inspired by the one in `ChatOpenAI` from package
`langchain-openai`.
# Issue
None
# Dependencies
None
# Twitter handle
None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 14:43:20 -07:00
lvliang-intel
0175906437 templates: add RAG template for Intel Xeon Scalable Processors (#18424)
**Description:**
This template utilizes Chroma and TGI (Text Generation Inference) to
execute RAG on the Intel Xeon Scalable Processors. It serves as a
demonstration for users, illustrating the deployment of the RAG service
on the Intel Xeon Scalable Processors and showcasing the resulting
performance enhancements.

**Issue:**
None

**Dependencies:**
The template contains the poetry project requirements to run this
template.
CPU TGI batching is WIP.

**Twitter handle:**
None

---------

Signed-off-by: lvliang-intel <liang1.lv@intel.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 14:37:32 -07:00
Nuno Campos
d4673a3507 openai[patch]: Update openai chat model to new base class interface (#19729) 2024-03-29 14:30:28 -07:00
harry-cohere
23fcc14650 cohere[patch]: support kwargs in with_structured_output (#19736)
**Description:** We'd like to support passing additional kwargs in
`with_structured_output`. I believe this is the accepted approach to
enable additional arguments on API calls.
2024-03-29 14:30:14 -07:00
Brace Sproul
ce0a588ae6 docs[minor]: Add chat model tabs to docs pages (#19589) 2024-03-29 14:23:55 -07:00
BeatrixCohere
bd02b83acd cohere[patch]: Allow overriding of the base URL in Cohere Client (#19766)
This PR adds the ability for a user to override the base API url for the
Cohere client for embeddings and chat llm.
2024-03-29 14:22:30 -07:00
Nisarg Trivedi
1252ccce6f text-splitters[minor]: Added Haskell support in langchain.text_splitter module (#16191)
- **Description:** Haskell language support added in text_splitter
module
  - **Dependencies:** No
  - **Twitter handle:** @nisargtr

If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @eyurtsev, @hwchase17.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 20:17:50 +00:00
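A minimal sketch, assuming the new language is exposed as `Language.HASKELL` on the existing `RecursiveCharacterTextSplitter.from_language` API:
```python
from langchain_text_splitters import Language, RecursiveCharacterTextSplitter

haskell_code = """
module Main where

main :: IO ()
main = putStrLn "Hello, Haskell!"
"""

splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.HASKELL, chunk_size=50, chunk_overlap=0
)
for doc in splitter.create_documents([haskell_code]):
    print(doc.page_content)
```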
Hrvoje Milković
b7344e3347 community[minor]: Infobip tool integration (#16805)
**Description:** Adding Tool that wraps Infobip API for sending sms or
emails and email validation.
**Dependencies:** None,
**Twitter handle:** @hmilkovic

Implementation:
```
libs/community/langchain_community/utilities/infobip.py
```

Integration tests:
```
libs/community/tests/integration_tests/utilities/test_infobip.py
```

Example notebook:
```
docs/docs/integrations/tools/infobip.ipynb
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 19:01:27 +00:00
Luka Krapic
727a2ea9f1 community[patch]: history size support for DynamoDBChatMessageHistory (#16794)
**Description:** This PR adds support for limiting the number of messages
preserved in a session history for DynamoDBChatMessageHistory.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 18:56:21 +00:00
Dt22
6dbf1a2de0 community[patch]: fix redis input type for index_schema field (#16874)
### Subject: Fix Type Misdeclaration for index_schema in redis/base.py

I noticed a type misdeclaration for the index_schema argument in the
redis/base.py file.

When following the instructions outlined in [Redis Custom Metadata
Indexing](https://python.langchain.com/docs/integrations/vectorstores/redis)
to create our own index_schema, it leads to a Pylance type error. <br/>
**The error message indicates that Dict[str, list[Dict[str, str]]] is
incompatible with the type Optional[Union[Dict[str, str], str,
os.PathLike]].**

```
index_schema = {
    "tag": [{"name": "credit_score"}],
    "text": [{"name": "user"}, {"name": "job"}],
    "numeric": [{"name": "age"}],
}

rds, keys = Redis.from_texts_return_keys(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="redis://localhost:6379",
    index_name="users_modified",
    index_schema=index_schema,  
)
```
Therefore, I have created this pull request to rectify the type
declaration problem.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 18:55:54 +00:00
morgana
074ad5095f community[patch]: mmr search for Rockset vectorstore integration (#16908)
- **Description:** Adding support for mmr search in the Rockset
vectorstore integration.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** `@_morgan_adams_`

---------

Co-authored-by: Rockset API Bot <admin@rockset.io>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-29 18:45:22 +00:00
shahrin014
f51e6a35ba community[patch]: OllamaEmbeddings - Pass headers to post request (#16880)
## Feature
- Set additional headers in constructor
- Headers will be sent in post request

This feature is useful when deploying Ollama on a cloud service such as
Hugging Face, which requires authentication tokens to be passed in the
request header.

## Tests
- Test if header is passed
- Test if header is not passed

Similar to https://github.com/langchain-ai/langchain/pull/15881

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 18:44:52 +00:00
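A minimal sketch of passing auth headers, assuming the new constructor argument is named `headers`; the host, model, and token are illustrative:
```python
from langchain_community.embeddings import OllamaEmbeddings

embeddings = OllamaEmbeddings(
    base_url="https://my-ollama-host.example.com",  # illustrative endpoint
    model="llama2",
    headers={"Authorization": "Bearer <token>"},    # sent with each POST request
)

vector = embeddings.embed_query("hello world")
```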
Lance Martin
e0f137dbe0 docs: Agentic and Self-RAG w/ LangGraph (#16910)
To do:
[ ] Add streaming
[ ] Move to LangGraph
2024-03-29 11:11:35 -07:00
Jan Chorowski
b8b42ccbc5 community[minor]: Pathway vectorstore(#14859)
- **Description:** Integration with the pathway.com data processing
pipeline, acting as an always-updated vector store
  - **Issue:** not applicable
- **Dependencies:** optional dependency on
[`pathway`](https://pypi.org/project/pathway/)
  - **Twitter handle:** pathway_com

The PR provides an integration with `pathway` that offers an easy-to-use,
always-updated vector store:

```python
import pathway as pw
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import PathwayVectorClient, PathwayVectorServer

data_sources = []
data_sources.append(
    pw.io.gdrive.read(object_id="17H4YpBOAKQzEJ93xmC2z170l0bP2npMy", service_user_credentials_file="credentials.json", with_metadata=True))

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
embeddings_model = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])
vector_server = PathwayVectorServer(
    *data_sources,
    embedder=embeddings_model,
    splitter=text_splitter,
)
vector_server.run_server(host="127.0.0.1", port="8765", threaded=True, with_cache=False)
client = PathwayVectorClient(
    host="127.0.0.1",
    port="8765",
)
query = "What is Pathway?"
docs = client.similarity_search(query)
```

The `PathwayVectorServer` builds a data processing pipeline which
continuously scans documents in a given source connector (Google Drive,
S3, ...) and builds a vector store. The `PathwayVectorClient` implements
LangChain's `VectorStore` interface and connects to the server to
retrieve documents.

---------

Co-authored-by: Mateusz Lewandowski <lewymati@users.noreply.github.com>
Co-authored-by: mlewandowski <mlewandowski@MacBook-Pro-mlewandowski.local>
Co-authored-by: Berke <berkecanrizai1@gmail.com>
Co-authored-by: Adrian Kosowski <adrian@pathway.com>
Co-authored-by: mlewandowski <mlewandowski@macbook-pro-mlewandowski.home>
Co-authored-by: berkecanrizai <63911408+berkecanrizai@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: mlewandowski <mlewandowski@MBPmlewandowski.ht.home>
Co-authored-by: Szymon Dudycz <szymond@pathway.com>
Co-authored-by: Szymon Dudycz <szymon.dudycz@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-29 10:50:39 -07:00
ccurme
0dbd5f5012 add script to check imports (#19611) 2024-03-29 13:30:20 -04:00
Arturs Konfino
2319212d54 community[patch]: avoid executing toolkit.get_context() when not necessary (#19762)
If `prompt` is passed into `create_sql_agent()`, then
`toolkit.get_context()` shouldn't be executed against the database
unless relevant prompt variables (`table_info` or `table_names`) are
present.
2024-03-29 16:42:21 +00:00
高璟琦
ec7a59c96c community[minor]: Add solar embedding (#19761)
Solar is a large language model developed by
[Upstage](https://upstage.ai/). It's a powerful and purpose-trained LLM.
You can try out the embedding service provided by Solar within this PR.

You may get **SOLAR_API_KEY** from
https://console.upstage.ai/services/embedding
You can refer to more details about accepted llm integration at
https://python.langchain.com/docs/integrations/llms/solar.
2024-03-29 09:36:05 -07:00
Tomaz Bratanic
dec00d3050 community[patch]: Add the ability to pass maps to neo4j retrieval query (#19758)
Makes it easier to flatten complex values to text, so you don't have to
use a lot of Cypher to do it.
2024-03-29 08:33:48 -07:00
Robby
f7e8a382cc community[minor]: add hugging face text-to-speech inference API (#18880)
Description: I implemented a tool to use the Hugging Face text-to-speech
inference API.

Issue: n/a

Dependencies: n/a

Twitter handle: No Twitter, but do have
[LinkedIn](https://www.linkedin.com/in/robby-horvath/) lol.

---------

Co-authored-by: Robby <h0rv@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-29 15:02:29 +00:00
DasDingoCodes
73eb3f8fd9 community[minor]: Implement DirectoryLoader lazy_load function (#19537)
Thank you for contributing to LangChain!

- [x] **PR title**: "community: Implement DirectoryLoader lazy_load
function"

- [x] **Description**: The `lazy_load` function of the `DirectoryLoader`
yields each document separately. If the given `loader_cls` of the
`DirectoryLoader` also implemented `lazy_load`, it will be used to yield
subdocuments of the file.

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access:
`libs/community/tests/unit_tests/document_loaders/test_directory_loader.py`
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory:
`docs/docs/integrations/document_loaders/directory.ipynb`


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-29 14:46:52 +00:00
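A short sketch of the new lazy path; the directory, glob, and loader class are illustrative:
```python
from langchain_community.document_loaders import DirectoryLoader, TextLoader

loader = DirectoryLoader(
    "./docs",          # illustrative directory
    glob="**/*.md",
    loader_cls=TextLoader,
)

# Documents are yielded one at a time instead of materializing the whole
# directory in memory; when loader_cls also implements lazy_load, its
# sub-documents are yielded lazily as well.
for doc in loader.lazy_load():
    print(doc.metadata["source"])
```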
Christophe Bornet
6b2b511f68 core[minor]: Add aformat_messages to FewShotChatMessagePromptTemplate and ChatPromptTemplate (#19648)
Needed since the example selector may use a vector store.
2024-03-29 10:31:32 -04:00
Leonid Ganeline
5f814820f6 docs: providers pinecone fix (#19737)
Current providers page use link to the old package.
- Fixed installation instructions
- Added a reference to the Pinecone retriever
2024-03-29 08:30:30 -04:00
Bob Lin
53a74ad12b docs: use markdown cell instead of code block (#19740)
I found that the code of async and async batch was divided into two
blocks:

<img width="823" alt="Screenshot 2024-03-29 at 7 45 59 AM"
src="https://github.com/langchain-ai/langchain/assets/10000925/0fa59d29-a692-4309-afb8-2260f03242ec">


so I changed it to unified.
2024-03-29 08:27:48 -04:00
Ekaterina Aidova
4ce36af335 docs: fix link in openvino integration doc (#19749)
- **Description:** fix incorrect link in docs
 - **Dependencies:** None
2024-03-29 12:24:07 +00:00
Jialei
f7c903e24a community[minor]: add support for Moonshot llm and chat model (#17100) 2024-03-29 08:54:23 +00:00
Gustavo Isturiz
824dccf5e2 docs: fixed xml URL on sitemap docs exmaple, issue #17236 (#17304) 2024-03-29 01:36:54 -07:00
Ethan Yang
7164015135 community[minor]: Add Openvino embedding support (#19632)
This PR is used to support both HF and BGE embeddings with openvino

---------

Co-authored-by: Alexander Kozlov <alexander.kozlov@intel.com>
2024-03-29 01:34:51 -07:00
Guangdong Liu
cd55d587c2 langchain[patch]: Upgrade openai's sdk and solve some interface adaptation problems. (#19548)
- **Issue:** close #19534
2024-03-29 01:25:17 -07:00
Kirushikesh DB
12861273e1 experimental[patch]: Removed 'SQLResults:' from the LLMResponse in SQLDatabaseChain (#17104)
**Description:** 
When using the SQLDatabaseChain with the Llama2-70b LLM and a SQLite
database, I was getting `Warning: You can only execute one statement at
a time.`.

```
from langchain.sql_database import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain

sql_database_path = '/dccstor/mmdataretrieval/mm_dataset/swimming_record/rag_data/swimmingdataset.db'
sql_db = get_database(sql_database_path)
db_chain = SQLDatabaseChain.from_llm(mistral, sql_db, verbose=True, callbacks = [callback_obj])
db_chain.invoke({
    "query": "What is the best time of Lance Larson in men's 100 meter butterfly competition?"
})
```
Error:
```
Warning                                   Traceback (most recent call last)
Cell In[31], line 3
      1 import langchain
      2 langchain.debug=False
----> 3 db_chain.invoke({
      4     "query": "What is the best time of Lance Larson in men's 100 meter butterfly competition?"
      5 })

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/langchain/chains/base.py:162, in Chain.invoke(self, input, config, **kwargs)
    160 except BaseException as e:
    161     run_manager.on_chain_error(e)
--> 162     raise e
    163 run_manager.on_chain_end(outputs)
    164 final_outputs: Dict[str, Any] = self.prep_outputs(
    165     inputs, outputs, return_only_outputs
    166 )

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
    149 run_manager = callback_manager.on_chain_start(
    150     dumpd(self),
    151     inputs,
    152     name=run_name,
    153 )
    154 try:
    155     outputs = (
--> 156         self._call(inputs, run_manager=run_manager)
    157         if new_arg_supported
    158         else self._call(inputs)
    159     )
    160 except BaseException as e:
    161     run_manager.on_chain_error(e)

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/langchain_experimental/sql/base.py:198, in SQLDatabaseChain._call(self, inputs, run_manager)
    194 except Exception as exc:
    195     # Append intermediate steps to exception, to aid in logging and later
    196     # improvement of few shot prompt seeds
    197     exc.intermediate_steps = intermediate_steps  # type: ignore
--> 198     raise exc

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/langchain_experimental/sql/base.py:143, in SQLDatabaseChain._call(self, inputs, run_manager)
    139     intermediate_steps.append(
    140         sql_cmd
    141     )  # output: sql generation (no checker)
    142     intermediate_steps.append({"sql_cmd": sql_cmd})  # input: sql exec
--> 143     result = self.database.run(sql_cmd)
    144     intermediate_steps.append(str(result))  # output: sql exec
    145 else:

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/langchain_community/utilities/sql_database.py:436, in SQLDatabase.run(self, command, fetch, include_columns)
    425 def run(
    426     self,
    427     command: str,
    428     fetch: Literal["all", "one"] = "all",
    429     include_columns: bool = False,
    430 ) -> str:
    431     """Execute a SQL command and return a string representing the results.
    432 
    433     If the statement returns rows, a string of the results is returned.
    434     If the statement returns no rows, an empty string is returned.
    435     """
--> 436     result = self._execute(command, fetch)
    438     res = [
    439         {
    440             column: truncate_word(value, length=self._max_string_length)
   (...)
    443         for r in result
    444     ]
    446     if not include_columns:

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/langchain_community/utilities/sql_database.py:413, in SQLDatabase._execute(self, command, fetch)
    410     elif self.dialect == "postgresql":  # postgresql
    411         connection.exec_driver_sql("SET search_path TO %s", (self._schema,))
--> 413 cursor = connection.execute(text(command))
    414 if cursor.returns_rows:
    415     if fetch == "all":

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1416, in Connection.execute(self, statement, parameters, execution_options)
   1414     raise exc.ObjectNotExecutableError(statement) from err
   1415 else:
-> 1416     return meth(
   1417         self,
   1418         distilled_parameters,
   1419         execution_options or NO_OPTIONS,
   1420     )

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/sqlalchemy/sql/elements.py:516, in ClauseElement._execute_on_connection(self, connection, distilled_params, execution_options)
    514     if TYPE_CHECKING:
    515         assert isinstance(self, Executable)
--> 516     return connection._execute_clauseelement(
    517         self, distilled_params, execution_options
    518     )
    519 else:
    520     raise exc.ObjectNotExecutableError(self)

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1639, in Connection._execute_clauseelement(self, elem, distilled_parameters, execution_options)
   1627 compiled_cache: Optional[CompiledCacheType] = execution_options.get(
   1628     "compiled_cache", self.engine._compiled_cache
   1629 )
   1631 compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
   1632     dialect=dialect,
   1633     compiled_cache=compiled_cache,
   (...)
   1637     linting=self.dialect.compiler_linting | compiler.WARN_LINTING,
   1638 )
-> 1639 ret = self._execute_context(
   1640     dialect,
   1641     dialect.execution_ctx_cls._init_compiled,
   1642     compiled_sql,
   1643     distilled_parameters,
   1644     execution_options,
   1645     compiled_sql,
   1646     distilled_parameters,
   1647     elem,
   1648     extracted_params,
   1649     cache_hit=cache_hit,
   1650 )
   1651 if has_events:
   1652     self.dispatch.after_execute(
   1653         self,
   1654         elem,
   (...)
   1658         ret,
   1659     )

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1848, in Connection._execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw)
   1843     return self._exec_insertmany_context(
   1844         dialect,
   1845         context,
   1846     )
   1847 else:
-> 1848     return self._exec_single_context(
   1849         dialect, context, statement, parameters
   1850     )

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1988, in Connection._exec_single_context(self, dialect, context, statement, parameters)
   1985     result = context._setup_result_proxy()
   1987 except BaseException as e:
-> 1988     self._handle_dbapi_exception(
   1989         e, str_statement, effective_parameters, cursor, context
   1990     )
   1992 return result

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/sqlalchemy/engine/base.py:2346, in Connection._handle_dbapi_exception(self, e, statement, parameters, cursor, context, is_sub_exec)
   2344     else:
   2345         assert exc_info[1] is not None
-> 2346         raise exc_info[1].with_traceback(exc_info[2])
   2347 finally:
   2348     del self._reentrant_error

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/sqlalchemy/engine/base.py:1969, in Connection._exec_single_context(self, dialect, context, statement, parameters)
   1967                 break
   1968     if not evt_handled:
-> 1969         self.dialect.do_execute(
   1970             cursor, str_statement, effective_parameters, context
   1971         )
   1973 if self._has_events or self.engine._has_events:
   1974     self.dispatch.after_cursor_execute(
   1975         self,
   1976         cursor,
   (...)
   1980         context.executemany,
   1981     )

File ~/.conda/envs/guardrails1/lib/python3.9/site-packages/sqlalchemy/engine/default.py:922, in DefaultDialect.do_execute(self, cursor, statement, parameters, context)
    921 def do_execute(self, cursor, statement, parameters, context=None):
--> 922     cursor.execute(statement, parameters)

Warning: You can only execute one statement at a time.
```
**Issue:** 
The error occurs because, when generating the SQL query, the llm_inputs
include the stop string "\nSQLResult:", so for this user query
the LLM-generated response is **SELECT Time FROM men_butterfly_100m
WHERE Swimmer = 'Lance Larson';\nSQLResult:**. The SQLResult suffix must
be removed from the LLM response before executing it against the
database.

```
llm_inputs = {
            "input": input_text,
            "top_k": str(self.top_k),
            "dialect": self.database.dialect,
            "table_info": table_info,
            "stop": ["\nSQLResult:"],
        }

sql_cmd = self.llm_chain.predict(
                callbacks=_run_manager.get_child(),
                **llm_inputs,
            ).strip()

if SQL_RESULT in sql_cmd:
    sql_cmd = sql_cmd.split(SQL_RESULT)[0].strip()
result = self.database.run(sql_cmd)
```


---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-29 01:22:35 -07:00
T Cramer
540ebf35a9 community[patch]: Add explicit error message to Bedrock error output. (#17328)
- **Description:** Propagate Bedrock errors into Langchain explicitly.
Use-case: unset region error is hidden behind 'Could not load
credentials...' message
- **Issue:**
[17654](https://github.com/langchain-ai/langchain/issues/17654)
  - **Dependencies:** None

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-29 03:07:33 +00:00
Marcus Virginia
69bb96c80f community[patch]: surrealdb handle for empty metadata and allow collection names with complex characters (#17374)
- **Description:** Handle for empty metadata and allow collection names
with complex characters
  - **Issue:** #17057
  - **Dependencies:** `surrealdb`

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-29 01:04:27 +00:00
ale-delfino
0df76bee37 core[patch]: XML parser to cover the case when the xml only contains the root level tag (#17456)
Description: Fix xml parser to handle strings that only contain the root
tag
Issue: N/A
Dependencies: None
Twitter handle: N/A

A valid XML text can contain only the root-level tag. Example: <body>
  Some text here
</body>
The example above is a valid XML string. Parsed with the current
implementation, the result is {"body": []}. This fix checks whether the
root-level text contains any non-whitespace character and, if so,
returns {root.tag: root.text}, so the text above is correctly parsed as
{"body": "Some text here"}.
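
A minimal sketch of the behaviour this fix targets, using the standard
langchain-core import (the exact whitespace handling in the output is an
assumption):

```python
from langchain_core.output_parsers import XMLOutputParser

parser = XMLOutputParser()
# Before this fix a root-only document parsed to {"body": []}
result = parser.parse("<body>\n  Some text here\n</body>")
print(result)  # roughly {'body': 'Some text here'}
```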

@ale-delfino

Thank you for contributing to LangChain!

Checklist:

- [x] PR title: Please title your PR "package: description", where
"package" is whichever of langchain, community, core, experimental, etc.
is being modified. Use "docs: ..." for purely docs changes, "templates:
..." for template changes, "infra: ..." for CI changes.
  - Example: "community: add foobar LLM"
- [x] PR message: **Delete this entire template message** and replace it
with the following bulleted list
    - **Description:** a description of the change
    - **Issue:** the issue # it fixes, if applicable
    - **Dependencies:** any dependencies required for this change
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!
- [x] Pass lint and test: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified to check that you're
passing lint and testing. See contribution guidelines for more
information on how to write/run tests, lint, etc:
https://python.langchain.com/docs/contributing/
- [x] Add tests and docs: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
@baskaryan, @efriis, @eyurtsev, @hwchase17.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-29 00:55:23 +00:00
kYLe
124ab79c23 community[minor]: Add Anyscale embedding support (#17605)
**Description:** Add embedding model support for Anyscale Endpoint
**Dependencies:** openai

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 00:53:53 +00:00
Lance Martin
12843f292f community[patch]: llama cpp embeddings reset default n_batch (#17594)
When testing Nomic embeddings --
```
from langchain_community.embeddings import LlamaCppEmbeddings
embd_model_path = "/Users/rlm/Desktop/Code/llama.cpp/models/nomic-embd/nomic-embed-text-v1.Q4_K_S.gguf"
embd_lc = LlamaCppEmbeddings(model_path=embd_model_path)
embedding_lc = embd_lc.embed_query(query)
```

We were seeing this error for strings > a certain size -- 
```
File ~/miniforge3/envs/llama2/lib/python3.9/site-packages/llama_cpp/llama.py:827, in Llama.embed(self, input, normalize, truncate, return_count)
    824     s_sizes = []
    826 # add to batch
--> 827 self._batch.add_sequence(tokens, len(s_sizes), False)
    828 t_batch += n_tokens
    829 s_sizes.append(n_tokens)

File ~/miniforge3/envs/llama2/lib/python3.9/site-packages/llama_cpp/_internals.py:542, in _LlamaBatch.add_sequence(self, batch, seq_id, logits_all)
    540 self.batch.token[j] = batch[i]
    541 self.batch.pos[j] = i
--> 542 self.batch.seq_id[j][0] = seq_id
    543 self.batch.n_seq_id[j] = 1
    544 self.batch.logits[j] = logits_all

ValueError: NULL pointer access
```

The default `n_batch` of llama-cpp-python's Llama is `512` but we were
explicitly setting it to `8`.
 
These need to be set to equal values for embedding models. 
* The embedding.cpp example has an assertion to make sure these are
always equal.
* Apparently this is not being done properly in llama-cpp-python.

With `n_batch` set to 8, if more than 8 tokens are passed the batch runs
out of space and it crashes.

This also explains why the CPU compute buffer size was small:

raw client with default `n_batch=512`
```
llama_new_context_with_model:        CPU input buffer size   =     3.51 MiB
llama_new_context_with_model:        CPU compute buffer size =    21.00 MiB
```
langchain with `n_batch=8`
```
llama_new_context_with_model:        CPU input buffer size   =     0.04 MiB
llama_new_context_with_model:        CPU compute buffer size =     0.33 MiB
```

We can work around this by passing `n_batch=512`, but this will not be
obvious to some users:
```
    embedding = LlamaCppEmbeddings(model_path=embd_model_path,
                                   n_batch=512)
```

From discussion w/ @cebtenzzre. Related:

https://github.com/abetlen/llama-cpp-python/issues/1189

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 00:47:22 +00:00
Zijian Han
8e976545f3 community[patch]: support OpenAI whisper base url (#17695)
**Description:** The base URL for OpenAI is retrieved from the
environment variable "OPENAI_BASE_URL", whereas for langchain it is
obtained from "OPENAI_API_BASE". By adding `base_url =
os.environ.get("OPENAI_API_BASE")`, the OpenAI proxy can execute
correctly.
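
For reference, the environment variable LangChain reads (the URL is a
placeholder):

```python
import os

# LangChain reads the base URL from OPENAI_API_BASE,
# while the raw OpenAI SDK reads OPENAI_BASE_URL.
os.environ["OPENAI_API_BASE"] = "https://my-openai-proxy.example.com/v1"
```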

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 00:35:27 +00:00
Paulo Nascimento
44a3484503 community[patch]: add NotebookLoader unit test (#17721)
Thank you for contributing to LangChain!

- **Description:** added unit tests for NotebookLoader. Linked PR:
https://github.com/langchain-ai/langchain/pull/17614
- **Issue:**
[#17614](https://github.com/langchain-ai/langchain/pull/17614)
    - **Twitter handle:** @paulodoestech
- [x] Pass lint and test: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified to check that you're
passing lint and testing. See contribution guidelines for more
information on how to write/run tests, lint, etc:
https://python.langchain.com/docs/contributing/
- [x] Add tests and docs: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: lachiewalker <lachiewalker1@hotmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 00:27:46 +00:00
Paulo Nascimento
4c3a67122f community[patch]: add Integration for OpenAI image gen with v1 sdk (#17771)
**Description:** Created a Langchain Tool for OpenAI DALLE Image
Generation.
**Issue:**
[#15901](https://github.com/langchain-ai/langchain/issues/15901)
**Dependencies:** n/a
**Twitter handle:** @paulodoestech

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 00:23:14 +00:00
Kaixin Yang
a8104ea8e9 openai[patch]: add checking codes for calling AI model get error (#17909)
**Description:** add error checks when calling the AI model in
chat_models/base.py and llms/base.py.
**Issue**: Sometimes the AI model call returns an error, and we should
raise it. Otherwise the next line, 'choices.extend(response["choices"])',
throws a "TypeError: 'NoneType' object is not iterable" that masks the
true error, because 'response["choices"]' is None.
**Dependencies**: None
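
A hypothetical, self-contained sketch of the guard described above; the
names are illustrative, the real change lives in chat_models/base.py and
llms/base.py:

```python
from typing import Any, Dict, List


def extend_choices(choices: List[dict], response: Dict[str, Any]) -> None:
    """Raise the provider's error instead of masking it with a TypeError."""
    if response.get("error"):
        raise ValueError(response["error"])
    choices.extend(response["choices"])
```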

---------

Co-authored-by: yangkx <yangkx@asiainfo-int.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-29 00:17:32 +00:00
Vincent Chen
833d61adb3 docs: update Together README.md (#18004)
## PR message
**Description:** This PR adds a README file for the Together API in the
`libs/partners` folder of this repository. The README includes:
 - A brief description of the package
 - Installation instructions and class introductions
 - Simple usage examples

**Issue:** #17545 

This PR only contains document changes.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 00:02:32 +00:00
Jiaming
3d3cc71287 community[patch]: fix bugs for bilibili Loader (#18036)
- **Description:** 
1. Fix the BiliBiliLoader so it can receive cookie parameters; it
requires 3 other parameters to run. The change is backward compatible.
  2. Add tests.
  3. Add an example in the docs.

- **Issue:** [#14213]

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-28 16:39:38 -07:00
Ethan Knights
1ef3fa0411 docs: improve readability of Langchain Expression Language get_started.ipynb (#18157)
**Description:** A few grammatical changes to improve readability of the
LCEL .ipynb and tidy some null characters.
**Issue:** N/A

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-28 23:38:30 +00:00
Sachin Paryani
25c9f3d1d1 community[patch]: Support Streaming in Azure Machine Learning (#18246)
- [x] **PR title**: "community: Support streaming in Azure ML and few
naming changes"

- [x] **PR message**:
- **Description:** Added streaming support for azureml_endpoint.
Also renamed AzureMLEndpointApiType.realtime to
AzureMLEndpointApiType.dedicated, added the new classes
CustomOpenAIChatContentFormatter and CustomOpenAIContentFormatter, and
updated LlamaChatContentFormatter and LlamaContentFormatter to show a
deprecation warning when instantiated.

---------

Co-authored-by: Sachin Paryani <saparan@microsoft.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 23:38:20 +00:00
xiaohuanshu
ecb11a4a32 langchain[patch]: fix BaseChatMemory get output data error with extra key (#18117)
**Description:** At times, BaseChatMemory._get_input_output may acquire
some extra keys such as 'intermediate_steps' (agent_executor with
return_intermediate_steps set to True) and 'messages'
(agent_executor.iter with memory). In these instances, _get_input_output
can raise an error due to the presence of multiple keys. The 'output'
field should be used as the default field in these cases.
**Issue:** #16791
2024-03-28 16:38:08 -07:00
Isaac Francisco
f5e84c8858 docs: fixing markdown for tips (#18199)
Previous markdown code was not working as intended, new code should add
green box around the tip so it is highlighted

Co-authored-by: Hershenson, Isaac (Extern) <isaac.hershenson.extern@bayer04.de>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 23:37:37 +00:00
Hayden Wolff
85deee521a docs: Nvidia Riva Runnables Documentation (#18237)
- **Description:** Documents how to use the Riva runnables to add
streamed automatic-speech-recognition (ASR) and text-to-speech (TTS) to
chains.
  - **Issue:** None
  - **Dependencies:** None
  - **Twitter handle:** @HaydenWolff1

---------

Co-authored-by: Hayden Wolff <hwolff@Haydens-Laptop.local>
Co-authored-by: Hayden Wolff <hwolff@MacBook-Pro.local>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 23:35:00 +00:00
Victor Adan
afa2d85405 community[patch]: Added missing from_documents method to KNNRetriever. (#18411)
- Description: Added missing `from_documents` method to `KNNRetriever`,
providing the ability to supply metadata to LangChain `Document`s, and
to give it parity to the other retrievers, which do have
`from_documents` (see the sketch below).
- Issue: None
- Dependencies: None
- Twitter handle: None
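
A small sketch of the new constructor, assuming the usual
(documents, embeddings) signature shared with the other retrievers and the
FakeEmbeddings test helper:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.retrievers import KNNRetriever
from langchain_core.documents import Document

docs = [
    Document(page_content="KNNRetriever ranks documents by embedding distance.",
             metadata={"source": "notes"}),
    Document(page_content="Other retrievers already expose from_documents.",
             metadata={"source": "notes"}),
]
retriever = KNNRetriever.from_documents(docs, FakeEmbeddings(size=256))
print(retriever.invoke("embedding distance")[0].metadata)
```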

Co-authored-by: Victor Adan <vadan@netroadshow.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-28 23:18:50 +00:00
Smit Parmar
dfc4177b50 community[patch]: mypy ignore fix (#18483)
Relates to #17048 
Description : Applied fix to dynamodb and elasticsearch file.

Error was : `Cannot override writeable attribute with read-only
property`
Suggestion:
instead of adding 
```
@messages.setter
def messages(self, messages: List[BaseMessage]) -> None:
    raise NotImplementedError("Use add_messages instead")
```

we can change base class property
`messages: List[BaseMessage]`
to
```
@property
def messages(self) -> List[BaseMessage]:...
```

then we don't need to add `@messages.setter` in all child classes.
2024-03-28 15:36:53 -07:00
aditya thomas
dc9e9a66db docs: update docstring of the ChatAnthropic and AnthropicLLM classes (#18649)
**Description:** Update docstring of the ChatAnthropic and AnthropicLLM
classes
**Issue:** Not applicable
**Dependencies:** None
2024-03-28 15:33:54 -07:00
Luca Dorigo
f19229c564 core[patch]: fix beta, deprecated typing (#18877)
**Description:** 

While not technically incorrect, the TypeVar used for the `@beta`
decorator prevented pyright (and thus most vscode users) from correctly
seeing the types of functions/classes decorated with `@beta`.

This is in part due to a small bug in pyright
(https://github.com/microsoft/pyright/issues/7448 ) - however, the
`Type` bound in the typevar `C = TypeVar("C", Type, Callable)` is not
doing anything - classes are `Callables` by default, so by my
understanding binding to `Type` does not actually provide any more
safety - the modified annotation still works correctly for both
functions, properties, and classes.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 22:33:43 +00:00
aditya thomas
263ee78886 core[runnables]: docstring for class RunnableSerializable, method configurable_fields (#19722)
**Description:** Update to the docstring for class RunnableSerializable,
method configurable_fields
**Issue:** [Add in code documentation to core Runnable methods
#18804](https://github.com/langchain-ai/langchain/issues/18804)
**Dependencies:** None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-03-28 18:15:18 -04:00
HuangZiy
e1f10a697e openai[patch]: perform judgment processing on chat model streaming delta (#18983)
**PR title:** partners: openai chat model
**PR message:** perform judgment processing on chat model streaming
delta
Closes #18977

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-28 14:46:27 -07:00
wulixuan
b7c8bc8268 community[patch]: fix yuan2 errors in LLMs (#19004)
1. Fix yuan2 errors when invoking Yuan2.
2. Update tests.
2024-03-28 14:37:44 -07:00
Bob Lin
aba4bd0d13 docs: Add async batch case (#19686) 2024-03-28 14:00:46 -07:00
aditya thomas
ec4dcfca7f core[runnables]: docstring of class RunnableSerializable, method configurable_alternatives (#19724)
**Description:** Update to the docstring for class RunnableSerializable,
method configurable_alternatives
**Issue:** [Add in code documentation to core Runnable methods
#18804](https://github.com/langchain-ai/langchain/issues/18804)
**Dependencies:** None

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-03-28 17:00:08 -04:00
Davide Menini
824dbc49ee langchain[patch]: add template_tool_response arg to create_json_chat (#19696)
In this small PR I added the `template_tool_response` arg to the
`create_json_chat` function, so that users can customize this prompt if
needed.
Thanks for your reviews!

---------

Co-authored-by: taamedag <Davide.Menini@swisscom.com>
2024-03-28 13:59:54 -07:00
高远
688ca48019 community[patch]: Adding validation when vector does not exist (#19698)
Adding validation when vector does not exist

Co-authored-by: gaoyuan <gaoyuan.20001218@bytedance.com>
2024-03-28 13:58:23 -07:00
Erick Friis
f55b11fb73 infra: Revert run partner CI on core PRs (#19733)
Reverts parts of langchain-ai/langchain#19688
2024-03-28 20:45:59 +00:00
Alessandro Rossi
665f15bd48 docs: fix typos and make quickstart more readable (#19712)
Description: minor docs changes to make it more readable.
Issue: N/A
Dependencies: N/A
Twitter handle: _kubealex
2024-03-28 20:10:32 +00:00
standby24x7
36090c84f2 docs: Update function "run" to "invoke" in llm_symbolic_math.ipynb (#19713)
This patch updates multiple function "run" to "invoke" in
llm_symbolic_math.ipynb.

Without this patch, you see following message.
The function `run` was deprecated in LangChain 0.1.0
 and will be removed in 0.2.0. Use invoke instead.

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2024-03-28 13:08:22 -07:00
Chaunte W. Lacewell
4a49fc5a95 community[patch]: Fix bug in vdms (#19728)
**Description:** Fix embedding check in vdms
**Contribution maintainer:** [@cwlacewe](https://github.com/cwlacewe)
2024-03-28 12:54:24 -07:00
高璟琦
75173d31db community[minor]: Add solar model chat model (#18556)
Add our solar chat models, available model choices:
* solar-1-mini-chat
* solar-1-mini-translate-enko
* solar-1-mini-translate-koen

More documents and pricing can be found at
https://console.upstage.ai/services/solar.

The references to our solar model can be found at
* https://arxiv.org/abs/2402.17032

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 12:31:11 -07:00
Erick Friis
e576d6c6b4 cohere[patch]: release 0.1.0rc1 (rc1-2 never released) (#19731) 2024-03-28 19:12:22 +00:00
harry-cohere
ea57050122 cohere: add with_structured_output to ChatCohere (#19730)
**Description:** Adds support for `with_structured_output` to Cohere,
which supports single function calling.
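
A minimal sketch, assuming a COHERE_API_KEY is set and a tool-capable
model such as command-r:

```python
from langchain_cohere import ChatCohere
from langchain_core.pydantic_v1 import BaseModel


class Person(BaseModel):
    name: str
    age: int


llm = ChatCohere(model="command-r")
structured_llm = llm.with_structured_output(Person)
print(structured_llm.invoke("Anna is 29 years old."))
# -> Person(name='Anna', age=29)
```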

---------

Co-authored-by: BeatrixCohere <128378696+BeatrixCohere@users.noreply.github.com>
2024-03-28 12:09:25 -07:00
Guangdong Liu
0571f886d1 core[patch]: Fix jsonOutputParser fails if a json value contains ``` inside it. (#19717)
- **Issue:** fix #19646 
- @baskaryan, @eyurtsev PTAL
2024-03-28 12:01:09 -07:00
Davide Menini
f7042321f1 community[patch]: gather token usage info in BedrockChat during generation (#19127)
This PR allows to calculate token usage for prompts and completion
directly in the generation method of BedrockChat. The token usage
details are then returned together with the generations, so that other
downstream tasks can access them easily.

This allows to define a callback for tokens tracking and cost
calculation, similarly to what happens with OpenAI (see
[OpenAICallbackHandler](https://api.python.langchain.com/en/latest/_modules/langchain_community/callbacks/openai_info.html#OpenAICallbackHandler).
I plan on adding a BedrockCallbackHandler later.
Right now keeping track of tokens in the callback is already possible,
but it requires passing the llm, as done here:
https://how.wtf/how-to-count-amazon-bedrock-anthropic-tokens-with-langchain.html.
However, I find the approach of this PR cleaner.

Thanks for your reviews. FYI @baskaryan, @hwchase17

---------

Co-authored-by: taamedag <Davide.Menini@swisscom.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 18:58:46 +00:00
ligang-super
a662468dde community[patch]: Fix the error of Baidu Qianfan not passing the stop parameter (#18666)
- [x] **PR title**: "community: fix baidu qianfan missing stop
parameter"
- [x] **PR message**:
- **Description:** Baidu Qianfan lost the stop parameter when requesting
the service because it was extracted from kwargs. This bug can cause the
agent to receive incorrect results.

---------

Co-authored-by: ligang33 <ligang33@baidu.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 18:21:49 +00:00
BeatrixCohere
d1a2e194c3 cohere[patch]: misc fixs tool use agent and cohere chat (#19705)
Bug fixes in this PR:
* allows for other params such as "message" not just the input param to
the prompt for the cohere tools agent
* fixes to documents kwarg from messages
* fixes to tool_calls API call

---------

Co-authored-by: Harry M <127103098+harry-cohere@users.noreply.github.com>
2024-03-28 10:19:38 -07:00
ccurme
b35e68c41f docs: update use_cases/question_answering/chat_history (#19349)
Update following https://github.com/langchain-ai/langchain/issues/19344
2024-03-28 12:51:01 -04:00
Erick Friis
8c2ed85a45 core[patch], infra: release 0.1.36, run partner CI on core PRs (#19688) 2024-03-28 08:55:10 -07:00
Erick Friis
5327bc9ec4 elasticsearch[patch]: move to repo (#19620) 2024-03-28 08:54:57 -07:00
Nilanjan De
239dd7c0c0 langchain[patch]: Use map() and avoid "ValueError: max() arg is an empty sequence" in MergerRetriever (#18679)
- **Issue:** When passing an empty list to MergerRetriever it fails with
error: ValueError: max() arg is an empty sequence

- **Description:** We have a use case where we dynamically select
retrievers and use MergerRetriever for merging the output of the
retrievers. We faced this issue when the retriever_docs list is empty.
Adding a default 0 for cases when retriever_docs is an empty list to
avoid "ValueError: max() arg is an empty sequence". Also, changed to use
map() which is more than twice as fast compared to the current
implementation.
```
import timeit
# Sample retriever_docs with varying lengths of sublists
retriever_docs = [[i for i in range(j)] for j in range(1, 1000)]
# First code snippet
code1 = '''
max_docs = max(len(docs) for docs in retriever_docs)
'''
# Second code snippet
code2 = '''
max_docs = max(map(len, retriever_docs), default=0)
'''
# Benchmarking
time1 = timeit.timeit(stmt=code1, globals=globals(), number=10000)
time2 = timeit.timeit(stmt=code2, globals=globals(), number=10000)
# Output
print(f"Execution time for code snippet 1: {time1} seconds")
print(f"Execution time for code snippet 2: {time2} seconds")
```

- **Dependencies:** none
2024-03-27 23:52:57 -07:00
aditya thomas
4cd38fe89f docs: update docstring of the ChatGroq class (#18645)
**Description:** Update docstring of the ChatGroq class
**Issue:** Not applicable
**Dependencies:** None
2024-03-27 23:46:52 -07:00
Jaid
e4d7b1a482 voyageai[patch]: top level reranker import (#19645)
The previous version didn't have the Voyage reranker in the init file.

- [ ] **PR title**: langchain_voyageai reranker is not working

- [ ] **PR message**: 
    - **Description:** This fix lets you run the reranker from voyageai
    - **Issue:** Was not able to run the reranker from voyageai

 @efriis
2024-03-28 06:37:55 +00:00
Xinwei Xiong
26eed70c11 infra: Optimize Makefile for Better Usability and Maintenance (#18859)
**Previous screenshots:**


![image](https://github.com/langchain-ai/langchain/assets/86140903/e2f326e3-4d97-4b22-aacb-e789a9d815e4)

**Current screenshot:**

![image](https://github.com/langchain-ai/langchain/assets/86140903/bd8a3ea7-1b8a-4803-9168-df45f6fa4893)
2024-03-27 23:37:39 -07:00
Juan Jose Miguel Ovalle Villamil
51baa1b5cf langchain[patch]: fix-cohere-reranker-rerank-method with cohere v5 (#19486)
#### Description
Fixed the following error with `rerank` method from `CohereRerank`:
```
File .venv/lib/python3.11/site-packages/langchain/retrievers/document_compressors/cohere_rerank.py:79
---> 79 results = self.client.rerank(
     80     query, docs, model, top_n=top_n, max_chunks_per_doc=max_chunks_per_doc
     81 )
     82 result_dicts = []
     83 for res in results.results:

TypeError: BaseCohere.rerank() takes 1 positional argument but 4 positional arguments (and 2 keyword-only arguments) were given
```
This was easily fixed going from this:
```
   def rerank(
        self,
        documents: Sequence[Union[str, Document, dict]],
        query: str,
        *,
        model: Optional[str] = None,
        top_n: Optional[int] = -1,
        max_chunks_per_doc: Optional[int] = None,
    ) -> List[Dict[str, Any]]:
         ...
        if len(documents) == 0:  # to avoid empty api call
            return []
        docs = [
            doc.page_content if isinstance(doc, Document) else doc for doc in documents
        ]
        model = model or self.model
        top_n = top_n if (top_n is None or top_n > 0) else self.top_n
        results = self.client.rerank(
            query, docs, model, top_n=top_n, max_chunks_per_doc=max_chunks_per_doc
        )
        result_dicts = []
        for res in results:
            result_dicts.append(
                {"index": res.index, "relevance_score": res.relevance_score}
            )
        return result_dicts
```
to this:
```
    def rerank(
        self,
        documents: Sequence[Union[str, Document, dict]],
        query: str,
        *,
        model: Optional[str] = None,
        top_n: Optional[int] = -1,
        max_chunks_per_doc: Optional[int] = None,
    ) -> List[Dict[str, Any]]:
         ...
        if len(documents) == 0:  # to avoid empty api call
            return []
        docs = [
            doc.page_content if isinstance(doc, Document) else doc for doc in documents
        ]
        model = model or self.model
        top_n = top_n if (top_n is None or top_n > 0) else self.top_n
        results = self.client.rerank(
            query=query, documents=docs, model=model, top_n=top_n, max_chunks_per_doc=max_chunks_per_doc <-------------
        )
        result_dicts = []
        for res in results.results:  <-------------
            result_dicts.append(
                {"index": res.index, "relevance_score": res.relevance_score}
            )
        return result_dicts
```
#### Unit & Integration tests
I added a unit test to check the behaviour of `rerank`. Also fixed the
original integration test which was failing.

#### Format & Linting
Everything worked properly with `make lint_diff`, `make format_diff` and
`make format`. However I noticed an error coming from other part of the
library when doing `make lint`:

```
(langchain-py3.9) ➜  langchain git:(master) make format
[ "." = "" ] || poetry run ruff format .
1636 files left unchanged
[ "." = "" ] || poetry run ruff --select I --fix .
(langchain-py3.9) ➜  langchain git:(master) make lint
./scripts/check_pydantic.sh .
./scripts/lint_imports.sh
poetry run ruff .
[ "." = "" ] || poetry run ruff format . --diff
1636 files already formatted
[ "." = "" ] || poetry run ruff --select I .
[ "." = "" ] || mkdir -p .mypy_cache && poetry run mypy . --cache-dir .mypy_cache
langchain/agents/openai_assistant/base.py:252: error: Argument "file_ids" to "create" of "Assistants" has incompatible type "Optional[Any]"; expected "Union[list[str], NotGiven]"  [arg-type]
langchain/agents/openai_assistant/base.py:374: error: Argument "file_ids" to "create" of "AsyncAssistants" has incompatible type "Optional[Any]"; expected "Union[list[str], NotGiven]"  [arg-type]
Found 2 errors in 1 file (checked 1634 source files)
make: *** [Makefile:65: lint] Error 1
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 06:32:03 +00:00
Shuqian
332996b4b2 openai[patch]: fix ChatOpenAI model's openai proxy (#19559)
Due to changes in the OpenAI SDK, the previous method of setting the
OpenAI proxy in ChatOpenAI no longer works. This PR fixes this issue,
making the previous way of setting the OpenAI proxy in ChatOpenAI
effective again.
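
A minimal sketch of the restored setting, assuming an OPENAI_API_KEY is
set (the proxy URL is a placeholder):

```python
from langchain_openai import ChatOpenAI

# openai_proxy configures the HTTP client used for requests
llm = ChatOpenAI(model="gpt-3.5-turbo", openai_proxy="http://127.0.0.1:7890")
llm.invoke("hello")
```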

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-27 23:16:55 -07:00
Bagatur
b15c7fdde6 anthropic[patch]: fix response metadata type (#19683) 2024-03-27 23:16:26 -07:00
kaijietti
9c4b6dc979 community[patch]: fix bug in cohere that async for a coroutine in ChatCohere (#19381)
Without `await`, the `stream` returned from the `async_client` is
actually a coroutine, which could not be used in `async for`.
2024-03-27 21:34:46 -07:00
Christian Galo
1adaa3c662 community[minor]: Update Azure Cognitive Services to Azure AI Services (#19488)
This is a follow up to #18371. These are the changes:
- New **Azure AI Services** toolkit and tools to replace those of
**Azure Cognitive Services**.
- Updated documentation for Microsoft platform.
- The image analysis tool has been rewritten to use the new package
`azure-ai-vision-imageanalysis`, doing a proper replacement of
`azure-ai-vision`.

These changes:
- Update outdated naming from "Azure Cognitive Services" to "Azure AI
Services".
- Update documentation to use non-deprecated methods to create and use
agents.
- Removes need to depend on yanked python package (`azure-ai-vision`)

There is one new dependency that is needed as a replacement to
`azure-ai-vision`:
- `azure-ai-vision-imageanalysis`. This is optional and declared within
a function.

There is a new `azure_ai_services.ipynb` notebook showing usage; Changes
have been linted and formatted.

I am leaving the actions of adding deprecation notices and future
removal of Azure Cognitive Services up to the LangChain team, as I am
not sure what the current practice around this is.

---

If this PR makes it, my handle is  @galo@mastodon.social

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
2024-03-28 03:19:02 +00:00
Shengsheng Huang
ac1dd8ad94 community[minor]: migrate bigdl-llm to ipex-llm (#19518)
- **Description**: `bigdl-llm` library has been renamed to
[`ipex-llm`](https://github.com/intel-analytics/ipex-llm). This PR
migrates the `bigdl-llm` integration to `ipex-llm` .
- **Issue**: N/A. The original PR of `bigdl-llm` is
https://github.com/langchain-ai/langchain/pull/17953
- **Dependencies**: `ipex-llm` library
- **Contribution maintainer**: @shane-huang

Updated doc:   docs/docs/integrations/llms/ipex_llm.ipynb
Updated test:
libs/community/tests/integration_tests/llms/test_ipex_llm.py
2024-03-27 20:12:59 -07:00
Chaunte W. Lacewell
a31f692f4e community[minor]: Add VDMS vectorstore (#19551)
- **Description:** Add support for Intel Lab's [Visual Data Management
System (VDMS)](https://github.com/IntelLabs/vdms) as a vector store
- **Dependencies:** `vdms` library which requires protobuf = "4.24.2".
There is a conflict with dashvector in `langchain` package but conflict
is resolved in `community`.
- **Contribution maintainer:** [@cwlacewe](https://github.com/cwlacewe)
- **Added tests:**
libs/community/tests/integration_tests/vectorstores/test_vdms.py
- **Added docs:** docs/docs/integrations/vectorstores/vdms.ipynb
- **Added cookbook:** cookbook/multi_modal_RAG_vdms.ipynb

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 03:12:11 +00:00
William FH
b7b62e29fb community[patch], mongodb[patch]: Stop spamming SIMD import warnings (#19531)
If you use an embedding dist function in an eval loop, you get warned
every time. Would prefer to just check once and forget about it.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 03:11:02 +00:00
Tomaz Bratanic
b04e663426 experimental[patch]: Flatten relationships in LLM graph transformer (#19642) 2024-03-27 19:35:34 -07:00
billytrend-cohere
36abb5dd41 cohere[patch]: Fix positional argument (#19678)
cohere: Fix positional argument

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-28 02:26:08 +00:00
Nuno Campos
fdfb51ad8d core: Two updates to chat model interface (#19684)
- .stream() and .astream() call on_llm_new_token, removing the need for
subclasses to do so. Backwards compatible because now we don't pass
run_manager into ._stream and ._astream
- .generate() and .agenerate() now handle `stream: bool` kwarg for
_generate and _agenerate. Subclasses handle this arg by delegating to
._stream(), now one less thing they need to do. Backwards compat because
this is an optional arg that we now never pass to the subclasses
- .generate() and .agenerate() now inspect callback handlers to decide
on a default value for stream:bool if not passed in. This auto enables
streaming when using astream_events and astream_log
- as a result of these three changes any usage of .astream_events and
.astream_log should now yield chat model stream events (see the sketch
after this list)
- In future PRs we can update all subclasses to reflect these two things
now handled by base class, but in meantime all will continue to work
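
A short sketch of the effect on astream_events, assuming any chat model
built on the updated base class (ChatOpenAI is used only for illustration):

```python
import asyncio

from langchain_openai import ChatOpenAI


async def main() -> None:
    llm = ChatOpenAI()
    # Chat model stream events are now emitted without per-subclass changes.
    async for event in llm.astream_events("hi", version="v1"):
        if event["event"] == "on_chat_model_stream":
            print(event["data"]["chunk"].content, end="", flush=True)


asyncio.run(main())
```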
2024-03-27 18:45:01 -07:00
harry-cohere
3685f8ceac cohere[patch]: Add cohere tools agent (#19602)
**Description**: Adds a cohere tools agent and related notebook.

---------

Co-authored-by: BeatrixCohere <128378696+BeatrixCohere@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-27 18:35:43 -07:00
William FH
5c41f4083e [Evals] Fix function calling support (#19658)
Current implementation is overzealous in validating chat datasets

Fixes
[#langsmith-sdk:557](https://github.com/langchain-ai/langsmith-sdk/issues/557)
2024-03-27 17:23:35 -07:00
yongheng.liu
7e29b6061f community[minor]: integrate China Mobile Ecloud vector search (#15298)
- **Description:** integrate China Mobile Ecloud vector search, 
  - **Dependencies:** elasticsearch==7.10.1

Co-authored-by: liuyongheng <liuyongheng@cmss.chinamobile.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-27 23:02:40 +00:00
Hyeongchan Kim
9b70131aed community[patch]: refactor the type hint of file_path in UnstructuredAPIFileLoader class (#18839)
* **Description**: add `None` type for `file_path` along with `str` and
`List[str]` types.
* `file_path`/`filename` arguments in `get_elements_from_api()` and
`partition()` can be `None`, however, there's no `None` type hint for
`file_path` in `UnstructuredAPIFileLoader` and `UnstructuredFileLoader`
currently.
* calling the function with `file_path=None` is no problem, but my IDE
annoys me lol.
* **Issue**: N/A
* **Dependencies**: N/A

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-27 22:31:54 +00:00
CaroFG
cf96060ab7 community[patch]: update for compatibility with latest Meilisearch version (#18970)
- **Description:** Updates Meilisearch vectorstore for compatibility
with v1.6 and above. Adds embedders settings and embedder_name which are
now required.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-27 22:08:27 +00:00
chyroc
be2adb1083 community[patch]: support unstructured_kwargs for s3 loader (#15473)
fix https://github.com/langchain-ai/langchain/issues/15472

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-27 22:03:48 +00:00
Bagatur
b901649032 docs: move extraction up (#19667) 2024-03-27 14:55:16 -07:00
Kahlil Wehmeyer
9c08cdea92 core[patch]: ToolException docs/exception message (#17590)
**Description:**
This PR adds a slightly more helpful message to a Tool Exception

```
# current state
langchain_core.tools.ToolException: Too many arguments to single-input tool

# proposed state
langchain_core.tools.ToolException: Too many arguments to single-input tool. Consider using a StructuredTool instead.
```
**Issue:** Somewhat discussed here 👉  #6197 
 **Dependencies:** None
**Twitter handle:** N/A

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-27 21:52:36 +00:00
Evgenii Zheltonozhskii
5b1f9c6d3a infra: Consistent lxml requirements (#19520)
Update the dependency for lxml to be consistent among different
packages; should fix
https://github.com/langchain-ai/langchain/issues/19040
2024-03-27 20:27:59 +00:00
Filip Michalsky
2fceec3771 docs: update cookbook example for SalesGPT - include Stripe Payment Link Generation (#19622)
Thank you for contributing to LangChain!

- [ ] **cookbook** - update example for SalesGPT - include Stripe
Payment Link Generation

- **Description:** We updated the Jupyter notebook example with the
ability of the AI Agent to negotiate with customers and then close the
deal by generating a custom Stripe payment link.
    - **Issue:** N/A
    - **Dependencies:** N/a
    - **Twitter handle:** @FilipMichalsky @0xtotaylor


If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Filip Michalsky <filip_michalsky@g.harvard.edu>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-27 20:16:21 +00:00
Christophe Bornet
33fa8cfcd0 core[minor]: Add async methods to MaxMarginalRelevanceExampleSelector (#19639) 2024-03-27 16:03:18 -04:00
Taqi Jaffri
72c8b3127d cli[patch]: Fix typo in dev script name for the --chat-playground option on the cli (#19673)
Fixes typo

---------

Co-authored-by: Taqi Jaffri <tjaffri@docugami.com>
2024-03-27 15:56:11 -04:00
Jan Nissen
2e0ddd6fb8 core[minor]: support pydantic v2 models in PydanticOutputParser (#18811)
As mentioned in #18322, the current PydanticOutputParser won't work for
anyone trying to parse to pydantic v2 models. This PR adds a separate
`PydanticV2OutputParser`, as well as a `langchain_core.pydantic_v2`
namespace that will fail on import to any projects using pydantic<2.
Happy to update the docs for output parsers if this is something we're
interesting in adding.

On a separate note, I also updated `check_pydantic.sh` to detect
pydantic imports with leading whitespace and excluded the internal
namespaces. That change can be separated into its own PR if needed.

---------

Co-authored-by: Jan Nissen <jan23@gmail.com>
2024-03-27 15:37:52 -04:00
Kangmoon Seo
d0accc3275 docs: fix error output in XMLOutputParser documentation (#19569)
- **Description:** I've made a fix to a ParseError call in the
XMLOutputParser documentation.
- **Issue:** None
- **Dependencies:** None

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-27 18:29:00 +00:00
Tomaz Bratanic
87d2a6b777 community[minor]: Add the option to omit schema refresh in Neo4jGraph (#19654) 2024-03-27 14:20:12 -04:00
Bagatur
5fc6531c74 docs: use first_tool_only instead of return_single (#19666) 2024-03-27 18:19:39 +00:00
jhicks2306
bcb8ab5216 docs: Improve docstring for Runnable bind method (#19659)
Added example to the docstring of the "bind" method of Runnable. This
makes it easier to understand the purpose of the method when reviewing
in code editors. E.g. VS Code below.

<img width="833" alt="Screenshot 2024-03-27 at 16 24 18"
src="https://github.com/langchain-ai/langchain/assets/45722942/ad022d4e-7bc0-4f4b-aa7a-838f1816cc52">

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-03-27 14:05:41 -04:00
ccurme
4e9b358ed8 docs: Fix broken imports in documentation (#19655)
Found via script in https://github.com/langchain-ai/langchain/pull/19611
2024-03-27 13:54:05 -04:00
Rajendra Kadam
0019d8a948 community[minor]: Add support for non-file-based Document Loaders in PebbloSafeLoader (#19574)
**Description:**
PebbloSafeLoader: Add support for non-file-based Document Loaders

This pull request enhances PebbloSafeLoader by introducing support for
several non-file-based Document Loaders. With this update,
PebbloSafeLoader now seamlessly integrates with the following loaders:
- GoogleDriveLoader
- SlackDirectoryLoader
- Unstructured EmailLoader

**Issue:** NA
**Dependencies:** - None
**Twitter handle:** @Raj__725

---------

Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
2024-03-27 17:39:52 +00:00
Christophe Bornet
9954c6a38e langchain[minor]: Add async methods to EncoderBackedStore (#19597)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-27 17:36:36 +00:00
Erick Friis
929ed65554 cohere[patch]: release 0.1.0rc1 (#19663) 2024-03-27 17:14:56 +00:00
hulitaitai
dc2c9dd4d7 Update text2vec.py (#19657)
Add the URL of the embedding tool "text2vec".
Fix minor mistakes in the docstring.
2024-03-27 13:13:30 -04:00
Erick Friis
7630e9529c Revert "community: added partners/package-name folders" (#19662)
Reverts langchain-ai/langchain#19290
2024-03-27 17:09:30 +00:00
Christophe Bornet
409c6eeb0b core: Add async methods to LengthBasedExampleSelector (#19640) 2024-03-27 13:05:58 -04:00
Bagatur
c7f1962f73 core[patch]: Release 0.1.35 (#19660) 2024-03-27 16:54:03 +00:00
Eugene Yurtsev
e8339b1d83 core[patch]: Patch XML vulnerability in XMLOutputParser (CVE-2024-1455) (#19653)
Patch potential XML vulnerability CVE-2024-1455

This patches a potential XML vulnerability in the XMLOutputParser in
langchain-core. The vulnerability in some situations could lead to a
denial of service attack.

At risk are users that:

1) Running older distributions of python that have older version of
libexpat
2) Are using XMLOutputParser with an agent
3) Accept inputs from untrusted sources with this agent (e.g., endpoint
on the web that allows an untrusted user to interact with the parser)
2024-03-27 12:41:52 -04:00
Guangdong Liu
7042934b5f community[patch]: Fix the bug that Chroma does not specify embedding_function (#19277)
- **Issue:** close #18291
- @baskaryan, @eyurtsev PTAL
2024-03-27 11:43:38 -04:00
billytrend-cohere
85f57ab4cd cohere[patch]: Fix cohere rerank (#19624)
Fix cohere rerank inspired by
https://github.com/langchain-ai/langchain/pull/19486
2024-03-27 08:41:53 -07:00
Eugene Yurtsev
8ab7bb3166 core[patch]: XMLOutputParser fix to handle changes to xml standard library (#19612)
Newest python micro releases broke streaming in the XMLOutputParser. This fixes the parsing code to work with trailing junk after the XML content.
2024-03-27 09:25:28 -04:00
yuwenzho
3a7d2cf443 community[minor]: Add ITREX optimized Embeddings (#18474)
Introduction
[Intel® Extension for
Transformers](https://github.com/intel/intel-extension-for-transformers)
is an innovative toolkit designed to accelerate GenAI/LLM everywhere
with the optimal performance of Transformer-based models on various
Intel platforms.

Description

Adds ITREX runtime embeddings using intel-extension-for-transformers,
adds mdx documentation and example notebooks, and adds embedding import
testing.

---------

Signed-off-by: yuwenzho <yuwen.zhou@intel.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-27 07:22:06 +00:00
Juan Jose Miguel Ovalle Villamil
1fe10a3e3d experimental[patch]: Enhance LLMGraphTransformer with async processing and improved readability (#19205)
- [x] **PR title**: "experimental: Enhance LLMGraphTransformer with
async processing and improved readability"


- [x] **PR message**: 
- **Description:** This pull request refactors the `process_response`
and `convert_to_graph_documents` methods in the LLMGraphTransformer
class to improve code readability and adds async versions of these
methods for concurrent processing.
    The main changes include:
- Simplifying list comprehensions and conditional logic in the
process_response method for better readability.
- Adding async versions aprocess_response and
aconvert_to_graph_documents to enable concurrent processing of
documents.
These enhancements aim to improve the overall efficiency and
maintainability of the `LLMGraphTransformer` class.
  - **Issue:** N/A
  - **Dependencies:** No additional dependencies required.
  - **Twitter handle:** @jjovalle99


- [x] **Add tests and docs**: N/A (This PR does not introduce a new
integration)


- [x] **Lint and test**: Ran make format, make lint, and make test from
the root of the modified package(s). All tests pass successfully.

Additional notes:

- The changes made in this PR are backwards compatible and do not
introduce any breaking changes.
- The PR touches only the `LLMGraphTransformer` class within the
experimental package.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 23:40:21 -07:00
Fabrizio Ruocco
f12cb0bea4 community[patch]: Microsoft Azure Document Intelligence updates (#16932)
- **Description:** Update Azure Document Intelligence implementation by
Microsoft team and RAG cookbook with Azure AI Search

---------

Co-authored-by: Lu Zhang (AI) <luzhan@microsoft.com>
Co-authored-by: Yateng Hong <yatengh@microsoft.com>
Co-authored-by: teethache <hongyateng2006@126.com>
Co-authored-by: Lu Zhang <44625949+luzhang06@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 23:36:59 -07:00
Guangdong Liu
cd79305eb9 openai[patch]: fix AzureChatOpenAI missing parameter problem (#19258)
- **Issue:** close #19255
- PTAL @baskaryan @eyurtsev
2024-03-26 22:31:36 -07:00
Leonid Ganeline
3a978a4bdc docs: output_parsers page fix (#19623)
Issue with this
[page](https://python.langchain.com/docs/modules/model_io/output_parsers/):
Table: "Input Type" columns: strings `str \| Message` (the escape char
"\" doesn't work inside backticked text).
2024-03-26 22:17:41 -07:00
Ethan Yang
28cd5522c2 docs: fix typo in openvino document (#19627) 2024-03-26 22:13:54 -07:00
xsai9101
1c27de6ce2 docs: Fix oracle doc loader format issue (#19628) 2024-03-26 22:13:36 -07:00
Timothy
ad77fa15ee community[patch]: Adding try-except block for GCSDirectoryLoader (#19591)
- **Description:** Implemented a try-except block for
`GCSDirectoryLoader`. Reason: Users processing a large number of
unstructured files in a folder may experience many different errors. A
try-except block is added to capture these errors. A new argument
`use_try_except=True` is added to enable *silent failure* so that an
error caused by processing one file does not break the whole function
(a sketch follows below).
- **Issue:** N/A
- **Dependencies:** no new dependencies
- **Twitter handle:** timothywong731
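
A hedged sketch based on the argument described above; the project and
bucket names are placeholders and the flag name is taken from this
description:

```python
from langchain_community.document_loaders import GCSDirectoryLoader

loader = GCSDirectoryLoader(
    project_name="my-gcp-project",  # placeholder values
    bucket="my-bucket",
    use_try_except=True,  # skip files that fail to parse instead of aborting
)
docs = loader.load()
```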

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-27 00:12:24 +00:00
fzowl
aea2be5bf3 voyageai[patch]: VoyageAI rerank (#19521)
Adding VoyageAI reranking

---------

Co-authored-by: fodizoltan <zoltan@conway.expert>
Co-authored-by: Yujie Qian <thomasq0809@gmail.com>
2024-03-26 17:07:23 -07:00
Leonid Ganeline
4d85485e71 docs: PromptTemplate import from core (#19616)
Changed import of `PromptTemplate` from `langchain` to `langchain_core`
in all examples (notebooks)
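
The updated import, for reference:

```python
# PromptTemplate now comes from langchain_core rather than langchain
from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a joke about {topic}")
```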
2024-03-26 17:03:36 -07:00
Leonid Ganeline
3dc0f3c371 experimental[patch]: PromptTemplate import fix (#19617)
Changed import of `PromptTemplate` from `langchain` to `langchain_core`
in `langchain_experimental`
2024-03-26 17:03:13 -07:00
xsai9101
160a8eb178 community[minor]: add oracle autonomous database doc loader integration (#19536)
Thank you for contributing to LangChain!

- [ ] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [ ] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** Adding oracle autonomous database document loader
integration. This will allow users to connect to oracle autonomous
database through connection string or TNS configuration.
    https://www.oracle.com/autonomous-database/
    - **Issue:** None
    - **Dependencies:** oracledb python package 
    https://pypi.org/project/oracledb/
- **Twitter handle:** if your PR gets announced, and you'd like a
mention, we'll gladly shout you out!


- [ ] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
  Unit test and doc are added.


- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.

If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, hwchase17.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-26 17:02:18 -07:00
Ethan Yang
5784dfed00 docs: update openvino documents (#19543)
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 22:15:30 +00:00
Erick Friis
bf8ba00520 cli[patch]: release 0.0.22rc0, chat playground (#19614) 2024-03-26 15:08:56 -07:00
Leonid Ganeline
a3d24bc10b docs: release date fix (#19585)
Replaced the overdue release promise.
2024-03-26 14:51:09 -07:00
Raghav Rawat
b5640a0883 docs: Update apify.ipynb for Document class import (#19598)
- **Description:**
Update to correctly import Document class -
from langchain_core.documents import Document

- **Issue:**
Fixes the notebook and the hosted documentation
[here](https://python.langchain.com/docs/integrations/tools/apify)

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 21:46:29 +00:00
jhicks2306
087823aefa docs: Update docstring for MessagesPlaceholder (#19601)
Update to docstring for MessagesPlaceholder so that it shows helpful
information in code editors. E.g. VS Code as shown below.


<img width="587" alt="Screenshot 2024-03-26 at 17 18 58"
src="https://github.com/langchain-ai/langchain/assets/45722942/8f49d09f-ed8d-4f61-a9d4-3611dbe9c9c5">

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-26 14:34:00 -07:00
Christophe Bornet
7c2578bd55 langchain[patch]: Add async methods to EmbeddingRouterChain (#19603) 2024-03-26 14:33:36 -07:00
Christophe Bornet
b3d7b5a653 langchain[patch[: Add async methods to TimeWeightedVectorStoreRetriever (#19606) 2024-03-26 14:03:47 -07:00
Adam Law
aeb7b6b11d community[patch]: use semantic_configurations in AzureSearch (#19347)
- **Description:** Currently the semantic_configurations are not used
when creating an AzureSearch instance, instead creating a new one with
default values. This PR changes the behavior to use the passed
semantic_configurations if it is present, and the existing default
configuration if not.

---------

Co-authored-by: Adam Law <adamlaw@microsoft.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-26 13:57:39 -07:00
Christophe Bornet
a7274f006e langchain[patch]: Add async methods to VectorstoreIndexCreator (#19582) 2024-03-26 13:57:13 -07:00
Bagatur
241774012a core[patch]: Release 0.1.34 (#19609)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-26 13:50:48 -07:00
Nuno Campos
c78eb55859 load: Optionally disable reading secrets from env (#19596)
2024-03-26 20:32:56 +00:00
Eugene Yurtsev
d3c9974da2 core[patch]: Temporarily disable test for streaming xml parser (#19610)
Test is failing due to micro version bump in python interpreter which
changed something about how std xml parser works
2024-03-26 20:24:20 +00:00
Eugene Yurtsev
8bc5cdccee core[patch]: Reverting changes with defusedXML (#19604)
DefusedXML is causing parsing errors in previously functional code with
the 0.7.x versions. These do not seem to support newer versions of Python
well. 0.8.x has only been released as an rc, so we're not going to use
it in the core package.
2024-03-26 15:13:09 -04:00
Giannis
9ea2a9b0c1 cohere[patch]: Add additional kwargs support for Cohere SDK params (#19533)
* Adds support for `additional_kwargs` in `get_cohere_chat_request`
* This functionality passes in Cohere SDK specific parameters from
`BaseMessage` based classes to the API

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-26 18:30:37 +00:00
Adrian Valente
2763d8cbe5 community: add len() implementation to Chroma (#19419)
Thank you for contributing to LangChain!

- [x] **Add len() implementation to Chroma**: "package: community"


- [x] **PR message**: 
- **Description:** add an implementation of the `__len__()` method for the
Chroma vectorstore, for convenience (see the sketch below).
- **Issue:** no exposed method to know the size of a Chroma vectorstore
    - **Dependencies:** None
    - **Twitter handle:** lowrank_adrian


- [x] **Add tests and docs**

- [x] **Lint and test**
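
A minimal sketch of the added convenience (the embedding model is illustrative):

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import Chroma

vectorstore = Chroma.from_texts(
    texts=["hello world", "goodbye world"],
    embedding=FakeEmbeddings(size=16),
)

# The new __len__ implementation reports how many embeddings the store holds.
print(len(vectorstore))  # -> 2
```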

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 12:53:10 -04:00
Tom Aarsen
e0a1278d2b docs: HFEmbeddings: Add more information to model_kwargs/encode_kwargs (#19594)
- **Description:** Be more explicit with the `model_kwargs` and
`encode_kwargs` for `HuggingFaceEmbeddings`.
    - **Issue:** -
    - **Dependencies:** -

I received some reports by my users that they didn't realise that you
could change the default `batch_size` with `HuggingFaceEmbeddings`,
which may be attributed to how the `model_kwargs` and `encode_kwargs`
don't give much information about what you can specify.

I've added some parameter names & links to the Sentence Transformers
documentation to help clear it up. Let me know if you'd rather have
Markdown/Sphinx-style hyperlinks rather than a "bare URL".

- Tom Aarsen
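
For illustration, a minimal sketch of the kind of usage the expanded docs
describe (model name and values are illustrative):

```python
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2",
    # Forwarded to the SentenceTransformer constructor, e.g. the device.
    model_kwargs={"device": "cpu"},
    # Forwarded to SentenceTransformer.encode(); this is where the default
    # batch_size can be overridden.
    encode_kwargs={"batch_size": 64, "normalize_embeddings": True},
)

vectors = embeddings.embed_documents(["first text", "second text"])
```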
2024-03-26 12:46:04 -04:00
Dobiichi-Origami
18e6f9376d community[Qianfan]: add function_call in additional_kwargs (#19550)
- **Description:** add the `function_call` field that was missing from
`additional_kwargs` in the previous version
- **Dependencies:** no new dependencies
2024-03-26 12:20:19 -04:00
Eugene Yurtsev
9c7e860cf6 core[patch]: Remove anyio dependency (#19583)
The dependency isn't used anymore
2024-03-26 11:59:22 -04:00
mwmajewsk
f7a1fd91b8 community: better support of pathlib paths in document loaders (#18396)
So this arose from the
https://github.com/langchain-ai/langchain/pull/18397 problem of document
loaders not supporting `pathlib.Path`.

This pull request provides more uniform support for Path as an argument.
The core ideas for this upgrade: 
- if there is a local file path used as an argument, it should be
supported as `pathlib.Path`
- if there are some external calls that may or may not support pathlib,
the argument is immediately converted to `str`
- if `self.file_path` is used in a way that allows it to stay a pathlib
path without conversion, it is only converted for the metadata.

Twitter handle: https://twitter.com/mwmajewsk
2024-03-26 11:51:52 -04:00
Guangdong Liu
94b869a974 github action: Add dead link check for .mdx files (#19492)
- **Description:** Add dead link check for .mdx files. I checked the
logs and found that files with .mdx suffix were not checked.

https://github.com/langchain-ai/langchain/actions/runs/8409525467/job/23026924465#logs
- @baskaryan, @efriis, @eyurtsev, @hwchase17.
2024-03-26 08:42:34 -07:00
Christophe Bornet
6f477e3cb6 docs: Remove chromadb from required dependency in examples with VectorstoreIndexCreator (#19578) 2024-03-26 11:12:21 -04:00
Yuki Watanabe
cfecbda48b community[minor]: Allow passing allow_dangerous_deserialization when loading LLM chain (#18894)
### Issue
Recently, the new `allow_dangerous_deserialization` flag was introduced
for preventing unsafe model deserialization that relies on pickle
without the user's notice (#18696). Since then, some LLMs like Databricks
require passing this flag as true to instantiate the model.

However, this breaks existing functionality to loading such LLMs within
a chain using `load_chain` method, because the underlying loader
function
[load_llm_from_config](f96dd57501/libs/langchain/langchain/chains/loading.py (L40))
 (and load_llm) ignores keyword arguments passed in. 

### Solution
This PR fixes this issue by propagating the
`allow_dangerous_deserialization` argument to the class loader iff the
LLM class has that field.
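
A rough sketch of the call this unblocks (the path is a placeholder):

```python
from langchain.chains import load_chain

# Before this fix the keyword argument was dropped by the loader, so LLMs
# that require it (e.g. Databricks) could not be re-instantiated via load_chain.
chain = load_chain(
    "path/to/saved_chain.json",
    allow_dangerous_deserialization=True,
)
```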

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 11:07:55 -04:00
hulitaitai
d7c14cb6f9 community[minor]: Add embeddings integration for text2vec (#19267)
Create a class which allows using the "text2vec" open source embedding
model.

Install the dependency by running 'pip install -U text2vec'.
Example calling the model through LangChain:

    from langchain_community.embeddings.text2vec import Text2vecEmbeddings

    embedding = Text2vecEmbeddings()
    embedding.embed_documents([
        "This is a CoSENT(Cosine Sentence) model.",
        "It maps sentences to a 768 dimensional dense vector space.",
    ])
    embedding.embed_query(
        "It can be used for text matching or semantic search."
    )

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-26 11:06:58 -04:00
Shotaro Sano
55c624a694 infra: Resolve the endless dependency resolution during the build of dev.Dockerfile by copying poetry.lock (#19465)
## Description
This PR proposes a modification to the `libs/langchain/dev.Dockerfile`
configuration to copy the `libs/langchain/poetry.lock` into the working
directory. The change aims to address the issue where the Poetry install
command, the last command in the `dev.Dockerfile`, takes excessively
long hours, and to ensure the reproducibility of the poetry environment
in the devcontainer.

## Problem
The `dev.Dockerfile`, prepared for development environments such as
`.devcontainer`, encounters an unending dependency resolution when
attempting the Poetry installation.

### Steps to Reproduce
Execute the following build command: 

```bash
docker build -f libs/langchain/dev.Dockerfile .
```

### Current Behavior
The Docker build process gets stuck at the following step, which, in my
experience, did not conclude even after an entire night:

```
 => [langchain-dev-dependencies 4/6] COPY libs/community/ ../community/                                                                                0.9s
 => [langchain-dev-dependencies 5/6] COPY libs/text-splitters/ ../text-splitters/                                                                      0.0s
 => [langchain-dev-dependencies 6/6] RUN poetry install --no-interaction --no-ansi --with dev,test,docs                                               12.3s
 => => # Updating dependencies                                                                                                                             
 => => # Resolving dependencies...  
```

### Expected Behavior
The Docker build completes in a realistic timeframe. By applying this
PR, the build finishes within a few minutes.

### Analysis
The complexity of LangChain's dependencies has reached a point where
Poetry is required to resolve dependencies akin to threading a needle.
Consequently, poetry install fails to complete in a practical timeframe.

## Solution
The solution for dependency resolution is already recorded in
`libs/langchain/poetry.lock`, so we can use it. When copying
`project.toml` and `poetry.toml`, the `poetry.lock` located in the same
directory should also be copied.

```diff
# Copy only the dependency files for installation
-COPY libs/langchain/pyproject.toml libs/langchain/poetry.toml ./
+COPY libs/langchain/pyproject.toml libs/langchain/poetry.toml libs/langchain/poetry.lock ./
```

## Note
I am not intimately familiar with the historical context of the
`dev.Dockerfile` and thus do not know why `poetry.lock` has not been
copied until now. It might have been an oversight, or perhaps dependency
resolution used to complete quickly even without the `poetry.lock` file
in the past. However, if there are deliberate reasons why copying
`poetry.lock` is not advisable, please just close this PR.
2024-03-26 10:54:53 -04:00
Kalyan Mudumby
d27600c6f7 community[patch]: GPTCache pydantic validation error on lookup (#19427)
Description:
This change fixes the pydantic validation error when looking up from
GPTCache: the `ChatOpenAI` class returns `ChatGeneration` as the response,
which is not handled.
Use the existing `_loads_generations` and `_dumps_generations` functions
to handle it.

Trace
```
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/development/scripts/chatbot-postgres-test.py", line 90, in <module>
    print(llm.invoke("tell me a joke"))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 166, in invoke
    self.generate_prompt(
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 544, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 408, in generate
    raise e
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 398, in generate
    self._generate_with_cache(
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 585, in _generate_with_cache
    cache_val = llm_cache.lookup(prompt, llm_string)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_community/cache.py", line 807, in lookup
    return [
           ^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_community/cache.py", line 808, in <listcomp>
    Generation(**generation_dict) for generation_dict in json.loads(res)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 120, in __init__
    super().__init__(**kwargs)
  File "/home/theinhumaneme/Documents/NebuLogic/conversation-bot/venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Generation
type
  unexpected value; permitted: 'Generation' (type=value_error.const; given=ChatGeneration; permitted=('Generation',))
```


Although I don't seem to find any issues here, here's an
[issue](https://github.com/zilliztech/GPTCache/issues/585) raised in
GPTCache. Please let me know if I need to do anything else

Thank you

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 10:52:30 -04:00
Leonid Ganeline
4159a4723c experimental[patch]: update module doc strings (#19539)
Added missing module descriptions. Fixed formatting.
2024-03-26 10:38:10 -04:00
Piyush Jain
72ba738bf5 community[minor]: Improvements for NeptuneRdfGraph, Improve discovery of graph schema using database statistics (#19546)
Fixes linting for PR
[19244](https://github.com/langchain-ai/langchain/pull/19244)

---------

Co-authored-by: mhavey <mchavey@gmail.com>
2024-03-26 10:36:51 -04:00
aditya thomas
fc6b92bb9a docs: add cohere to the list of partners (#19552)
**Description:** Add Cohere to the list of LangChain partners
**Issue:** The Cohere partner package was recently added
[#19049](https://github.com/langchain-ai/langchain/pull/19049)
**Dependencies:** None
2024-03-26 10:22:03 -04:00
Christophe Bornet
1f422318b7 core[minor]: Use BaseChatMessageHistory async methods in RunnableWithMessageHistory (#19565)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-26 14:13:58 +00:00
Christophe Bornet
8595c3ab59 community[minor]: Add InMemoryVectorStore to module level imports (#19576) 2024-03-26 14:07:44 +00:00
Christophe Bornet
a9457d269e core: Add async methods to BaseExampleSelector and SemanticSimilarityExampleSelector (#19399)
Few-Shot prompt template may use a `SemanticSimilarityExampleSelector`
that in turn uses a `VectorStore` that does I/O operations.
So to work correctly on the event loop, we need:
* async methods for the `VectorStore` (OK)
* async methods for the `SemanticSimilarityExampleSelector` (this PR)
* async methods for `BasePromptTemplate` and `BaseChatPromptTemplate`
(future work)
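
A minimal sketch of the new async path (assuming the async counterpart is
named `aselect_examples`; FAISS and the fake embeddings are illustrative):

```python
import asyncio

from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.example_selectors import SemanticSimilarityExampleSelector

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]


async def main() -> None:
    selector = SemanticSimilarityExampleSelector.from_examples(
        examples, FakeEmbeddings(size=16), FAISS, k=1
    )
    # The async method lets the underlying vector-store I/O run on the event
    # loop without blocking it.
    print(await selector.aselect_examples({"input": "cheerful"}))


asyncio.run(main())
```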
2024-03-26 10:06:43 -04:00
Christophe Bornet
29c58528c7 core[minor]: Add default implementations to amax_marginal_relevance_search_by_vector and adelete (#19269) 2024-03-26 10:03:22 -04:00
Christophe Bornet
999365186b langchain[major]: Use InMemoryVectorStore by default in VectorstoreIndexCreator (#19575)
This is a small breaking change but I think it should be done as:
* No external dependency needs to be installed anymore for the default
to work
* It is vendor-neutral
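
A minimal sketch of what the new default means in practice (no third-party
vector store needs to be installed; the loader path and embedding model are
illustrative):

```python
from langchain.indexes import VectorstoreIndexCreator
from langchain_community.document_loaders import TextLoader
from langchain_openai import OpenAIEmbeddings

# No vectorstore_cls argument: the index now defaults to the in-memory
# vector store instead of requiring chromadb.
creator = VectorstoreIndexCreator(embedding=OpenAIEmbeddings())
index = creator.from_loaders([TextLoader("state_of_the_union.txt")])
```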
2024-03-26 10:01:23 -04:00
standby24x7
16e64d889a docs: Update function "run" to "invoke" in fake_llm.ipynb (#19570)
This patch updates the function "run" to "invoke" in fake_llm.ipynb. Without
this patch, you see the following warning.

LangChainDeprecationWarning: The function `run` was deprecated in
LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2024-03-26 09:54:31 -04:00
Guangdong Liu
c93d4ea91c docs: Add in code documentation to core Runnable map methods (docs only) (#19517)
- **Issue:** #18804
- @baskaryan, @eyurtsev
2024-03-25 19:18:30 -07:00
Leonid Ganeline
0199b73188 docs: added partners/package-name folders (#19290)
Added references to new integration packages from Google, by adding
subfolders to `partners/`.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-26 02:16:59 +00:00
Aayush Kataria
03c38005cb community[patch]: Fixing some caching issues for AzureCosmosDBSemanticCache (#18884)
Fixing some issues for AzureCosmosDBSemanticCache
- Added the entry for "AzureCosmosDBSemanticCache" which was missing in
langchain/cache.py
- Added application name when creating the MongoClient for the
AzureCosmosDBVectorSearch, for tracking purposes.

@baskaryan, can you please review this PR, we need this to go in asap.
These are just small fixes which we found today in our testing.
2024-03-25 19:06:17 -07:00
Clément Tamines
a6cbb755a7 community[patch]: fix semantic answer bug in AzureSearch vector store (#18938)
- **Description:** The `semantic_hybrid_search_with_score_and_rerank`
method of `AzureSearch` contains a hardcoded field name "metadata" for
the document metadata in the Azure AI Search Index. Adding such a field
is optional when creating an Azure AI Search Index, as other snippets
from `AzureSearch` test for the existence of this field before trying to
access it. Furthermore, the metadata field name shouldn't be hardcoded
as "metadata" and use the `FIELDS_METADATA` variable that defines this
field name instead. In the current implementation, any index without a
metadata field named "metadata" will yield an error if a semantic answer
is returned by the search in
`semantic_hybrid_search_with_score_and_rerank`.

- **Issue:** https://github.com/langchain-ai/langchain/issues/18731

- **Prior fix to this bug:** This bug was fixed in this PR
https://github.com/langchain-ai/langchain/pull/15642 by adding a check
for the existence of the metadata field named `FIELDS_METADATA` and
retrieving a value for the key called "key" in that metadata if it
exists. If the field named `FIELDS_METADATA` was not present, an empty
string was returned. This fix was removed in this PR
https://github.com/langchain-ai/langchain/pull/15659 (see
ed1ffca911#).
@lz-chen: could you confirm this wasn't intentional? 

- **New fix to this bug:** I believe there was an oversight in the logic
of the fix from
[#15642](https://github.com/langchain-ai/langchain/pull/15642), which I
explain below.
The `semantic_hybrid_search_with_score_and_rerank` method creates a
dictionary `semantic_answers_dict` with semantic answers returned by the
search as follows.

5c2f7e6b2b/libs/community/langchain_community/vectorstores/azuresearch.py (L574-L581)
The keys in this dictionary are the unique document ids in the index, if
I understand the [documentation of semantic
answers](https://learn.microsoft.com/en-us/azure/search/semantic-answers)
in Azure AI Search correctly. When the method transforms a search result
into a `Document` object, an "answer" key is added to the document's
metadata. The value for this "answer" key should be the semantic answer
returned by the search from this document, if such an answer is
returned. The match between a `Document` object and the semantic answers
returned by the search should be done through the unique document id,
which is used as a key for the `semantic_answers_dict` dictionary. This
id is defined in the search result's field named `FIELDS_ID`. I added a
check to avoid any error in case no field named `FIELDS_ID` exists in a
search result (which shouldn't happen in theory).
A benefit of this approach is that this fix should work whether or not
the Azure AI Search Index contains a metadata field.
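
In rough pseudocode, the corrected matching described above looks like this
(a sketch, not the verbatim implementation; `FIELDS_ID` and
`semantic_answers_dict` are the names used in this description):

```python
# semantic_answers_dict maps unique document ids -> semantic answers.
semantic_answer = None
doc_id = result.get(FIELDS_ID)  # guard against a missing id field
if doc_id is not None and doc_id in semantic_answers_dict:
    semantic_answer = semantic_answers_dict[doc_id]

# The answer is attached through the Document's metadata, regardless of
# whether the index itself has a "metadata" field.
document.metadata["answer"] = semantic_answer
```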

@levalencia could you confirm my analysis and test the fix?
@raunakshrivastava7 do you agree with the fix?

Thanks for the help!
2024-03-25 18:51:54 -07:00
miri-bar
55db737302 ai21[minor]: AI21 Labs Semantic Text Splitter support (#19510)
Description: Added support for AI21 Labs model - Segmentation, as a Text
Splitter
Dependencies: ai21, langchain-text-splitter
Twitter handle: https://github.com/AI21Labs
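
A rough usage sketch (assuming the splitter is exposed as
`AI21SemanticTextSplitter` and an AI21 API key is configured in the
environment):

```python
from langchain_ai21 import AI21SemanticTextSplitter

splitter = AI21SemanticTextSplitter()

text = (
    "LangChain is a framework for developing applications powered by "
    "language models. AI21 Labs provides a segmentation API that splits "
    "text into semantically coherent segments."
)

# Each chunk corresponds to a segment returned by the AI21 segmentation API.
for chunk in splitter.split_text(text):
    print(chunk)
```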

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-26 01:39:37 +00:00
Anindyadeep
b2a11ce686 community[minor]: Prem AI langchain integration (#19113)
### Prem SDK integration in LangChain

This PR adds the integration of [PremAI's](https://www.premai.io/)
prem-sdk with langchain. Users can now access deployed models
(llms/embeddings) and use them with langchain's ecosystem (a usage sketch
follows the checklist below).

### This PR adds the following:

- [x]  Add chat support
- [X]  Adding embedding support
- [X]  writing integration tests
    - [X]  writing tests for chat 
    - [X]  writing tests for embedding
- [X]  writing unit tests
    - [X]  writing tests for chat 
    - [X]  writing tests for embedding
- [X]  Adding documentation
    - [X]  writing documentation for chat
    - [X]  writing documentation for embedding
- [X] run `make test`
- [X] run `make lint`, `make lint_diff` 
- [X]  Final checks (spell check, lint, format and overall testing)
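
A rough usage sketch (assuming the chat model is exposed as `ChatPremAI`;
the project id and key handling shown here are illustrative, see the
integration notebook for the authoritative example):

```python
from langchain_community.chat_models import ChatPremAI

# project_id identifies the PremAI project hosting the deployed model.
chat = ChatPremAI(project_id=8)  # assumes PREMAI_API_KEY is set in the env

response = chat.invoke("Say hello from PremAI")
print(response.content)
```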

---------

Co-authored-by: Anindyadeep Sannigrahi <anindyadeepsannigrahi@Anindyadeeps-MacBook-Pro.local>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 01:37:19 +00:00
Alessandro D'Armiento
37eb3a4a9e docs: Some import nits (#19130)
- **Description:** fixes some minor issues in the documentation

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-26 01:25:44 +00:00
Souhail Hanfi
cbec43afa9 community[patch]: avoid creating extension PGvector while using readOnly Databases (#19268)
- **Description:** The PgVector class always runs "create extension" on init,
and this statement crashes on read-only databases (read-only replicas),
but weirdly the subsequent create collection etc. work even on read-only
databases
- **Dependencies:** no new dependencies
- **Twitter handle:** @VenOmaX666

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 01:25:01 +00:00
Dixing (Dex) Xu
903541f439 docs: update dependecy for autogpt/marathon.ipynb (#19491)
fixes the import error from notebook based on the
[documentation](https://api.python.langchain.com/en/latest/agents/langchain_experimental.agents.agent_toolkits.pandas.base.create_pandas_dataframe_agent.html)

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-25 18:14:22 -07:00
Mauricio Cruz
fb9ce95184 cli[patch]: Fix Tuple typing problem when create new langchain app (#19141)
Thank you for contributing to LangChain!

When running the command `langchain app new my-app`, I get this error:

File
"/home/mauricio/.local/lib/python3.8/site-packages/langchain_cli/utils/pyproject.py",
line 15, in <module>
pyproject_toml: Path, local_editable_dependencies: Iterable[tuple[str,
Path]]
TypeError: 'type' object is not subscriptable

This PR fixes the error.
2024-03-26 01:09:51 +00:00
Anthony Shaw
6c9b0f96f3 docs: Add guidance for splitting Chinese, Japanese, and Thai (#19295)
The existing default list of separators for the `RecursiveTextSplitter`
assumes spaces are word boundaries. Some languages [don't use spaces
between
words](https://en.wikipedia.org/wiki/Category:Writing_systems_without_word_boundaries)
(Chinese, Japanese, Thai, Burmese).

This PR extends the documentation to explain how to cater for those
languages by adding additional punctuation to the separators and
zero-width spaces which are used by some typesetters and will assist the
splitter to not split in words.

Ideally, **these separators could be a constant in the module** but for
now, defining them in the documentation is a start.
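
A sketch of the separator list the new guidance suggests (the exact list in
the docs may differ slightly):

```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    separators=[
        "\n\n",
        "\n",
        " ",
        ".",
        ",",
        "\u200b",  # zero-width space, used by some typesetters
        "\uff0c",  # full-width comma
        "\u3001",  # ideographic comma
        "\uff0e",  # full-width full stop
        "\u3002",  # ideographic full stop
        "",
    ],
    chunk_size=100,
    chunk_overlap=0,
)

chunks = splitter.split_text("これは句読点で区切られた日本語のテキストです。次の文が続きます。")
```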
2024-03-26 00:34:00 +00:00
Erick Friis
441a8012b3 mistralai[patch]: release 0.1.0 (#19540) 2024-03-25 17:29:40 -07:00
Barun Amalkumar Halder
9246ec6b36 community[patch] : [Fiddler] ensure dataset is not added if model is present (#19293)
**Description:**
- minor PR to speed up onboarding by not trying to add a dataset, if a
model is already present.
- replace batch publish API with streaming when single events are
published.

**Dependencies:** any dependencies required for this change
**Twitter handle:** behalder

Co-authored-by: Barun Halder <barun@fiddler.ai>
2024-03-25 17:28:05 -07:00
JSDu
6e090280fd community[patch]: milvus will autoflush, manual flush is slow (#19300)
reference:


https://milvus.io/docs/configure_quota_limits.md#quotaAndLimitsflushRateenabled

https://github.com/milvus-io/milvus/issues/31407

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 00:26:58 +00:00
mackong
e65dc4b95b community[patch]: clean warning when delete by ids (#19301)
* Description: rearrange to avoid variable overwrite, which always caused
a warning.
* Issue: N/A
* Dependencies: N/A
2024-03-25 17:23:22 -07:00
Ian
d5415dbd68 docs: improve tidb integrations documents (#19321)
This PR aims to enhance the documentation for TiDB integration, driven
by feedback from our users. It provides detailed introductions to key
features, ensuring developers can fully leverage TiDB for AI application
development.
2024-03-25 17:08:23 -07:00
Stefano Mosconi
01fc69c191 community[patch]: expanding version in confluence loader (#19324)
**Description:**
Expanding the version in all the Confluence API calls so as to get when the
page was last modified/created in all cases.

**Issue:** #12812 
**Twitter handle:** zzste
2024-03-25 17:08:01 -07:00
Dmitry Tyumentsev
08b769d539 community[patch]: YandexGPT Use recent yandexcloud sdk version (#19341)
Fixed inability to work with [yandexcloud
SDK](https://pypi.org/project/yandexcloud/) versions higher than 0.265.0
2024-03-25 17:05:57 -07:00
Marlene
f1313339ac community[patch]: Fixing incorrect base URLs for Azure Cognitive Search Retriever (#19352)
This PR adds code to make sure that the correct base URL is being
created for the Azure Cognitive Search retriever. At the moment an
incorrect base URL is being generated. I think this is happening because
the original code was based on a deprecated API version. No
dependencies need to be added. I've also added more context to the test
doc strings.

I should also note that ACS is now Azure AI Search. I will open a
separate PR to make these changes as that would be a breaking change and
should potentially be discussed.

Twitter: @marlene_zw



- No new tests added, however the current ACS retriever tests are now
passing when I run them.
- Code was linted.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 00:04:59 +00:00
Tridib Roy Arjo
d667b1ea8f docs: Update async_chromium.ipynb (#19514)
In Jupyter, asyncio would throw an error before `.load()` unless
`nest_asyncio` is applied (Issue #8494 mentioned this)

+Minor typo fixes..
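
A minimal sketch of the workaround described above (assumes the notebook's
`AsyncChromiumLoader` from langchain_community, with playwright installed):

```python
import nest_asyncio

nest_asyncio.apply()  # allow re-entering Jupyter's already-running event loop

from langchain_community.document_loaders import AsyncChromiumLoader

loader = AsyncChromiumLoader(["https://www.example.com"])
docs = loader.load()  # would raise a RuntimeError in Jupyter without nest_asyncio
```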
2024-03-26 00:02:50 +00:00
Bob Lin
5b6b1f9e1d docs: Fix several sample code errors (#19382) 2024-03-25 16:59:52 -07:00
FinTech秋田
03ba1d4731 community[patch]: Add Support for GPU Index Types in Milvus 2.4 (#19468)
- **Description:** This commit introduces support for the newly
available GPU index types introduced in Milvus 2.4 within the LangChain
project's `milvus.py`. With the release of Milvus 2.4, a range of
GPU-accelerated index types have been added, offering enhanced search
capabilities and performance optimizations for vector search operations.
This update ensures LangChain users can fully utilize the new
performance benefits for vector search operations.
    - Reference: https://milvus.io/docs/gpu_index.md

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-25 23:39:54 +00:00
Hamid Ali
c281ec8887 docs: Fix broken link in semantic-chunker.ipynb (#19464)
Corrected a broken link within the semantic-chunker.ipynb notebook,
ensuring that users can access the referenced resource.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-25 23:39:32 +00:00
Ash Vardanian
d01bad5169 core[patch]: Convert SimSIMD back to NumPy (#19473)
This patch fixes the #18022 issue, converting the SimSIMD internal
zero-copy outputs to NumPy.

I've also noticed, that oftentimes `dtype=np.float32` conversion is used
before passing to SimSIMD. Which numeric types do LangChain users
generally care about? We support `float64`, `float32`, `float16`, and
`int8` for cosine distances and `float16` seems reasonable for
practically any kind of embeddings and any modern piece of hardware, so
we can change that part as well 🤗
2024-03-25 16:36:26 -07:00
Ikko Eltociear Ashimine
980658cb47 docs: Update streaming.ipynb (#19500)
Fixed typo.

occuring -> occurring
2024-03-25 16:21:45 -07:00
Leonid Kuligin
91f4c80143 docs: fixed links (#19503)
- [ ] **PR title**: "docs: fixed broken links"


- [ ] **PR message**:
    - **Description:** fixed links in the documentation
2024-03-25 16:19:28 -07:00
Mikelarg
dac2e0165a community[minor]: Added GigaChat Embeddings support + updated previous GigaChat integration (#19516)
- **Description:** Added integration with
[GigaChat](https://developers.sber.ru/portal/products/gigachat)
embeddings. Also added support for extra fields in GigaChat LLM and
fixed docs.
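
A rough usage sketch (assuming the new class is exposed as
`GigaChatEmbeddings`; the credential handling is illustrative):

```python
from langchain_community.embeddings import GigaChatEmbeddings

embeddings = GigaChatEmbeddings(
    credentials="<authorization-token>",  # assumed parameter name
    verify_ssl_certs=False,
)

vector = embeddings.embed_query("Привет, мир!")
```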
2024-03-25 16:08:37 -07:00
Martin Kolb
e5bdb26f76 community[patch]: More flexible handling for entity names in vector store "HANA Cloud" (#19523)
- **Description:** Added support for lower-case and mixed-case names.
The names for tables and columns previously had to be UPPER_CASE.
With this enhancement, lower_case and MixedCase are also supported.


  - **Issue:** N/A
  - **Dependencies:** no new dependencies added
  - **Twitter handle:** @sapopensource
2024-03-25 15:52:45 -07:00
Erica Clark
a1ff21f90f docs: Update local llms article to use invoke instead of deprecated __call__ (#19528)
- **Description:** Since the implicit `__call__` has been deprecated in
favor of `invoke`, the local_llms article also needed to be updated.
This article was my introduction to LangChain, and as it was helpful in
getting me set up with running LLMs locally, it is nice to not have any
warnings when running the example code. With this change, the warnings
go away when running the example code.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** clarkerican
2024-03-25 15:51:39 -07:00
Orest Xherija
0b1e09029f openai[patch]: increase max batch size for Azure OpenAI Embeddings API (#19532)
**Description:** Azure OpenAI has increased its maximum batch size from
16 to 2048 for the Embeddings API per this How-To
[page](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/embeddings?tabs=console#best-practices)

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-25 15:50:07 -07:00
Eugene Yurtsev
56f4c5459b core[patch]: fix xml output parser transform (#19530)
The previous PR passed the `_parser` attribute, which apparently is not meant
to be used by user code and causes non-deterministic failures on CI when
testing the `transform` and `atransform` methods. Reverting this change
temporarily.
2024-03-25 21:34:45 +00:00
Erick Friis
e6952b04d5 cohere[patch]: fix release (#19529) 2024-03-25 13:46:29 -07:00
aditya thomas
aa68fd7e91 core[runnables]: docstring for class runnable, method with_listeners() (#19515)
**Description:** Docstring for method with_listeners() of class
Runnable
**Issue:** [Add in code documentation to core Runnable methods
#18804](https://github.com/langchain-ai/langchain/issues/18804)
**Dependencies:** None
2024-03-25 16:24:58 -04:00
billytrend-cohere
63343b4987 cohere[patch]: add cohere as a partner package (#19049)
Description: adds support for langchain_cohere

---------

Co-authored-by: Harry M <127103098+harry-cohere@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-25 20:23:47 +00:00
Eugene Yurtsev
727d5023ce core[patch]: Use defusedxml in XMLOutputParser (#19526)
This mitigates a security concern for users still using older versions of libexpat, which allows an attacker to compromise the availability of the system if they manage to surface a malicious payload to this XMLParser.
2024-03-25 16:21:52 -04:00
Zachary Wilkins
e1a6341940 langchain: Passthrough batch_size on index()/aindex() calls (#19443)
**Description:** This change passes through `batch_size` to
`add_documents()`/`aadd_documents()` on calls to `index()` and
`aindex()` such that the documents are processed in the expected batch
size.
**Issue:** #19415
**Dependencies:** N/A
**Twitter handle:** N/A
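
A rough sketch of the `batch_size` passthrough described above (record
manager and store choices are illustrative):

```python
from langchain.indexes import SQLRecordManager, index
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document

vectorstore = Chroma(
    collection_name="demo", embedding_function=FakeEmbeddings(size=16)
)
record_manager = SQLRecordManager("chroma/demo", db_url="sqlite:///records.sql")
record_manager.create_schema()

docs = [Document(page_content=f"document {i}") for i in range(250)]

# batch_size is now forwarded to add_documents(), so the vector store is
# written in chunks of 100 instead of receiving the documents all at once.
index(docs, record_manager, vectorstore, cleanup=None, batch_size=100)
```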
2024-03-25 11:58:29 -04:00
ccurme
82de8fd6c9 add kwargs (#19519)
`HanaDB.add_texts` is missing **kwargs.
2024-03-25 11:56:01 -04:00
Nikhil Kumar
3d3b46a782 docs: Update docs for HuggingFacePipeline (#19306)
Updated `HuggingFacePipeline` docs to be in sync with list of supported
tasks, including translation.

- [x] **PR title**: "community: Update docs for `HuggingFacePipeline`"
- Where "package" is whichever of langchain, community, core,
experimental, etc. is being modified. Use "docs: ..." for purely docs
changes, "templates: ..." for template changes, "infra: ..." for CI
changes.
  - Example: "community: add foobar LLM"


- [x] **PR message**:
- **Description:** Update docs for `HuggingFacePipeline`, which was earlier
missing `translation` as a valid task (see the sketch below)
    - **Issue:** N/A
    - **Dependencies:** N/A
    - **Twitter handle:** None


- [x] **Add tests and docs**:


- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
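
For example, a minimal sketch of the newly documented task (model choice is
illustrative):

```python
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline

# "translation" now appears in the documented list of supported tasks.
llm = HuggingFacePipeline.from_model_id(
    model_id="Helsinki-NLP/opus-mt-en-fr",
    task="translation",
    pipeline_kwargs={"max_new_tokens": 64},
)

print(llm.invoke("How are you today?"))
```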
2024-03-25 00:29:21 -07:00
Igor Muniz Soares
743f888580 community[minor]: Dappier chat model integration (#19370)
**Description:** 

This PR adds [Dappier](https://dappier.com/) for the chat model. It
supports generate, async generate, and batch functionalities. We added
unit and integration tests as well as a notebook with more details about
our chat model.


**Dependencies:** 
    No extra dependencies are needed.
2024-03-25 07:29:05 +00:00
Jacob Lezberg
64e1df3d3a infra: Update package version to apply CVE-related patch (#19490)
- **Description:** [CVE
2024-21503](https://www.cve.org/CVERecord?id=CVE-2024-21503) was
recently identified. The python linter "black" suffers from a potential
Regex-related denial of service attack. Updated version from the
vulnerable 24.2.0 to the patched 24.3.0.
- **Issue:** N/A
- **Dependencies:** The 'black' package in both `langchain` (top-level)
and `templates/python-lint`.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-25 07:11:23 +00:00
Hugoberry
96dc180883 community[minor]: Add DuckDB as a vectorstore (#18916)
DuckDB has a cosine similarity function along with list and array data types,
which can be used as a vector store (see the sketch below).
- **Description:** The latest version of DuckDB features a cosine
similarity function, which can be used with its support for list or
array column types. This PR surfaces this functionality to langchain.
    - **Dependencies:** duckdb 0.10.0
    - **Twitter handle:** @igocrite
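
A rough usage sketch (assuming the store is exposed as `DuckDB` in
`langchain_community.vectorstores`; the connection argument is illustrative):

```python
import duckdb

from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import DuckDB

conn = duckdb.connect(":memory:")

vector_store = DuckDB.from_texts(
    [
        "DuckDB ships a cosine similarity function.",
        "It works on list and array columns.",
    ],
    embedding=FakeEmbeddings(size=16),
    connection=conn,
)

print(vector_store.similarity_search("cosine similarity", k=1))
```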

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-25 07:02:35 +00:00
Ethan Yang
fa6397d76a docs: Add OpenVINO llms docs (#19489)
Add OpenVINO pipeline instructions to the docs. OpenVINO users can find more
details on this page.
2024-03-24 23:57:30 -07:00
preak95
6ea3e57a63 community[minor]: S3FileLoader to expose mode and post_processors arguments of unstructured loader (#19270)
**Description:** Update s3_file.py to use arguments **mode** and
**post_processors** from the base class **UnstructuredBaseLoader** to
include more metadata about the files from the S3 bucket such as
*'page_number', 'languages'* etc.

**Issue:** NA
**Dependencies:** None
**Twitter handle:** preak95
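
A rough sketch of the arguments described above (bucket and key are
placeholders; `mode` and `post_processors` are forwarded to the Unstructured
base loader):

```python
from langchain_community.document_loaders import S3FileLoader

loader = S3FileLoader(
    "my-bucket",
    "reports/2024/summary.pdf",
    # "elements" keeps one Document per detected element and carries extra
    # metadata such as page_number and languages.
    mode="elements",
    post_processors=[str.strip],
)

docs = loader.load()
print(docs[0].metadata)
```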

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-25 06:56:55 +00:00
Guangdong Liu
560e2182d8 docs: docstring Runnable pipe and pick methods (docs only) (#19395)
- **Issue:**  #18804
-  @eyurtsev @ccurme PTAL

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-24 23:50:04 -07:00
Christophe Bornet
63898dbda0 langchain[patch]: Use async memory in Chain when needed (#19429) 2024-03-24 23:49:00 -07:00
Lance Martin
db7403d667 docs: Remove non-rendering images & output spamming from doc ntbks (#19475)
Looking at tokens / page of our docs, we see a few outliers:
<img width="761" alt="image"
src="https://github.com/langchain-ai/langchain/assets/122662504/677aa2d6-0a29-45e4-882a-db2bbf46d02b">

It is due to non-rendering images in one case, and output spamming. 

Clean these, along with other cases of excessing output spamming in
docs.

All get sucked into chat-langchain for retrieval.
2024-03-24 23:47:38 -07:00
Erick Friis
b617085af0 mistralai[patch]: streaming tool calls (#19469) 2024-03-23 19:24:53 +00:00
aditya thomas
b43a9d5808 docs: adding voyageai to the list of partner packages (#19376)
**Description:** Adding VoyageAI to the list of partners
**Issue:** A standalone langchain-voyageai package has been added
**Dependencies:** None
2024-03-22 17:08:15 -07:00
Zeeland
2549df00cd docs: fix error bilibili url (#19375)
Thank you for contributing to LangChain!

bilibili-api-python uses the https://github.com/Nemo2011/bilibili-api repo.
Changed to the correct address.

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

2024-03-22 17:06:17 -07:00
aditya thomas
375ab7bf59 docs: update module imports for fireworks documentation (#19377)
**Description:** Update module imports for Fireworks documentation
**Issue:** Module imports not present or in incorrect location
**Dependencies:** None
2024-03-22 17:05:27 -07:00
aditya thomas
0cc0467267 docs: update import paths and move to lcel for llama.cpp examples (#19391)
**Description:** Update import paths and move to lcel for llama.cpp
examples
**Issue:** Update import paths to reflect package refactoring and move
chains to LCEL in examples
**Dependencies:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-23 00:04:12 +00:00
fengjial
3b52ee05d1 community[patch]: fix bugs in baiduvectordb as vectorstore (#19380)
fix small bugs in vectorstore/baiduvectordb
2024-03-22 17:03:59 -07:00
Cailin Wang
5402aef32e docs: Add partition parameter to DashVector (#19385)
**Description**: Add `partition` parameter to DashVector
dashvector.ipynb
**Related PR**: https://github.com/langchain-ai/langchain/pull/19023
**Twitter handle**: @CailinWang_

---------

Co-authored-by: root <root@Bluedot-AI>
2024-03-22 17:00:29 -07:00
aditya thomas
515aab3312 community[patch]: invoke callback prior to yielding token (openai) (#19389)
**Description:** Invoke callback prior to yielding token for BaseOpenAI
& OpenAIChat
**Issue:** [Callback for on_llm_new_token should be invoked before the
token is yielded by the model
#16913](https://github.com/langchain-ai/langchain/issues/16913)
**Dependencies:** None
2024-03-22 16:45:55 -07:00
aditya thomas
49e932cd24 community[patch]: invoke callback prior to yielding token (fireworks) (#19388)
**Description:** Invoke callback prior to yielding token for Fireworks
**Issue:** [Callback for on_llm_new_token should be invoked before the
token is yielded by the model
#16913](https://github.com/langchain-ai/langchain/issues/16913)
**Dependencies:** None
2024-03-22 16:44:06 -07:00
aditya thomas
16ef88a87d docs: moving FireworksEmbeddings documentation to docs folder (#19398)
**Description:** Moving FireworksEmbeddings documentation to the
location docs/integration/text_embedding/ from langchain_fireworks/docs/
**Issue:** FireworksEmbeddings documentation was not in the correct
location
**Dependencies:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-22 23:24:22 +00:00
Leonid Ganeline
06190063e7 infra: makefile api_docs_clean fix (#19405)
Fixed a Makefile command that cleans up the api_docs
2024-03-22 15:45:55 -07:00
Christophe Bornet
1b813fe6fe langchain[patch]: Add async methods to VectorStoreRetrieverMemory (#19408) 2024-03-22 15:44:24 -07:00
Tarun Jain
ef6d3d66d6 community[patch]: docarray requires hnsw installation (#19416)
I have a small dataset, and I tried to use docarray:
`DocArrayHnswSearch`. But when I execute it, it returns:
```bash
    raise ImportError(
ImportError: Could not import docarray python package. Please install it with `pip install "langchain[docarray]"`.
```

Instead of docarray it needs to be 

```bash
docarray[hnswlib]
```

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-22 22:39:07 +00:00
German Swan
d4dc98a9f9 community[patch]: RecursiveUrlLoader: add base_url option (#19421)
RecursiveUrlLoader does not currently provide an option to set
`base_url` other than the `url`, though it uses a function with such an
option.
For example, this makes it unable to parse
`https://python.langchain.com/docs`, as it returns the 404 page, and
`https://python.langchain.com/docs/get_started/introduction` has no
child routes to parse.
`base_url` allows setting the `https://python.langchain.com/docs` to
filter by, while the starting URL is anything inside, that contains
relevant links to continue crawling.
I understand that for this case, the docusaurus loader could be used,
but it's a common issue with many websites.
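
A sketch of the usage described above:

```python
from langchain_community.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    # Start crawling somewhere inside the docs...
    url="https://python.langchain.com/docs/get_started/introduction",
    # ...but follow any link under /docs, not just children of the start URL.
    base_url="https://python.langchain.com/docs",
    max_depth=2,
)

docs = loader.load()
```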

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-22 15:34:31 -07:00
Erick Friis
e71daa7a03 openai[patch]: add test coverage to output (#19462) 2024-03-22 15:33:10 -07:00
igeni
4babefcb2f cli[patch]: Modified regular expression (#19449)
- **Description:** Modified regular expression to add support for
unicode chars and simplify pattern

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-22 15:24:08 -07:00
Ray Bell
7d36ee38b7 docs: point to titantic dataset on web (#19455)
Updated `pd.read_csv("titantic.csv")` to
`pd.read_csv("https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv")`
i.e. it will read it
https://raw.githubusercontent.com/pandas-dev/pandas/main/doc/data/titanic.csv
and allow anyone to run the code.

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-22 22:22:41 +00:00
Ray Bell
f959fad56e docs: use invoke instead of run (#19457)
Updated the deprecated run with invoke

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-22 15:08:26 -07:00
Bagatur
d93d49bc43 openai[patch]: tool use integration test (#19460) 2024-03-22 14:49:54 -07:00
Erick Friis
a99e644913 openai[patch]: integration test structured output (#19459) 2024-03-22 21:43:24 +00:00
Erick Friis
ac57123f40 openai[patch]: release 0.1.1 (#19458) 2024-03-22 21:36:21 +00:00
Luca Dorigo
47cfbe7522 openai[patch]: [URGENT REGRESSION FIX] Don't fail if tool message already doesn't contain name (#19435)

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-22 14:33:50 -07:00
aditya thomas
bc028294d0 docs: delete mistralai embeddings doc from incorrect location (#19432)
**Description:** Delete MistralAIEmbeddings usage document from folder
partners/mistralai/docs
**Issue:** The document is present in the folder docs/docs
**Dependencies:** None
2024-03-22 14:02:59 -07:00
Erick Friis
11e37943ed mistralai[patch]: fix core version (#19454) 2024-03-22 20:48:13 +00:00
Erick Friis
3b093160c4 mistralai[patch]: release 0.1.0rc1 (#19453) 2024-03-22 20:34:36 +00:00
aditya thomas
4856a87261 community[patch]: invoke callback prior to yielding token (llama.cpp) (#19392)
**Description:** Invoke callback prior to yielding token for llama.cpp
**Issue:** [Callback for on_llm_new_token should be invoked before the
token is yielded by the model
#16913](https://github.com/langchain-ai/langchain/issues/16913)
**Dependencies:** None
2024-03-22 16:17:56 -04:00
ccurme
c4599444ee mistralai: update tool calling (#19451)
```python
from langchain.agents import tool
from langchain_mistralai import ChatMistralAI


llm = ChatMistralAI(model="mistral-large-latest", temperature=0)

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)


tools = [get_word_length]
llm_with_tools = llm.bind_tools(tools)

llm_with_tools.invoke("how long is the word chrysanthemum")
```
currently raises
```
AttributeError: 'dict' object has no attribute 'model_dump'
```

Same with `.with_structured_output`
```python
from langchain_mistralai import ChatMistralAI
from langchain_core.pydantic_v1 import BaseModel

class AnswerWithJustification(BaseModel):
    """An answer to the user question along with justification for the answer."""
    answer: str
    justification: str

llm = ChatMistralAI(model="mistral-large-latest", temperature=0)
structured_llm = llm.with_structured_output(AnswerWithJustification)

structured_llm.invoke("What weighs more a pound of bricks or a pound of feathers")
```

This PR appears to fix both.
2024-03-22 16:03:48 -04:00
Erick Friis
cceaca3e4f cookbook[patch]: add strip of quotes (#19452) 2024-03-22 19:10:39 +00:00
ccurme
8a2528c34a [langchain] fix OpenAIAssistantRunnable.create_assistant (#19081)
- **Description:** OpenAI assistants support some pre-built tools (e.g.,
`"retrieval"` and `"code_interpreter"`) and expect these as `{"type":
"code_interpreter"}`. This may have been upset by
https://github.com/langchain-ai/langchain/pull/18935
- **Issue:** https://github.com/langchain-ai/langchain/issues/19057
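
A minimal sketch of the shape the fix restores (name, instructions and model
are illustrative):

```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

# Pre-built tools are passed through as {"type": ...} dicts again instead of
# being forced through the function-tool conversion.
assistant = OpenAIAssistantRunnable.create_assistant(
    name="data helper",
    instructions="You are a helpful analyst. Use the code interpreter when useful.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-turbo",
)

output = assistant.invoke({"content": "What is 10 minus 4 raised to the 2.7?"})
```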
2024-03-22 13:23:19 -04:00
Harrison Chase
b40c80007f core[minor]: Add utility code to create tool examples (#18602)
Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-03-22 13:17:40 -04:00
Erick Friis
53ac1ebbbc mistralai[minor]: 0.1.0rc0, remove mistral sdk (#19420) 2024-03-22 01:24:58 +00:00
William FH
e980c14d6a core[patch]: allow "placeholder" type in from_messages tuples (#19152)
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-21 22:09:24 +00:00
billytrend-cohere
f6bcd42421 community[patch]: Replace positional argument with text=text for cohere>=5 compatibility (#19407)
- **Description:** Replace positional argument with text=text for
cohere>=5 compatibility
2024-03-21 10:42:51 -07:00
enfeng
b20c2640da anthropic[patch]: update base_url of anthropic (#18634)
A small change ~

- [ ] **update base_url**: "package: langchain_anthropic"

---------

Co-authored-by: yangenfeng <yangenfeng@xiaoniangao.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-03-20 21:04:55 -07:00
Erick Friis
a9cda536ad openai[patch]: fix core min version (#19366) 2024-03-20 15:38:29 -07:00
Erick Friis
0b20c098df openai[patch]: fix name param (#19365) 2024-03-20 22:22:09 +00:00
Erick Friis
f6c8700326 openai[patch]: release 0.1.0, message id and name support (#19363) 2024-03-20 15:11:39 -07:00
Bagatur
3fa711dce0 experimental[patch]: Release 0.0.55 (#19353) 2024-03-20 13:06:39 -07:00
Erick Friis
2bcd760c46 robocorp[patch]: run integration tests on release (#19358) 2024-03-20 19:31:12 +00:00
Erick Friis
a031c183ae robocorp[patch]: release 0.0.4 (#19357) 2024-03-20 12:28:41 -07:00
4231 changed files with 268995 additions and 157126 deletions

View File

@@ -12,7 +12,7 @@
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
"workspaceFolder": "/workspaces/langchain",
// Prevent the container from shutting down
"overrideCommand": true

View File

@@ -6,7 +6,7 @@ services:
context: ..
volumes:
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces:cached
- ..:/workspaces/langchain:cached
networks:
- langchain-network
# environment:

View File

@@ -26,6 +26,13 @@ body:
[LangChain Github Discussions](https://github.com/langchain-ai/langchain/discussions),
[LangChain Github Issues](https://github.com/langchain-ai/langchain/issues?q=is%3Aissue),
[LangChain ChatBot](https://chat.langchain.com/)
- type: input
id: url
attributes:
label: URL
description: URL to documentation
validations:
required: false
- type: checkboxes
id: checks
attributes:
@@ -48,4 +55,4 @@ body:
label: "Idea or request for content:"
description: >
Please describe as clearly as possible what topics you think are missing
from the current documentation.
from the current documentation.

View File

@@ -26,4 +26,4 @@ Additional guidelines:
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in langchain.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, hwchase17.
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.

View File

@@ -537,7 +537,9 @@ if __name__ == "__main__":
"nfcampos",
"efriis",
"eyurtsev",
"rlancemartin"
"rlancemartin",
"ccurme",
"vbarda",
}
hidden_logins = {
"dev2049",

View File

@@ -6,8 +6,8 @@ from typing import Dict
LANGCHAIN_DIRS = [
"libs/core",
"libs/text-splitters",
"libs/community",
"libs/langchain",
"libs/community",
"libs/experimental",
]
@@ -19,6 +19,7 @@ if __name__ == "__main__":
"test": set(),
"extended-test": set(),
}
docs_edited = False
if len(files) == 300:
# max diff length is 300 files - there are likely files missing
@@ -47,6 +48,17 @@ if __name__ == "__main__":
found = True
if found:
dirs_to_run["extended-test"].add(dir_)
elif file.startswith("libs/standard-tests"):
# TODO: update to include all packages that rely on standard-tests (all partner packages)
# note: won't run on external repo partners
dirs_to_run["lint"].add("libs/standard-tests")
dirs_to_run["test"].add("libs/partners/mistralai")
dirs_to_run["test"].add("libs/partners/openai")
dirs_to_run["test"].add("libs/partners/anthropic")
dirs_to_run["test"].add("libs/partners/ai21")
dirs_to_run["test"].add("libs/partners/fireworks")
dirs_to_run["test"].add("libs/partners/groq")
elif file.startswith("libs/cli"):
# todo: add cli makefile
pass
@@ -65,6 +77,8 @@ if __name__ == "__main__":
"an update for this new library!"
)
elif any(file.startswith(p) for p in ["docs/", "templates/", "cookbook/"]):
if file.startswith("docs/"):
docs_edited = True
dirs_to_run["lint"].add(".")
outputs = {
@@ -73,7 +87,8 @@ if __name__ == "__main__":
),
"dirs-to-test": list(dirs_to_run["test"] | dirs_to_run["extended-test"]),
"dirs-to-extended-test": list(dirs_to_run["extended-test"]),
"docs-edited": "true" if docs_edited else "",
}
for key, value in outputs.items():
json_output = json.dumps(value)
print(f"{key}={json_output}") # noqa: T201
print(f"{key}={json_output}")

View File

@@ -13,13 +13,16 @@ MIN_VERSION_LIBS = [
def get_min_version(version: str) -> str:
# base regex for x.x.x with cases for rc/post/etc
# valid strings: https://peps.python.org/pep-0440/#public-version-identifiers
vstring = r"\d+(?:\.\d+){0,2}(?:(?:a|b|rc|\.post|\.dev)\d+)?"
# case ^x.x.x
_match = re.match(r"^\^(\d+(?:\.\d+){0,2})$", version)
_match = re.match(f"^\\^({vstring})$", version)
if _match:
return _match.group(1)
# case >=x.x.x,<y.y.y
_match = re.match(r"^>=(\d+(?:\.\d+){0,2}),<(\d+(?:\.\d+){0,2})$", version)
_match = re.match(f"^>=({vstring}),<({vstring})$", version)
if _match:
_min = _match.group(1)
_max = _match.group(2)
@@ -27,7 +30,7 @@ def get_min_version(version: str) -> str:
return _min
# case x.x.x
_match = re.match(r"^(\d+(?:\.\d+){0,2})$", version)
_match = re.match(f"^({vstring})$", version)
if _match:
return _match.group(1)
@@ -52,6 +55,9 @@ def get_min_version_from_toml(toml_path: str):
# Get the version string
version_string = dependencies[lib]
if isinstance(version_string, dict):
version_string = version_string["version"]
# Use parse_version to get the minimum supported version from version_string
min_version = get_min_version(version_string)
@@ -70,4 +76,4 @@ if __name__ == "__main__":
print(
" ".join([f"{lib}=={version}" for lib, version in min_versions.items()])
) # noqa: T201
)

7
.github/workflows/.codespell-exclude vendored Normal file
View File

@@ -0,0 +1,7 @@
libs/community/langchain_community/llms/yuan2.py
"NotIn": "not in",
- `/checkin`: Check-in
docs/docs/integrations/providers/trulens.mdx
self.assertIn(
from trulens_eval import Tru
tru = Tru()

View File

@@ -58,6 +58,7 @@ jobs:
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
NVIDIA_API_KEY: ${{ secrets.NVIDIA_API_KEY }}
GOOGLE_SEARCH_API_KEY: ${{ secrets.GOOGLE_SEARCH_API_KEY }}
GOOGLE_CSE_ID: ${{ secrets.GOOGLE_CSE_ID }}
@@ -76,6 +77,8 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
COHERE_API_KEY: ${{ secrets.COHERE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
run: |
make integration_tests

View File

@@ -13,6 +13,11 @@ on:
required: true
type: string
default: 'libs/langchain'
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
PYTHON_VERSION: "3.11"
@@ -20,7 +25,7 @@ env:
jobs:
build:
if: github.ref == 'refs/heads/master'
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
environment: Scheduled testing
runs-on: ubuntu-latest
@@ -67,19 +72,78 @@ jobs:
run: |
echo pkg-name="$(poetry version | cut -d ' ' -f 1)" >> $GITHUB_OUTPUT
echo version="$(poetry version --short)" >> $GITHUB_OUTPUT
release-notes:
needs:
- build
runs-on: ubuntu-latest
outputs:
release-body: ${{ steps.generate-release-body.outputs.release-body }}
steps:
- uses: actions/checkout@v4
with:
repository: langchain-ai/langchain
path: langchain
sparse-checkout: | # this only grabs files for relevant dir
${{ inputs.working-directory }}
ref: master # this scopes to just master branch
fetch-depth: 0 # this fetches entire commit history
- name: Check Tags
id: check-tags
shell: bash
working-directory: langchain/${{ inputs.working-directory }}
env:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
run: |
REGEX="^$PKG_NAME==\\d+\\.\\d+\\.\\d+\$"
echo $REGEX
PREV_TAG=$(git tag --sort=-creatordate | grep -P $REGEX || true | head -1)
TAG="${PKG_NAME}==${VERSION}"
if [ "$TAG" == "$PREV_TAG" ]; then
echo "No new version to release"
exit 1
fi
echo tag="$TAG" >> $GITHUB_OUTPUT
echo prev-tag="$PREV_TAG" >> $GITHUB_OUTPUT
- name: Generate release body
id: generate-release-body
working-directory: langchain
env:
WORKING_DIR: ${{ inputs.working-directory }}
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
TAG: ${{ steps.check-tags.outputs.tag }}
PREV_TAG: ${{ steps.check-tags.outputs.prev-tag }}
run: |
PREAMBLE="Changes since $PREV_TAG"
# if PREV_TAG is empty, then we are releasing the first version
if [ -z "$PREV_TAG" ]; then
PREAMBLE="Initial release"
PREV_TAG=$(git rev-list --max-parents=0 HEAD)
fi
{
echo 'release-body<<EOF'
echo "# Release $TAG"
echo $PREAMBLE
echo
git log --format="%s" "$PREV_TAG"..HEAD -- $WORKING_DIR
echo EOF
} >> "$GITHUB_OUTPUT"
test-pypi-publish:
needs:
- build
- release-notes
uses:
./.github/workflows/_test_release.yml
with:
working-directory: ${{ inputs.working-directory }}
dangerous-nonmaster-release: ${{ inputs.dangerous-nonmaster-release }}
secrets: inherit
pre-release-checks:
needs:
- build
- release-notes
- test-pypi-publish
runs-on: ubuntu-latest
steps:
@@ -112,7 +176,7 @@ jobs:
PKG_NAME: ${{ needs.build.outputs.pkg-name }}
VERSION: ${{ needs.build.outputs.version }}
# Here we use:
# - The default regular PyPI index as the *primary* index, meaning
# - The default regular PyPI index as the *primary* index, meaning
# that it takes priority (https://pypi.org/simple)
# - The test PyPI index as an extra index, so that any dependencies that
# are not found on test PyPI can be resolved and installed anyway.
@@ -171,7 +235,7 @@ jobs:
env:
MIN_VERSIONS: ${{ steps.min-version.outputs.min-versions }}
run: |
poetry run pip install $MIN_VERSIONS
poetry run pip install --force-reinstall $MIN_VERSIONS --editable .
make tests
working-directory: ${{ inputs.working-directory }}
@@ -215,12 +279,15 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # for airbyte
MONGODB_ATLAS_URI: ${{ secrets.MONGODB_ATLAS_URI }}
VOYAGE_API_KEY: ${{ secrets.VOYAGE_API_KEY }}
UPSTAGE_API_KEY: ${{ secrets.UPSTAGE_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
run: make integration_tests
working-directory: ${{ inputs.working-directory }}
publish:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
runs-on: ubuntu-latest
@@ -262,6 +329,7 @@ jobs:
mark-release:
needs:
- build
- release-notes
- test-pypi-publish
- pre-release-checks
- publish
@@ -290,14 +358,14 @@ jobs:
with:
name: dist
path: ${{ inputs.working-directory }}/dist/
- name: Create Release
- name: Create Tag
uses: ncipollo/release-action@v1
if: ${{ inputs.working-directory == 'libs/langchain' }}
with:
artifacts: "dist/*"
token: ${{ secrets.GITHUB_TOKEN }}
draft: false
generateReleaseNotes: true
tag: v${{ needs.build.outputs.version }}
commit: master
generateReleaseNotes: false
tag: ${{needs.build.outputs.pkg-name}}==${{ needs.build.outputs.version }}
body: ${{ needs.release-notes.outputs.release-body }}
commit: ${{ github.sha }}
makeLatest: ${{ needs.build.outputs.pkg-name == 'langchain-core'}}
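
For reference, the version-tag convention enforced by the `Check Tags` step above (`<pkg-name>==<major>.<minor>.<patch>`, with pre-releases and bare `v`-prefixed tags excluded) can be sanity-checked locally. This is a minimal Python sketch, not part of the workflow; the package name below is an assumption standing in for whatever the build job outputs.

```python
import re

# Assumed package name; the workflow derives this from the build job's pkg-name output.
PKG_NAME = "langchain-core"
pattern = re.compile(rf"^{re.escape(PKG_NAME)}==\d+\.\d+\.\d+$")

for tag in ["langchain-core==0.2.3", "langchain-core==0.2.3rc1", "v0.2.3"]:
    # Only plain X.Y.Z tags match, mirroring the grep -P check in the Check Tags step.
    print(tag, bool(pattern.match(tag)))
```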

50
.github/workflows/_test_doc_imports.yml vendored Normal file
View File

@@ -0,0 +1,50 @@
name: test_doc_imports
on:
workflow_call:
env:
POETRY_VERSION: "1.7.1"
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.11"
name: "check doc imports #${{ matrix.python-version }}"
steps:
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }} + Poetry ${{ env.POETRY_VERSION }}
uses: "./.github/actions/poetry_setup"
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
cache-key: core
- name: Install dependencies
shell: bash
run: poetry install --with test
- name: Install langchain editable
run: |
poetry run pip install -e libs/core libs/langchain libs/community libs/experimental
- name: Check doc imports
shell: bash
run: |
poetry run python docs/scripts/check_imports.py
- name: Ensure the test did not create any additional files
shell: bash
run: |
set -eu
STATUS="$(git status)"
echo "$STATUS"
# grep will exit non-zero if the target message isn't found,
# and `set -e` above will cause the step to fail.
echo "$STATUS" | grep 'nothing to commit, working tree clean'

View File

@@ -7,6 +7,11 @@ on:
required: true
type: string
description: "From which folder this pipeline executes"
dangerous-nonmaster-release:
required: false
type: boolean
default: false
description: "Release from a non-master branch (danger!)"
env:
POETRY_VERSION: "1.7.1"
@@ -14,7 +19,7 @@ env:
jobs:
build:
if: github.ref == 'refs/heads/master'
if: github.ref == 'refs/heads/master' || inputs.dangerous-nonmaster-release
runs-on: ubuntu-latest
outputs:

View File

@@ -36,6 +36,7 @@ jobs:
dirs-to-lint: ${{ steps.set-matrix.outputs.dirs-to-lint }}
dirs-to-test: ${{ steps.set-matrix.outputs.dirs-to-test }}
dirs-to-extended-test: ${{ steps.set-matrix.outputs.dirs-to-extended-test }}
docs-edited: ${{ steps.set-matrix.outputs.docs-edited }}
lint:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
@@ -60,6 +61,12 @@ jobs:
working-directory: ${{ matrix.working-directory }}
secrets: inherit
test-doc-imports:
needs: [ build ]
if: ${{ needs.build.outputs.dirs-to-test != '[]' || needs.build.outputs.docs-edited }}
uses: ./.github/workflows/_test_doc_imports.yml
secrets: inherit
compile-integration-tests:
name: cd ${{ matrix.working-directory }}
needs: [ build ]
@@ -134,7 +141,7 @@ jobs:
echo "$STATUS" | grep 'nothing to commit, working tree clean'
ci_success:
name: "CI Success"
needs: [build, lint, test, compile-integration-tests, dependencies, extended-tests]
needs: [build, lint, test, compile-integration-tests, dependencies, extended-tests, test-doc-imports]
if: |
always()
runs-on: ubuntu-latest

View File

@@ -3,9 +3,9 @@ name: CI / cd . / make spell_check
on:
push:
branches: [master]
branches: [master, v0.1]
pull_request:
branches: [master]
branches: [master, v0.1]
permissions:
contents: read
@@ -29,9 +29,9 @@ jobs:
python .github/workflows/extract_ignored_words_list.py
id: extract_ignore_words
- name: Codespell
uses: codespell-project/actions-codespell@v2
with:
skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
exclude_file: libs/community/langchain_community/llms/yuan2.py
# - name: Codespell
# uses: codespell-project/actions-codespell@v2
# with:
# skip: guide_imports.json,*.ambr,./cookbook/data/imdb_top_1000.csv,*.lock
# ignore_words_list: ${{ steps.extract_ignore_words.outputs.ignore_words_list }}
# exclude_file: ./.github/workflows/codespell-exclude

View File

@@ -7,4 +7,4 @@ ignore_words_list = (
pyproject_toml.get("tool", {}).get("codespell", {}).get("ignore-words-list")
)
print(f"::set-output name=ignore_words_list::{ignore_words_list}") # noqa: T201
print(f"::set-output name=ignore_words_list::{ignore_words_list}")

View File

@@ -10,19 +10,22 @@ env:
jobs:
build:
defaults:
run:
working-directory: libs/langchain
runs-on: ubuntu-latest
environment: Scheduled testing
strategy:
fail-fast: false
matrix:
python-version:
- "3.8"
- "3.9"
- "3.10"
- "3.11"
name: Python ${{ matrix.python-version }}
working-directory:
- "libs/partners/openai"
- "libs/partners/anthropic"
- "libs/partners/ai21"
- "libs/partners/fireworks"
- "libs/partners/groq"
- "libs/partners/mistralai"
- "libs/partners/together"
name: Python ${{ matrix.python-version }} - ${{ matrix.working-directory }}
steps:
- uses: actions/checkout@v4
@@ -31,7 +34,7 @@ jobs:
with:
python-version: ${{ matrix.python-version }}
poetry-version: ${{ env.POETRY_VERSION }}
working-directory: libs/langchain
working-directory: ${{ matrix.working-directory }}
cache-key: scheduled
- name: 'Authenticate to Google Cloud'
@@ -40,26 +43,15 @@ jobs:
with:
credentials_json: '${{ secrets.GOOGLE_CREDENTIALS }}'
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v4
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.AWS_REGION }}
- name: Install dependencies
working-directory: libs/langchain
working-directory: ${{ matrix.working-directory }}
shell: bash
run: |
echo "Running scheduled tests, installing dependencies with poetry..."
poetry install --with=test_integration,test
- name: Install deps outside pyproject
if: ${{ startsWith(inputs.working-directory, 'libs/community/') }}
shell: bash
run: poetry run pip install "boto3<2" "google-cloud-aiplatform<2"
- name: Run tests
- name: Run integration tests
working-directory: ${{ matrix.working-directory }}
shell: bash
env:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
@@ -70,11 +62,16 @@ jobs:
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_CHAT_DEPLOYMENT_NAME }}
AZURE_OPENAI_LLM_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_LLM_DEPLOYMENT_NAME }}
AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME: ${{ secrets.AZURE_OPENAI_EMBEDDINGS_DEPLOYMENT_NAME }}
AI21_API_KEY: ${{ secrets.AI21_API_KEY }}
FIREWORKS_API_KEY: ${{ secrets.FIREWORKS_API_KEY }}
GROQ_API_KEY: ${{ secrets.GROQ_API_KEY }}
MISTRAL_API_KEY: ${{ secrets.MISTRAL_API_KEY }}
TOGETHER_API_KEY: ${{ secrets.TOGETHER_API_KEY }}
run: |
make scheduled_tests
make integration_test
- name: Ensure the tests did not create any additional files
working-directory: ${{ matrix.working-directory }}
shell: bash
run: |
set -eu

1
.gitignore vendored
View File

@@ -178,3 +178,4 @@ _dist
docs/docs/templates
prof
virtualenv/

View File

@@ -1,44 +1,57 @@
.PHONY: all clean docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck
.PHONY: all clean help docs_build docs_clean docs_linkcheck api_docs_build api_docs_clean api_docs_linkcheck spell_check spell_fix lint lint_package lint_tests format format_diff
# Default target executed when no arguments are given to make.
## help: Show this help info.
help: Makefile
@printf "\n\033[1mUsage: make <TARGETS> ...\033[0m\n\n\033[1mTargets:\033[0m\n\n"
@sed -n 's/^## //p' $< | awk -F':' '{printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort | sed -e 's/^/ /'
## all: Default target, shows help.
all: help
## clean: Clean documentation and API documentation artifacts.
clean: docs_clean api_docs_clean
######################
# DOCUMENTATION
######################
clean: docs_clean api_docs_clean
## docs_build: Build the documentation.
docs_build:
docs/.local_build.sh
cd docs && make build
## docs_clean: Clean the documentation build artifacts.
docs_clean:
@if [ -d _dist ]; then \
rm -r _dist; \
echo "Directory _dist has been cleaned."; \
else \
echo "Nothing to clean."; \
fi
cd docs && make clean
## docs_linkcheck: Run linkchecker on the documentation.
docs_linkcheck:
poetry run linkchecker _dist/docs/ --ignore-url node_modules
## api_docs_build: Build the API Reference documentation.
api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
api_docs_clean:
rm -f docs/api_reference/api_reference.rst
cd docs/api_reference && poetry run make clean
api_docs_quick_preview:
poetry run python docs/api_reference/create_api_rst.py text-splitters
cd docs/api_reference && poetry run make html
open docs/api_reference/_build/html/text_splitters_api_reference.html
## api_docs_clean: Clean the API Reference documentation build artifacts.
api_docs_clean:
find ./docs/api_reference -name '*_api_reference.rst' -delete
git clean -fdX ./docs/api_reference
## api_docs_linkcheck: Run linkchecker on the API Reference documentation.
api_docs_linkcheck:
poetry run linkchecker docs/api_reference/_build/html/index.html
## spell_check: Run codespell on the project.
spell_check:
poetry run codespell --toml pyproject.toml
## spell_fix: Run codespell on the project and fix the errors.
spell_fix:
poetry run codespell --toml pyproject.toml -w
@@ -46,31 +59,14 @@ spell_fix:
# LINTING AND FORMATTING
######################
## lint: Run linting on the project.
lint lint_package lint_tests:
poetry run ruff docs templates cookbook
poetry run ruff check docs templates cookbook
poetry run ruff format docs templates cookbook --diff
poetry run ruff --select I docs templates cookbook
poetry run ruff check --select I docs templates cookbook
git grep 'from langchain import' docs/docs templates cookbook | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
## format: Format the project files.
format format_diff:
poetry run ruff format docs templates cookbook
poetry run ruff --select I --fix docs templates cookbook
######################
# HELP
######################
help:
@echo '===================='
@echo '-- DOCUMENTATION --'
@echo 'clean - run docs_clean and api_docs_clean'
@echo 'docs_build - build the documentation'
@echo 'docs_clean - clean the documentation build artifacts'
@echo 'docs_linkcheck - run linkchecker on the documentation'
@echo 'api_docs_build - build the API Reference documentation'
@echo 'api_docs_clean - clean the API Reference documentation build artifacts'
@echo 'api_docs_linkcheck - run linkchecker on the API Reference documentation'
@echo 'spell_check - run codespell on the project'
@echo 'spell_fix - run codespell on the project and fix the errors'
@echo '-- TEST and LINT tasks are within libs/*/ per-package --'
poetry run ruff check --select I --fix docs templates cookbook

View File

@@ -4,7 +4,7 @@
[![Release Notes](https://img.shields.io/github/release/langchain-ai/langchain)](https://github.com/langchain-ai/langchain/releases)
[![CI](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml/badge.svg)](https://github.com/langchain-ai/langchain/actions/workflows/check_diffs.yml)
[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain)
[![Downloads](https://static.pepy.tech/badge/langchain-core/month)](https://pepy.tech/project/langchain-core)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
[![](https://dcbadge.vercel.app/api/server/6adMQxSpJS?compact=true&style=flat)](https://discord.gg/6adMQxSpJS)
@@ -34,34 +34,40 @@ conda install langchain -c conda-forge
## 🤔 What is LangChain?
**LangChain** is a framework for developing applications powered by language models. It enables applications that:
- **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
- **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
**LangChain** is a framework for developing applications powered by large language models (LLMs).
This framework consists of several parts.
- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
- **[LangChain Templates](templates)**: A collection of easily deployable reference architectures for a wide variety of tasks.
- **[LangServe](https://github.com/langchain-ai/langserve)**: A library for deploying LangChain chains as a REST API.
- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
- **[LangGraph](https://python.langchain.com/docs/langgraph)**: LangGraph is a library for building stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain. It extends the LangChain Expression Language with the ability to coordinate multiple chains (or actors) across multiple steps of computation in a cyclic manner.
For these applications, LangChain simplifies the entire application lifecycle:
The LangChain libraries themselves are made up of several different packages.
- **[`langchain-core`](libs/core)**: Base abstractions and LangChain Expression Language.
- **[`langchain-community`](libs/community)**: Third party integrations.
- **[`langchain`](libs/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **Open-source libraries**: Build your applications using LangChain's [modular building blocks](https://python.langchain.com/docs/expression_language/) and [components](https://python.langchain.com/docs/modules/). Integrate with hundreds of [third-party providers](https://python.langchain.com/docs/integrations/platforms/).
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://python.langchain.com/docs/langsmith/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn any chain into a REST API with [LangServe](https://python.langchain.com/docs/langserve).
### Open-source libraries
- **`langchain-core`**: Base abstractions and LangChain Expression Language.
- **`langchain-community`**: Third party integrations.
- Some integrations have been further split into **partner packages** that only rely on **`langchain-core`**. Examples include **`langchain_openai`** and **`langchain_anthropic`**.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **[`LangGraph`](https://python.langchain.com/docs/langgraph)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
### Productionization:
- **[LangSmith](https://python.langchain.com/docs/langsmith)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
### Deployment:
- **[LangServe](https://python.langchain.com/docs/langserve)**: A library for deploying LangChain chains as REST APIs.
![Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers.](docs/static/svg/langchain_stack.svg "LangChain Architecture Overview")
## 🧱 What can you build with LangChain?
**❓ Retrieval augmented generation**
**❓ Question answering with RAG**
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- End-to-end Example: [Chat LangChain](https://chat.langchain.com) and [repo](https://github.com/langchain-ai/chat-langchain)
**💬 Analyzing structured data**
**🧱 Extracting structured output**
- [Documentation](https://python.langchain.com/docs/use_cases/qa_structured/sql)
- End-to-end Example: [SQL Llama2 Template](https://github.com/langchain-ai/langchain/tree/master/templates/sql-llama2)
- [Documentation](https://python.langchain.com/docs/use_cases/extraction/)
- End-to-end Example: [LangChain Extract](https://github.com/langchain-ai/langchain-extract/)
**🤖 Chatbots**
@@ -72,34 +78,51 @@ And much more! Head to the [Use cases](https://python.langchain.com/docs/use_cas
## 🚀 How does LangChain help?
The main value props of the LangChain libraries are:
1. **Components**: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
1. **Components**: composable building blocks, tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
## LangChain Expression Language (LCEL)
LCEL is the foundation of many of LangChain's components, and is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
- **[Overview](https://python.langchain.com/docs/expression_language/)**: LCEL and its benefits
- **[Interface](https://python.langchain.com/docs/expression_language/interface)**: The standard interface for LCEL objects
- **[Primitives](https://python.langchain.com/docs/expression_language/primitives)**: More on the primitives LCEL includes
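As a quick illustration, here is a minimal LCEL sketch (assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set; the prompt and model choice are placeholders):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Compose a chain declaratively: prompt -> chat model -> string output parser.
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain = prompt | model | StrOutputParser()

print(chain.invoke({"topic": "bears"}))
```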
## Components
Components fall into the following **modules**:
**📃 Model I/O:**
This includes prompt management, prompt optimization, a generic interface for all LLMs, and common utilities for working with LLMs.
This includes [prompt management](https://python.langchain.com/docs/modules/model_io/prompts/), [prompt optimization](https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/), a generic interface for [chat models](https://python.langchain.com/docs/modules/model_io/chat/) and [LLMs](https://python.langchain.com/docs/modules/model_io/llms/), and common utilities for working with [model outputs](https://python.langchain.com/docs/modules/model_io/output_parsers/).
**📚 Retrieval:**
Data Augmented Generation involves specific types of chains that first interact with an external data source to fetch data for use in the generation step. Examples include summarization of long pieces of text and question/answering over specific data sources.
Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/modules/data_connection/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/modules/data_connection/document_loaders/), [then retrieving it](https://python.langchain.com/docs/modules/data_connection/retrievers/) for use in the generation step.
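A minimal sketch of that load / prepare / retrieve flow (assuming `faiss-cpu`, `langchain-openai`, and an `OPENAI_API_KEY` are available; the sample document is a placeholder):

```python
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load: in a real application this would come from a document loader (web page, PDF, ...).
docs = [Document(page_content="LangChain is a framework for developing applications powered by LLMs.")]

# Prepare: split the documents into chunks suitable for embedding.
splits = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Retrieve: index the chunks and fetch the most relevant ones for a query.
vectorstore = FAISS.from_documents(splits, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
print(retriever.invoke("What is LangChain?"))
```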
**🤖 Agents:**
Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents.
Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/modules/agents/), a [selection of agents](https://python.langchain.com/docs/modules/agents/agent_types/) to choose from, and examples of end-to-end agents.
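A minimal agent sketch in the same spirit (assuming `langchainhub` and `langchain-openai` are installed; the tool and prompt here are illustrative placeholders, not a prescribed setup):

```python
from langchain import hub
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def get_word_length(word: str) -> int:
    """Return the number of characters in a word."""
    return len(word)


llm = ChatOpenAI(model="gpt-3.5-turbo-1106", temperature=0)
prompt = hub.pull("hwchase17/openai-tools-agent")  # prompt with an agent_scratchpad placeholder
tools = [get_word_length]

# The model decides which tool to call, observes the result, and repeats until done.
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
executor.invoke({"input": "How many letters are in the word 'LangChain'?"})
```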
## 📖 Documentation
Please see [here](https://python.langchain.com) for full documentation, which includes:
- [Getting started](https://python.langchain.com/docs/get_started/introduction): installation, setting up the environment, simple examples
- Overview of the [interfaces](https://python.langchain.com/docs/expression_language/), [modules](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
- [Use case](https://python.langchain.com/docs/use_cases/qa_structured/sql) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/adapters/openai)
- [LangSmith](https://python.langchain.com/docs/langsmith/), [LangServe](https://python.langchain.com/docs/langserve), and [LangChain Template](https://python.langchain.com/docs/templates/) overviews
- [Reference](https://api.python.langchain.com): full API docs
- [Use case](https://python.langchain.com/docs/use_cases/) walkthroughs and best practice [guides](https://python.langchain.com/docs/guides/)
- Overviews of the [interfaces](https://python.langchain.com/docs/expression_language/), [components](https://python.langchain.com/docs/modules/), and [integrations](https://python.langchain.com/docs/integrations/providers)
You can also check out the full [API Reference docs](https://api.python.langchain.com).
## 🌐 Ecosystem
- [🦜🛠️ LangSmith](https://python.langchain.com/docs/langsmith/): Tracing and evaluating your language model applications and intelligent agents to help you move from prototype to production.
- [🦜🕸️ LangGraph](https://python.langchain.com/docs/langgraph): Creating stateful, multi-actor applications with LLMs, built on top of (and intended to be used with) LangChain primitives.
- [🦜🏓 LangServe](https://python.langchain.com/docs/langserve): Deploying LangChain runnables and chains as REST APIs.
- [LangChain Templates](https://python.langchain.com/docs/templates/): Example applications hosted with LangServe.
## 💁 Contributing

View File

@@ -38,9 +38,9 @@
"\n",
"To run locally, we use Ollama.ai. \n",
"\n",
"See [here](https://python.langchain.com/docs/integrations/chat/ollama) for details on installation and setup.\n",
"See [here](/docs/integrations/chat/ollama) for details on installation and setup.\n",
"\n",
"Also, see [here](https://python.langchain.com/docs/guides/local_llms) for our full guide on local LLMs.\n",
"Also, see [here](/docs/guides/development/local_llms) for our full guide on local LLMs.\n",
" \n",
"To use an external API, which is not private, we can use Replicate."
]

View File

@@ -464,8 +464,8 @@
" Check if the base64 data is an image by looking at the start of the data\n",
" \"\"\"\n",
" image_signatures = {\n",
" b\"\\xFF\\xD8\\xFF\": \"jpg\",\n",
" b\"\\x89\\x50\\x4E\\x47\\x0D\\x0A\\x1A\\x0A\": \"png\",\n",
" b\"\\xff\\xd8\\xff\": \"jpg\",\n",
" b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n",
" b\"\\x47\\x49\\x46\\x38\": \"gif\",\n",
" b\"\\x52\\x49\\x46\\x46\": \"webp\",\n",
" }\n",
@@ -604,7 +604,7 @@
"source": [
"# Check retrieval\n",
"query = \"Give me company names that are interesting investments based on EV / NTM and NTM rev growth. Consider EV / NTM multiples vs historical?\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query, limit=6)\n",
"docs = retriever_multi_vector_img.invoke(query, limit=6)\n",
"\n",
"# We get 4 docs\n",
"len(docs)"
@@ -630,7 +630,7 @@
"source": [
"# Check retrieval\n",
"query = \"What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query, limit=6)\n",
"docs = retriever_multi_vector_img.invoke(query, limit=6)\n",
"\n",
"# We get 4 docs\n",
"len(docs)"

View File

@@ -185,7 +185,7 @@
" )\n",
" # Text summary chain\n",
" model = VertexAI(\n",
" temperature=0, model_name=\"gemini-pro\", max_output_tokens=1024\n",
" temperature=0, model_name=\"gemini-pro\", max_tokens=1024\n",
" ).with_fallbacks([empty_response])\n",
" summarize_chain = {\"element\": lambda x: x} | prompt | model | StrOutputParser()\n",
"\n",
@@ -254,9 +254,9 @@
"\n",
"def image_summarize(img_base64, prompt):\n",
" \"\"\"Make image summary\"\"\"\n",
" model = ChatVertexAI(model_name=\"gemini-pro-vision\", max_output_tokens=1024)\n",
" model = ChatVertexAI(model=\"gemini-pro-vision\", max_tokens=1024)\n",
"\n",
" msg = model(\n",
" msg = model.invoke(\n",
" [\n",
" HumanMessage(\n",
" content=[\n",
@@ -462,8 +462,8 @@
" Check if the base64 data is an image by looking at the start of the data\n",
" \"\"\"\n",
" image_signatures = {\n",
" b\"\\xFF\\xD8\\xFF\": \"jpg\",\n",
" b\"\\x89\\x50\\x4E\\x47\\x0D\\x0A\\x1A\\x0A\": \"png\",\n",
" b\"\\xff\\xd8\\xff\": \"jpg\",\n",
" b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n",
" b\"\\x47\\x49\\x46\\x38\": \"gif\",\n",
" b\"\\x52\\x49\\x46\\x46\": \"webp\",\n",
" }\n",
@@ -553,9 +553,7 @@
" \"\"\"\n",
"\n",
" # Multi-modal LLM\n",
" model = ChatVertexAI(\n",
" temperature=0, model_name=\"gemini-pro-vision\", max_output_tokens=1024\n",
" )\n",
" model = ChatVertexAI(temperature=0, model_name=\"gemini-pro-vision\", max_tokens=1024)\n",
"\n",
" # RAG pipeline\n",
" chain = (\n",
@@ -604,7 +602,7 @@
],
"source": [
"query = \"What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query, limit=1)\n",
"docs = retriever_multi_vector_img.invoke(query, limit=1)\n",
"\n",
"# We get 2 docs\n",
"len(docs)"

View File

@@ -535,9 +535,9 @@
" print(f\"--Generated {len(all_clusters)} clusters--\")\n",
"\n",
" # Summarization\n",
" template = \"\"\"Here is a sub-set of LangChain Expression Langauge doc. \n",
" template = \"\"\"Here is a sub-set of LangChain Expression Language doc. \n",
" \n",
" LangChain Expression Langauge provides a way to compose chain in LangChain.\n",
" LangChain Expression Language provides a way to compose chain in LangChain.\n",
" \n",
" Give a detailed summary of the documentation provided.\n",
" \n",

View File

@@ -47,6 +47,7 @@ Notebook | Description
[press_releases.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/press_releases.ipynb) | Retrieve and query company press release data powered by [Kay.ai](https://kay.ai).
[program_aided_language_model.i...](https://github.com/langchain-ai/langchain/tree/master/cookbook/program_aided_language_model.ipynb) | Implement program-aided language models as described in the provided research paper.
[qa_citations.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/qa_citations.ipynb) | Different ways to get a model to cite its sources.
[rag_upstage_layout_analysis_groundedness_check.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/rag_upstage_layout_analysis_groundedness_check.ipynb) | End-to-end RAG example using Upstage Layout Analysis and Groundedness Check.
[retrieval_in_sql.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/retrieval_in_sql.ipynb) | Perform retrieval-augmented-generation (rag) on a PostgreSQL database using pgvector.
[sales_agent_with_context.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/sales_agent_with_context.ipynb) | Implement a context-aware ai sales agent, salesgpt, that can have natural sales conversations, interact with other systems, and use a product knowledge base to discuss a company's offerings.
[self_query_hotel_search.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/self_query_hotel_search.ipynb) | Build a hotel room search feature with self-querying retrieval, using a specific hotel recommendation dataset.
@@ -56,3 +57,4 @@ Notebook | Description
[two_agent_debate_tools.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_agent_debate_tools.ipynb) | Simulate multi-agent dialogues where the agents can utilize various tools.
[two_player_dnd.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/two_player_dnd.ipynb) | Simulate a two-player dungeons & dragons game, where a dialogue simulator class is used to coordinate the dialogue between the protagonist and the dungeon master.
[wikibase_agent.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/wikibase_agent.ipynb) | Create a simple wikibase agent that utilizes sparql generation, with testing done on http://wikidata.org.
[oracleai_demo.ipynb](https://github.com/langchain-ai/langchain/tree/master/cookbook/oracleai_demo.ipynb) | This guide outlines how to utilize Oracle AI Vector Search alongside LangChain for an end-to-end RAG pipeline, providing step-by-step examples. The process includes loading documents from various sources using OracleDocLoader, summarizing them either within or outside the database with OracleSummary, and generating embeddings likewise with OracleEmbeddings. It also covers chunking documents according to specific requirements using Advanced Oracle Capabilities from OracleTextSplitter, and finally storing and indexing these documents in a vector store for querying with OracleVS.

View File

@@ -75,7 +75,7 @@
"\n",
"Apply to the [`LLaMA2`](https://arxiv.org/pdf/2307.09288.pdf) paper. \n",
"\n",
"We use the Unstructured [`partition_pdf`](https://unstructured-io.github.io/unstructured/bricks/partition.html#partition-pdf), which segments a PDF document by using a layout model. \n",
"We use the Unstructured [`partition_pdf`](https://unstructured-io.github.io/unstructured/core/partition.html#partition-pdf), which segments a PDF document by using a layout model. \n",
"\n",
"This layout model makes it possible to extract elements, such as tables, from pdfs. \n",
"\n",

View File

@@ -562,9 +562,7 @@
],
"source": [
"# We can retrieve this table\n",
"retriever.get_relevant_documents(\n",
" \"What are results for LLaMA across across domains / subjects?\"\n",
")[1]"
"retriever.invoke(\"What are results for LLaMA across across domains / subjects?\")[1]"
]
},
{
@@ -614,9 +612,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"Images / figures with playful and creative examples\")[\n",
" 1\n",
"]"
"retriever.invoke(\"Images / figures with playful and creative examples\")[1]"
]
},
{

View File

@@ -191,15 +191,15 @@
"source": [
"## Multi-vector retriever\n",
"\n",
"Use [multi-vector-retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/multi_vector#summary).\n",
"Use [multi-vector-retriever](/docs/modules/data_connection/retrievers/multi_vector#summary).\n",
"\n",
"Summaries are used to retrieve raw tables and / or raw chunks of text.\n",
"\n",
"### Text and Table summaries\n",
"\n",
"Here, we use ollama.ai to run LLaMA2 locally. \n",
"Here, we use Ollama to run LLaMA2 locally. \n",
"\n",
"See details on installation [here](https://python.langchain.com/docs/guides/local_llms)."
"See details on installation [here](/docs/guides/development/local_llms)."
]
},
{
@@ -501,9 +501,7 @@
}
],
"source": [
"retriever.get_relevant_documents(\"Images / figures with playful and creative examples\")[\n",
" 0\n",
"]"
"retriever.invoke(\"Images / figures with playful and creative examples\")[0]"
]
},
{

View File

@@ -342,7 +342,7 @@
"# Testing on retrieval\n",
"query = \"What percentage of CPI is dedicated to Housing, and how does it compare to the combined percentage of Medical Care, Apparel, and Other Goods and Services?\"\n",
"suffix_for_images = \" Include any pie charts, graphs, or tables.\"\n",
"docs = retriever_multi_vector_img.get_relevant_documents(query + suffix_for_images)"
"docs = retriever_multi_vector_img.invoke(query + suffix_for_images)"
]
},
{
@@ -532,8 +532,8 @@
"def is_image_data(b64data):\n",
" \"\"\"Check if the base64 data is an image by looking at the start of the data.\"\"\"\n",
" image_signatures = {\n",
" b\"\\xFF\\xD8\\xFF\": \"jpg\",\n",
" b\"\\x89\\x50\\x4E\\x47\\x0D\\x0A\\x1A\\x0A\": \"png\",\n",
" b\"\\xff\\xd8\\xff\": \"jpg\",\n",
" b\"\\x89\\x50\\x4e\\x47\\x0d\\x0a\\x1a\\x0a\": \"png\",\n",
" b\"\\x47\\x49\\x46\\x38\": \"gif\",\n",
" b\"\\x52\\x49\\x46\\x46\": \"webp\",\n",
" }\n",

File diff suppressed because one or more lines are too long

View File

@@ -40,7 +40,9 @@
"import nest_asyncio\n",
"import pandas as pd\n",
"from langchain.docstore.document import Document\n",
"from langchain_community.agent_toolkits.pandas.base import create_pandas_dataframe_agent\n",
"from langchain_experimental.agents.agent_toolkits.pandas.base import (\n",
" create_pandas_dataframe_agent,\n",
")\n",
"from langchain_experimental.autonomous_agents import AutoGPT\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
@@ -57,7 +59,7 @@
},
"outputs": [],
"source": [
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=1.0)"
"llm = ChatOpenAI(model=\"gpt-4\", temperature=1.0)"
]
},
{

File diff suppressed because one or more lines are too long

View File

@@ -90,7 +90,7 @@
" ) -> AIMessage:\n",
" messages = self.update_messages(input_message)\n",
"\n",
" output_message = self.model(messages)\n",
" output_message = self.model.invoke(messages)\n",
" self.update_messages(output_message)\n",
"\n",
" return output_message"

View File

@@ -933,7 +933,7 @@
"**Answer**: The LangChain class includes various types of retrievers such as:\n",
"\n",
"- ArxivRetriever\n",
"- AzureCognitiveSearchRetriever\n",
"- AzureAISearchRetriever\n",
"- BM25Retriever\n",
"- ChaindeskRetriever\n",
"- ChatGPTPluginRetriever\n",
@@ -993,7 +993,7 @@
{
"data": {
"text/plain": [
"{'question': 'LangChain possesses a variety of retrievers including:\\n\\n1. ArxivRetriever\\n2. AzureCognitiveSearchRetriever\\n3. BM25Retriever\\n4. ChaindeskRetriever\\n5. ChatGPTPluginRetriever\\n6. ContextualCompressionRetriever\\n7. DocArrayRetriever\\n8. ElasticSearchBM25Retriever\\n9. EnsembleRetriever\\n10. GoogleVertexAISearchRetriever\\n11. AmazonKendraRetriever\\n12. KNNRetriever\\n13. LlamaIndexGraphRetriever\\n14. LlamaIndexRetriever\\n15. MergerRetriever\\n16. MetalRetriever\\n17. MilvusRetriever\\n18. MultiQueryRetriever\\n19. ParentDocumentRetriever\\n20. PineconeHybridSearchRetriever\\n21. PubMedRetriever\\n22. RePhraseQueryRetriever\\n23. RemoteLangChainRetriever\\n24. SelfQueryRetriever\\n25. SVMRetriever\\n26. TFIDFRetriever\\n27. TimeWeightedVectorStoreRetriever\\n28. VespaRetriever\\n29. WeaviateHybridSearchRetriever\\n30. WebResearchRetriever\\n31. WikipediaRetriever\\n32. ZepRetriever\\n33. ZillizRetriever\\n\\nIt also includes self query translators like:\\n\\n1. ChromaTranslator\\n2. DeepLakeTranslator\\n3. MyScaleTranslator\\n4. PineconeTranslator\\n5. QdrantTranslator\\n6. WeaviateTranslator\\n\\nAnd remote retrievers like:\\n\\n1. RemoteLangChainRetriever'}"
"{'question': 'LangChain possesses a variety of retrievers including:\\n\\n1. ArxivRetriever\\n2. AzureAISearchRetriever\\n3. BM25Retriever\\n4. ChaindeskRetriever\\n5. ChatGPTPluginRetriever\\n6. ContextualCompressionRetriever\\n7. DocArrayRetriever\\n8. ElasticSearchBM25Retriever\\n9. EnsembleRetriever\\n10. GoogleVertexAISearchRetriever\\n11. AmazonKendraRetriever\\n12. KNNRetriever\\n13. LlamaIndexGraphRetriever\\n14. LlamaIndexRetriever\\n15. MergerRetriever\\n16. MetalRetriever\\n17. MilvusRetriever\\n18. MultiQueryRetriever\\n19. ParentDocumentRetriever\\n20. PineconeHybridSearchRetriever\\n21. PubMedRetriever\\n22. RePhraseQueryRetriever\\n23. RemoteLangChainRetriever\\n24. SelfQueryRetriever\\n25. SVMRetriever\\n26. TFIDFRetriever\\n27. TimeWeightedVectorStoreRetriever\\n28. VespaRetriever\\n29. WeaviateHybridSearchRetriever\\n30. WebResearchRetriever\\n31. WikipediaRetriever\\n32. ZepRetriever\\n33. ZillizRetriever\\n\\nIt also includes self query translators like:\\n\\n1. ChromaTranslator\\n2. DeepLakeTranslator\\n3. MyScaleTranslator\\n4. PineconeTranslator\\n5. QdrantTranslator\\n6. WeaviateTranslator\\n\\nAnd remote retrievers like:\\n\\n1. RemoteLangChainRetriever'}"
]
},
"execution_count": 31,
@@ -1117,7 +1117,7 @@
"The LangChain class includes various types of retrievers such as:\n",
"\n",
"- ArxivRetriever\n",
"- AzureCognitiveSearchRetriever\n",
"- AzureAISearchRetriever\n",
"- BM25Retriever\n",
"- ChaindeskRetriever\n",
"- ChatGPTPluginRetriever\n",

557
cookbook/cql_agent.ipynb Normal file
View File

@@ -0,0 +1,557 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Environment"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Python Modules"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Install the following Python modules:\n",
"\n",
"```bash\n",
"pip install ipykernel python-dotenv cassio pandas langchain_openai langchain langchain-community langchainhub langchain_experimental openai-multi-tool-use-parallel-patch\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load the `.env` File"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Connection is made via `cassio` using the `auto=True` parameter, and the notebook uses OpenAI. You should create a `.env` file accordingly.\n",
"\n",
"For Cassandra, set:\n",
"```bash\n",
"CASSANDRA_CONTACT_POINTS\n",
"CASSANDRA_USERNAME\n",
"CASSANDRA_PASSWORD\n",
"CASSANDRA_KEYSPACE\n",
"```\n",
"\n",
"For Astra, set:\n",
"```bash\n",
"ASTRA_DB_APPLICATION_TOKEN\n",
"ASTRA_DB_DATABASE_ID\n",
"ASTRA_DB_KEYSPACE\n",
"```\n",
"\n",
"For example:\n",
"\n",
"```bash\n",
"# Connection to Astra:\n",
"ASTRA_DB_DATABASE_ID=a1b2c3d4-...\n",
"ASTRA_DB_APPLICATION_TOKEN=AstraCS:...\n",
"ASTRA_DB_KEYSPACE=notebooks\n",
"\n",
"# Also set \n",
"OPENAI_API_KEY=sk-....\n",
"```\n",
"\n",
"(You may also modify the below code to directly connect with `cassio`.)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv(override=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to Cassandra"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"import cassio\n",
"\n",
"cassio.init(auto=True)\n",
"session = cassio.config.resolve_session()\n",
"if not session:\n",
" raise Exception(\n",
" \"Check environment configuration or manually configure cassio connection parameters\"\n",
" )\n",
"\n",
"keyspace = os.environ.get(\n",
" \"ASTRA_DB_KEYSPACE\", os.environ.get(\"CASSANDRA_KEYSPACE\", None)\n",
")\n",
"if not keyspace:\n",
" raise ValueError(\"a KEYSPACE environment variable must be set\")\n",
"\n",
"session.set_keyspace(keyspace)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup Database"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This needs to be done one time only!"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Download Data"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The dataset used is Kaggle's [Environmental Sensor Telemetry Data](https://www.kaggle.com/datasets/garystafford/environmental-sensor-data-132k?select=iot_telemetry_data.csv). The next cell downloads and unzips the data into a Pandas dataframe; the cell after that gives instructions for downloading it manually. \n",
"\n",
"The net result of this section is that you should have a Pandas dataframe variable `df`."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download Automatically"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from io import BytesIO\n",
"from zipfile import ZipFile\n",
"\n",
"import pandas as pd\n",
"import requests\n",
"\n",
"datasetURL = \"https://storage.googleapis.com/kaggle-data-sets/788816/1355729/bundle/archive.zip?X-Goog-Algorithm=GOOG4-RSA-SHA256&X-Goog-Credential=gcp-kaggle-com%40kaggle-161607.iam.gserviceaccount.com%2F20240404%2Fauto%2Fstorage%2Fgoog4_request&X-Goog-Date=20240404T115828Z&X-Goog-Expires=259200&X-Goog-SignedHeaders=host&X-Goog-Signature=2849f003b100eb9dcda8dd8535990f51244292f67e4f5fad36f14aa67f2d4297672d8fe6ff5a39f03a29cda051e33e95d36daab5892b8874dcd5a60228df0361fa26bae491dd4371f02dd20306b583a44ba85a4474376188b1f84765147d3b4f05c57345e5de883c2c29653cce1f3755cd8e645c5e952f4fb1c8a735b22f0c811f97f7bce8d0235d0d3731ca8ab4629ff381f3bae9e35fc1b181c1e69a9c7913a5e42d9d52d53e5f716467205af9c8a3cc6746fc5352e8fbc47cd7d18543626bd67996d18c2045c1e475fc136df83df352fa747f1a3bb73e6ba3985840792ec1de407c15836640ec96db111b173bf16115037d53fdfbfd8ac44145d7f9a546aa\"\n",
"\n",
"response = requests.get(datasetURL)\n",
"if response.status_code == 200:\n",
" zip_file = ZipFile(BytesIO(response.content))\n",
" csv_file_name = zip_file.namelist()[0]\n",
"else:\n",
" print(\"Failed to download the file\")\n",
"\n",
"with zip_file.open(csv_file_name) as csv_file:\n",
" df = pd.read_csv(csv_file)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Download Manually"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can download the `.zip` file and unpack the `.csv` contained within. Uncomment the next line, and adjust the path to this `.csv` file appropriately."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# df = pd.read_csv(\"/path/to/iot_telemetry_data.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Data into Cassandra"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This section assumes the existence of a dataframe `df`; the following cell validates its structure. The Download Data section above creates this object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"assert df is not None, \"Dataframe 'df' must be set\"\n",
"expected_columns = [\n",
" \"ts\",\n",
" \"device\",\n",
" \"co\",\n",
" \"humidity\",\n",
" \"light\",\n",
" \"lpg\",\n",
" \"motion\",\n",
" \"smoke\",\n",
" \"temp\",\n",
"]\n",
"assert all(\n",
" [column in df.columns for column in expected_columns]\n",
"), \"DataFrame does not have the expected columns\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Create and load tables:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from datetime import UTC, datetime\n",
"\n",
"from cassandra.query import BatchStatement\n",
"\n",
"# Create sensors table\n",
"table_query = \"\"\"\n",
"CREATE TABLE IF NOT EXISTS iot_sensors (\n",
" device text,\n",
" conditions text,\n",
" room text,\n",
" PRIMARY KEY (device)\n",
")\n",
"WITH COMMENT = 'Environmental IoT room sensor metadata.';\n",
"\"\"\"\n",
"session.execute(table_query)\n",
"\n",
"pstmt = session.prepare(\n",
" \"\"\"\n",
"INSERT INTO iot_sensors (device, conditions, room)\n",
"VALUES (?, ?, ?)\n",
"\"\"\"\n",
")\n",
"\n",
"devices = [\n",
" (\"00:0f:00:70:91:0a\", \"stable conditions, cooler and more humid\", \"room 1\"),\n",
" (\"1c:bf:ce:15:ec:4d\", \"highly variable temperature and humidity\", \"room 2\"),\n",
" (\"b8:27:eb:bf:9d:51\", \"stable conditions, warmer and dryer\", \"room 3\"),\n",
"]\n",
"\n",
"for device, conditions, room in devices:\n",
" session.execute(pstmt, (device, conditions, room))\n",
"\n",
"print(\"Sensors inserted successfully.\")\n",
"\n",
"# Create data table\n",
"table_query = \"\"\"\n",
"CREATE TABLE IF NOT EXISTS iot_data (\n",
" day text,\n",
" device text,\n",
" ts timestamp,\n",
" co double,\n",
" humidity double,\n",
" light boolean,\n",
" lpg double,\n",
" motion boolean,\n",
" smoke double,\n",
" temp double,\n",
" PRIMARY KEY ((day, device), ts)\n",
")\n",
"WITH COMMENT = 'Data from environmental IoT room sensors. Columns include device identifier, timestamp (ts) of the data collection, carbon monoxide level (co), relative humidity, light presence, LPG concentration, motion detection, smoke concentration, and temperature (temp). Data is partitioned by day and device.';\n",
"\"\"\"\n",
"session.execute(table_query)\n",
"\n",
"pstmt = session.prepare(\n",
" \"\"\"\n",
"INSERT INTO iot_data (day, device, ts, co, humidity, light, lpg, motion, smoke, temp)\n",
"VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)\n",
"\"\"\"\n",
")\n",
"\n",
"\n",
"def insert_data_batch(name, group):\n",
" batch = BatchStatement()\n",
" day, device = name\n",
" print(f\"Inserting batch for day: {day}, device: {device}\")\n",
"\n",
" for _, row in group.iterrows():\n",
" timestamp = datetime.fromtimestamp(row[\"ts\"], UTC)\n",
" batch.add(\n",
" pstmt,\n",
" (\n",
" day,\n",
" row[\"device\"],\n",
" timestamp,\n",
" row[\"co\"],\n",
" row[\"humidity\"],\n",
" row[\"light\"],\n",
" row[\"lpg\"],\n",
" row[\"motion\"],\n",
" row[\"smoke\"],\n",
" row[\"temp\"],\n",
" ),\n",
" )\n",
"\n",
" session.execute(batch)\n",
"\n",
"\n",
"# Convert columns to appropriate types\n",
"df[\"light\"] = df[\"light\"] == \"true\"\n",
"df[\"motion\"] = df[\"motion\"] == \"true\"\n",
"df[\"ts\"] = df[\"ts\"].astype(float)\n",
"df[\"day\"] = df[\"ts\"].apply(\n",
" lambda x: datetime.fromtimestamp(x, UTC).strftime(\"%Y-%m-%d\")\n",
")\n",
"\n",
"grouped_df = df.groupby([\"day\", \"device\"])\n",
"\n",
"for name, group in grouped_df:\n",
" insert_data_batch(name, group)\n",
"\n",
"print(\"Data load complete\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(session.keyspace)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Load the Tools"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Python `import` statements for the demo:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, create_openai_tools_agent\n",
"from langchain_community.agent_toolkits.cassandra_database.toolkit import (\n",
" CassandraDatabaseToolkit,\n",
")\n",
"from langchain_community.tools.cassandra_database.prompt import QUERY_PATH_PROMPT\n",
"from langchain_community.tools.cassandra_database.tool import (\n",
" GetSchemaCassandraDatabaseTool,\n",
" GetTableDataCassandraDatabaseTool,\n",
" QueryCassandraDatabaseTool,\n",
")\n",
"from langchain_community.utilities.cassandra_database import CassandraDatabase\n",
"from langchain_openai import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `CassandraDatabase` object is loaded from `cassio`, though it does accept a `Session`-type parameter as an alternative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create a CassandraDatabase instance\n",
"db = CassandraDatabase(include_tables=[\"iot_sensors\", \"iot_data\"])\n",
"\n",
"# Create the Cassandra Database tools\n",
"query_tool = QueryCassandraDatabaseTool(db=db)\n",
"schema_tool = GetSchemaCassandraDatabaseTool(db=db)\n",
"select_data_tool = GetTableDataCassandraDatabaseTool(db=db)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tools can be invoked directly:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Test the tools\n",
"print(\"Executing a CQL query:\")\n",
"query = \"SELECT * FROM iot_sensors LIMIT 5;\"\n",
"result = query_tool.run({\"query\": query})\n",
"print(result)\n",
"\n",
"print(\"\\nGetting the schema for a keyspace:\")\n",
"schema = schema_tool.run({\"keyspace\": keyspace})\n",
"print(schema)\n",
"\n",
"print(\"\\nGetting data from a table:\")\n",
"table = \"iot_data\"\n",
"predicate = \"day = '2020-07-14' and device = 'b8:27:eb:bf:9d:51'\"\n",
"data = select_data_tool.run(\n",
" {\"keyspace\": keyspace, \"table\": table, \"predicate\": predicate, \"limit\": 5}\n",
")\n",
"print(data)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Agent Configuration"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import Tool\n",
"from langchain_experimental.utilities import PythonREPL\n",
"\n",
"python_repl = PythonREPL()\n",
"\n",
"repl_tool = Tool(\n",
" name=\"python_repl\",\n",
" description=\"A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.\",\n",
" func=python_repl.run,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"\n",
"llm = ChatOpenAI(temperature=0, model=\"gpt-4-1106-preview\")\n",
"toolkit = CassandraDatabaseToolkit(db=db)\n",
"\n",
"# context = toolkit.get_context()\n",
"# tools = toolkit.get_tools()\n",
"tools = [schema_tool, select_data_tool, repl_tool]\n",
"\n",
"input = (\n",
" QUERY_PATH_PROMPT\n",
" + f\"\"\"\n",
"\n",
"Here is your task: In the {keyspace} keyspace, find the total number of times the temperature of each device has exceeded 23 degrees on July 14, 2020.\n",
" Create a summary report including the name of the room. Use Pandas if helpful.\n",
"\"\"\"\n",
")\n",
"\n",
"prompt = hub.pull(\"hwchase17/openai-tools-agent\")\n",
"\n",
"# messages = [\n",
"# HumanMessagePromptTemplate.from_template(input),\n",
"# AIMessage(content=QUERY_PATH_PROMPT),\n",
"# MessagesPlaceholder(variable_name=\"agent_scratchpad\"),\n",
"# ]\n",
"\n",
"# prompt = ChatPromptTemplate.from_messages(messages)\n",
"# print(prompt)\n",
"\n",
"# Choose the LLM that will drive the agent\n",
"# Only certain models support this\n",
"llm = ChatOpenAI(model=\"gpt-3.5-turbo-1106\", temperature=0)\n",
"\n",
"# Construct the OpenAI Tools agent\n",
"agent = create_openai_tools_agent(llm, tools, prompt)\n",
"\n",
"print(\"Available tools:\")\n",
"for tool in tools:\n",
" print(\"\\t\" + tool.name + \" - \" + tool.description + \" - \" + str(tool))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)\n",
"\n",
"response = agent_executor.invoke({\"input\": input})\n",
"\n",
"print(response[\"output\"])"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
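
As an aside (not part of the notebook above), the per-device report the agent is asked to produce can be cross-checked directly with pandas on the `df` built in the Download Data section, using the `day` column added before loading; a minimal sketch:

```python
# Count readings above 23 degrees per device on 2020-07-14 and attach the room names
# from the `devices` list defined in the table-loading cell.
over_23 = df[(df["day"] == "2020-07-14") & (df["temp"] > 23)]
summary = over_23.groupby("device").size().rename("readings_over_23C").reset_index()
summary["room"] = summary["device"].map({device: room for device, _, room in devices})
print(summary)
```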

View File

@@ -169,7 +169,7 @@
"\n",
"def get_tools(query):\n",
" # Get documents, which contain the Plugins to use\n",
" docs = retriever.get_relevant_documents(query)\n",
" docs = retriever.invoke(query)\n",
" # Get the toolkits, one for each plugin\n",
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
" # Get the tools: a separate NLAChain for each endpoint\n",

View File

@@ -193,7 +193,7 @@
"\n",
"def get_tools(query):\n",
" # Get documents, which contain the Plugins to use\n",
" docs = retriever.get_relevant_documents(query)\n",
" docs = retriever.invoke(query)\n",
" # Get the toolkits, one for each plugin\n",
" tool_kits = [toolkits_dict[d.metadata[\"plugin_name\"]] for d in docs]\n",
" # Get the tools: a separate NLAChain for each endpoint\n",

View File

@@ -142,7 +142,7 @@
"\n",
"\n",
"def get_tools(query):\n",
" docs = retriever.get_relevant_documents(query)\n",
" docs = retriever.invoke(query)\n",
" return [ALL_TOOLS[d.metadata[\"index\"]] for d in docs]"
]
},

View File

@@ -84,7 +84,7 @@
"metadata": {},
"outputs": [],
"source": [
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-4\", temperature=0)\n",
"chain = ElasticsearchDatabaseChain.from_llm(llm=llm, database=db, verbose=True)"
]
},

View File

@@ -100,7 +100,7 @@
}
],
"source": [
"agent.run(\"whats 2 + 2\")"
"agent.invoke(\"whats 2 + 2\")"
]
},
{

View File

@@ -362,7 +362,7 @@
],
"source": [
"llm = OpenAI()\n",
"llm(query)"
"llm.invoke(query)"
]
},
{

View File

@@ -108,7 +108,7 @@
" return obs_message\n",
"\n",
" def _act(self):\n",
" act_message = self.model(self.message_history)\n",
" act_message = self.model.invoke(self.message_history)\n",
" self.message_history.append(act_message)\n",
" action = int(self.action_parser.parse(act_message.content)[\"action\"])\n",
" return action\n",

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -45,7 +45,7 @@
}
],
"source": [
"llm_symbolic_math.run(\"What is the derivative of sin(x)*exp(x) with respect to x?\")"
"llm_symbolic_math.invoke(\"What is the derivative of sin(x)*exp(x) with respect to x?\")"
]
},
{
@@ -65,7 +65,7 @@
}
],
"source": [
"llm_symbolic_math.run(\n",
"llm_symbolic_math.invoke(\n",
" \"What is the integral of exp(x)*sin(x) + exp(x)*cos(x) with respect to x?\"\n",
")"
]
@@ -94,7 +94,7 @@
}
],
"source": [
"llm_symbolic_math.run('Solve the differential equation y\" - y = e^t')"
"llm_symbolic_math.invoke('Solve the differential equation y\" - y = e^t')"
]
},
{
@@ -114,7 +114,7 @@
}
],
"source": [
"llm_symbolic_math.run(\"What are the solutions to this equation y^3 + 1/3y?\")"
"llm_symbolic_math.invoke(\"What are the solutions to this equation y^3 + 1/3y?\")"
]
},
{
@@ -134,7 +134,7 @@
}
],
"source": [
"llm_symbolic_math.run(\"x = y + 5, y = z - 3, z = x * y. Solve for x, y, z\")"
"llm_symbolic_math.invoke(\"x = y + 5, y = z - 3, z = x * y. Solve for x, y, z\")"
]
}
],

View File

@@ -0,0 +1,818 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "70b333e6",
"metadata": {},
"source": [
"[![View Article](https://img.shields.io/badge/View%20Article-blue)](https://www.mongodb.com/developer/products/atlas/advanced-rag-langchain-mongodb/)\n"
]
},
{
"cell_type": "markdown",
"id": "d84a72ea",
"metadata": {},
"source": [
"# Adding Semantic Caching and Memory to your RAG Application using MongoDB and LangChain\n",
"\n",
"In this notebook, we will see how to use the new MongoDBCache and MongoDBChatMessageHistory in your RAG application.\n"
]
},
{
"cell_type": "markdown",
"id": "65527202",
"metadata": {},
"source": [
"## Step 1: Install required libraries\n",
"\n",
"- **datasets**: Python library to get access to datasets available on Hugging Face Hub\n",
"\n",
"- **langchain**: Python toolkit for LangChain\n",
"\n",
"- **langchain-mongodb**: Python package to use MongoDB as a vector store, semantic cache, chat history store etc. in LangChain\n",
"\n",
"- **langchain-openai**: Python package to use OpenAI models with LangChain\n",
"\n",
"- **pymongo**: Python toolkit for MongoDB\n",
"\n",
"- **pandas**: Python library for data analysis, exploration, and manipulation"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cbc22fa4",
"metadata": {},
"outputs": [],
"source": [
"! pip install -qU datasets langchain langchain-mongodb langchain-openai pymongo pandas"
]
},
{
"cell_type": "markdown",
"id": "39c41e87",
"metadata": {},
"source": [
"## Step 2: Set up prerequisites\n",
"\n",
"* Set the MongoDB connection string. Follow the steps [here](https://www.mongodb.com/docs/manual/reference/connection-string/) to get the connection string from the Atlas UI.\n",
"\n",
"* Set the OpenAI API key. Steps to obtain an API key are [here](https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key)"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b56412ae",
"metadata": {},
"outputs": [],
"source": [
"import getpass"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "16a20d7a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter your MongoDB connection string:········\n"
]
}
],
"source": [
"MONGODB_URI = getpass.getpass(\"Enter your MongoDB connection string:\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "978682d4",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Enter your OpenAI API key:········\n"
]
}
],
"source": [
"OPENAI_API_KEY = getpass.getpass(\"Enter your OpenAI API key:\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "606081c5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"········\n"
]
}
],
"source": [
"# Optional-- If you want to enable Langsmith -- good for debugging\n",
"import os\n",
"\n",
"os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
"os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
]
},
{
"cell_type": "markdown",
"id": "f6b8302c",
"metadata": {},
"source": [
"## Step 3: Download the dataset\n",
"\n",
"We will be using MongoDB's [embedded_movies](https://huggingface.co/datasets/MongoDB/embedded_movies) dataset"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1a3433a6",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"from datasets import load_dataset"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "aee5311b",
"metadata": {},
"outputs": [],
"source": [
"# Ensure you have an HF_TOKEN in your development environment:\n",
"# access tokens can be created or copied from the Hugging Face platform (https://huggingface.co/docs/hub/en/security-tokens)\n",
"\n",
"# Load MongoDB's embedded_movies dataset from Hugging Face\n",
"# https://huggingface.co/datasets/MongoDB/airbnb_embeddings\n",
"\n",
"data = load_dataset(\"MongoDB/embedded_movies\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "1d630a26",
"metadata": {},
"outputs": [],
"source": [
"df = pd.DataFrame(data[\"train\"])"
]
},
{
"cell_type": "markdown",
"id": "a1f94f43",
"metadata": {},
"source": [
"## Step 4: Data analysis\n",
"\n",
"Make sure length of the dataset is what we expect, drop Nones etc."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b276df71",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>fullplot</th>\n",
" <th>type</th>\n",
" <th>plot_embedding</th>\n",
" <th>num_mflix_comments</th>\n",
" <th>runtime</th>\n",
" <th>writers</th>\n",
" <th>imdb</th>\n",
" <th>countries</th>\n",
" <th>rated</th>\n",
" <th>plot</th>\n",
" <th>title</th>\n",
" <th>languages</th>\n",
" <th>metacritic</th>\n",
" <th>directors</th>\n",
" <th>awards</th>\n",
" <th>genres</th>\n",
" <th>poster</th>\n",
" <th>cast</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>Young Pauline is left a lot of money when her ...</td>\n",
" <td>movie</td>\n",
" <td>[0.00072939653, -0.026834568, 0.013515796, -0....</td>\n",
" <td>0</td>\n",
" <td>199.0</td>\n",
" <td>[Charles W. Goddard (screenplay), Basil Dickey...</td>\n",
" <td>{'id': 4465, 'rating': 7.6, 'votes': 744}</td>\n",
" <td>[USA]</td>\n",
" <td>None</td>\n",
" <td>Young Pauline is left a lot of money when her ...</td>\n",
" <td>The Perils of Pauline</td>\n",
" <td>[English]</td>\n",
" <td>NaN</td>\n",
" <td>[Louis J. Gasnier, Donald MacKenzie]</td>\n",
" <td>{'nominations': 0, 'text': '1 win.', 'wins': 1}</td>\n",
" <td>[Action]</td>\n",
" <td>https://m.media-amazon.com/images/M/MV5BMzgxOD...</td>\n",
" <td>[Pearl White, Crane Wilbur, Paul Panzer, Edwar...</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" fullplot type \\\n",
"0 Young Pauline is left a lot of money when her ... movie \n",
"\n",
" plot_embedding num_mflix_comments \\\n",
"0 [0.00072939653, -0.026834568, 0.013515796, -0.... 0 \n",
"\n",
" runtime writers \\\n",
"0 199.0 [Charles W. Goddard (screenplay), Basil Dickey... \n",
"\n",
" imdb countries rated \\\n",
"0 {'id': 4465, 'rating': 7.6, 'votes': 744} [USA] None \n",
"\n",
" plot title \\\n",
"0 Young Pauline is left a lot of money when her ... The Perils of Pauline \n",
"\n",
" languages metacritic directors \\\n",
"0 [English] NaN [Louis J. Gasnier, Donald MacKenzie] \n",
"\n",
" awards genres \\\n",
"0 {'nominations': 0, 'text': '1 win.', 'wins': 1} [Action] \n",
"\n",
" poster \\\n",
"0 https://m.media-amazon.com/images/M/MV5BMzgxOD... \n",
"\n",
" cast \n",
"0 [Pearl White, Crane Wilbur, Paul Panzer, Edwar... "
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Previewing the contents of the data\n",
"df.head(1)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "22ab375d",
"metadata": {},
"outputs": [],
"source": [
"# Only keep records where the fullplot field is not null\n",
"df = df[df[\"fullplot\"].notna()]"
]
},
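{
"cell_type": "markdown",
"id": "9f1c2a3b",
"metadata": {},
"source": [
"As a quick sanity check on the dataset length mentioned above, confirm how many records remain after dropping rows without a `fullplot`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3e5d7c90",
"metadata": {},
"outputs": [],
"source": [
"# Sanity check: number of records remaining after dropping rows with a null fullplot\n",
"print(f\"Records remaining: {len(df)}\")"
]
},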
{
"cell_type": "code",
"execution_count": 12,
"id": "fceed99a",
"metadata": {},
"outputs": [],
"source": [
"# Renaming the embedding field to \"embedding\" -- required by LangChain\n",
"df.rename(columns={\"plot_embedding\": \"embedding\"}, inplace=True)"
]
},
{
"cell_type": "markdown",
"id": "aedec13a",
"metadata": {},
"source": [
"## Step 5: Create a simple RAG chain using MongoDB as the vector store"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "11d292f3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_mongodb import MongoDBAtlasVectorSearch\n",
"from pymongo import MongoClient\n",
"\n",
"# Initialize MongoDB python client\n",
"client = MongoClient(MONGODB_URI, appname=\"devrel.content.python\")\n",
"\n",
"DB_NAME = \"langchain_chatbot\"\n",
"COLLECTION_NAME = \"data\"\n",
"ATLAS_VECTOR_SEARCH_INDEX_NAME = \"vector_index\"\n",
"collection = client[DB_NAME][COLLECTION_NAME]"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "d8292d53",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"DeleteResult({'n': 1000, 'electionId': ObjectId('7fffffff00000000000000f6'), 'opTime': {'ts': Timestamp(1710523288, 1033), 't': 246}, 'ok': 1.0, '$clusterTime': {'clusterTime': Timestamp(1710523288, 1042), 'signature': {'hash': b\"i\\xa8\\xe9'\\x1ed\\xf2u\\xf3L\\xff\\xb1\\xf5\\xbfA\\x90\\xabJ\\x12\\x83\", 'keyId': 7299545392000008318}}, 'operationTime': Timestamp(1710523288, 1033)}, acknowledged=True)"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Delete any existing records in the collection\n",
"collection.delete_many({})"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "36c68914",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Data ingestion into MongoDB completed\n"
]
}
],
"source": [
"# Data Ingestion\n",
"records = df.to_dict(\"records\")\n",
"collection.insert_many(records)\n",
"\n",
"print(\"Data ingestion into MongoDB completed\")"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "cbfca0b8",
"metadata": {},
"outputs": [],
"source": [
"from langchain_openai import OpenAIEmbeddings\n",
"\n",
"# Using the text-embedding-ada-002 since that's what was used to create embeddings in the movies dataset\n",
"embeddings = OpenAIEmbeddings(\n",
" openai_api_key=OPENAI_API_KEY, model=\"text-embedding-ada-002\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"id": "798e176c",
"metadata": {},
"outputs": [],
"source": [
"# Vector Store Creation\n",
"vector_store = MongoDBAtlasVectorSearch.from_connection_string(\n",
" connection_string=MONGODB_URI,\n",
" namespace=DB_NAME + \".\" + COLLECTION_NAME,\n",
" embedding=embeddings,\n",
" index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n",
" text_key=\"fullplot\",\n",
")"
]
},
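{
"cell_type": "markdown",
"id": "b7d4e6f1",
"metadata": {},
"source": [
"Note that the vector store above assumes an Atlas Vector Search index named `vector_index` already exists on the collection. The definition below is only an illustrative sketch (assuming the newer `vectorSearch` index type; adjust it to your cluster and create it from the Atlas UI or your preferred tooling). `numDimensions` is 1536 because `text-embedding-ada-002` produces 1536-dimensional vectors:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c2a8f0d5",
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch of an Atlas Vector Search index definition for this collection.\n",
"# Create the index (named \"vector_index\") from the Atlas UI or your preferred tooling --\n",
"# this cell only prints the definition and does not create it.\n",
"vector_index_definition = {\n",
"    \"fields\": [\n",
"        {\n",
"            \"type\": \"vector\",\n",
"            \"path\": \"embedding\",  # field renamed from plot_embedding above\n",
"            \"numDimensions\": 1536,  # text-embedding-ada-002 embedding size\n",
"            \"similarity\": \"cosine\",\n",
"        }\n",
"    ]\n",
"}\n",
"print(vector_index_definition)"
]
},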
{
"cell_type": "code",
"execution_count": 49,
"id": "c71cd087",
"metadata": {},
"outputs": [],
"source": [
"# Using the MongoDB vector store as a retriever in a RAG chain\n",
"retriever = vector_store.as_retriever(search_type=\"similarity\", search_kwargs={\"k\": 5})"
]
},
{
"cell_type": "code",
"execution_count": 25,
"id": "b6588cd3",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"# Generate context using the retriever, and pass the user question through\n",
"retrieve = {\n",
" \"context\": retriever | (lambda docs: \"\\n\\n\".join([d.page_content for d in docs])),\n",
" \"question\": RunnablePassthrough(),\n",
"}\n",
"template = \"\"\"Answer the question based only on the following context: \\\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"# Defining the chat prompt\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"# Defining the model to be used for chat completion\n",
"model = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)\n",
"# Parse output as a string\n",
"parse_output = StrOutputParser()\n",
"\n",
"# Naive RAG chain\n",
"naive_rag_chain = retrieve | prompt | model | parse_output"
]
},
{
"cell_type": "code",
"execution_count": 26,
"id": "aaae21f5",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Once a Thief'"
]
},
"execution_count": 26,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"naive_rag_chain.invoke(\"What is the best movie to watch when sad?\")"
]
},
{
"cell_type": "markdown",
"id": "75f929ef",
"metadata": {},
"source": [
"## Step 6: Create a RAG chain with chat history"
]
},
{
"cell_type": "code",
"execution_count": 27,
"id": "94e7bd4a",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.prompts import MessagesPlaceholder\n",
"from langchain_core.runnables.history import RunnableWithMessageHistory\n",
"from langchain_mongodb.chat_message_histories import MongoDBChatMessageHistory"
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "5bb30860",
"metadata": {},
"outputs": [],
"source": [
"def get_session_history(session_id: str) -> MongoDBChatMessageHistory:\n",
" return MongoDBChatMessageHistory(\n",
" MONGODB_URI, session_id, database_name=DB_NAME, collection_name=\"history\"\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": 50,
"id": "f51d0f35",
"metadata": {},
"outputs": [],
"source": [
"# Given a follow-up question and history, create a standalone question\n",
"standalone_system_prompt = \"\"\"\n",
"Given a chat history and a follow-up question, rephrase the follow-up question to be a standalone question. \\\n",
"Do NOT answer the question, just reformulate it if needed, otherwise return it as is. \\\n",
"Only return the final standalone question. \\\n",
"\"\"\"\n",
"standalone_question_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", standalone_system_prompt),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")\n",
"\n",
"question_chain = standalone_question_prompt | model | parse_output"
]
},
{
"cell_type": "code",
"execution_count": 51,
"id": "f3ef3354",
"metadata": {},
"outputs": [],
"source": [
"# Generate context by passing output of the question_chain i.e. the standalone question to the retriever\n",
"retriever_chain = RunnablePassthrough.assign(\n",
" context=question_chain\n",
" | retriever\n",
" | (lambda docs: \"\\n\\n\".join([d.page_content for d in docs]))\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 55,
"id": "5afb7345",
"metadata": {},
"outputs": [],
"source": [
"# Create a prompt that includes the context, history and the follow-up question\n",
"rag_system_prompt = \"\"\"Answer the question based only on the following context: \\\n",
"{context}\n",
"\"\"\"\n",
"rag_prompt = ChatPromptTemplate.from_messages(\n",
" [\n",
" (\"system\", rag_system_prompt),\n",
" MessagesPlaceholder(variable_name=\"history\"),\n",
" (\"human\", \"{question}\"),\n",
" ]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 56,
"id": "f95f47d0",
"metadata": {},
"outputs": [],
"source": [
"# RAG chain\n",
"rag_chain = retriever_chain | rag_prompt | model | parse_output"
]
},
{
"cell_type": "code",
"execution_count": 57,
"id": "9618d395",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'The best movie to watch when feeling down could be \"Last Action Hero.\" It\\'s a fun and action-packed film that blends reality and fantasy, offering an escape from the real world and providing an entertaining distraction.'"
]
},
"execution_count": 57,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# RAG chain with history\n",
"with_message_history = RunnableWithMessageHistory(\n",
" rag_chain,\n",
" get_session_history,\n",
" input_messages_key=\"question\",\n",
" history_messages_key=\"history\",\n",
")\n",
"with_message_history.invoke(\n",
" {\"question\": \"What is the best movie to watch when sad?\"},\n",
" {\"configurable\": {\"session_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 58,
"id": "6e3080d1",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'I apologize for the confusion. Another movie that might lift your spirits when you\\'re feeling sad is \"Smilla\\'s Sense of Snow.\" It\\'s a mystery thriller that could engage your mind and distract you from your sadness with its intriguing plot and suspenseful storyline.'"
]
},
"execution_count": 58,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
" {\n",
" \"question\": \"Hmmm..I don't want to watch that one. Can you suggest something else?\"\n",
" },\n",
" {\"configurable\": {\"session_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 59,
"id": "daea2953",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'For a lighter movie option, you might enjoy \"Cousins.\" It\\'s a comedy film set in Barcelona with action and humor, offering a fun and entertaining escape from reality. The storyline is engaging and filled with comedic moments that could help lift your spirits.'"
]
},
"execution_count": 59,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"with_message_history.invoke(\n",
" {\"question\": \"How about something more light?\"},\n",
" {\"configurable\": {\"session_id\": \"1\"}},\n",
")"
]
},
{
"cell_type": "markdown",
"id": "0de23a88",
"metadata": {},
"source": [
"## Step 7: Get faster responses using Semantic Cache\n",
"\n",
"**NOTE:** Semantic cache only caches the input to the LLM. When using it in retrieval chains, remember that documents retrieved can change between runs resulting in cache misses for semantically similar queries."
]
},
{
"cell_type": "code",
"execution_count": 61,
"id": "5d6b6741",
"metadata": {},
"outputs": [],
"source": [
"from langchain_core.globals import set_llm_cache\n",
"from langchain_mongodb.cache import MongoDBAtlasSemanticCache\n",
"\n",
"set_llm_cache(\n",
" MongoDBAtlasSemanticCache(\n",
" connection_string=MONGODB_URI,\n",
" embedding=embeddings,\n",
" collection_name=\"semantic_cache\",\n",
" database_name=DB_NAME,\n",
" index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,\n",
" wait_until_ready=True, # Optional, waits until the cache is ready to be used\n",
" )\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 62,
"id": "9825bc7b",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 87.8 ms, sys: 670 µs, total: 88.5 ms\n",
"Wall time: 1.24 s\n"
]
},
{
"data": {
"text/plain": [
"'Once a Thief'"
]
},
"execution_count": 62,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"naive_rag_chain.invoke(\"What is the best movie to watch when sad?\")"
]
},
{
"cell_type": "code",
"execution_count": 63,
"id": "a5e518cf",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 43.5 ms, sys: 4.16 ms, total: 47.7 ms\n",
"Wall time: 255 ms\n"
]
},
{
"data": {
"text/plain": [
"'Once a Thief'"
]
},
"execution_count": 63,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"naive_rag_chain.invoke(\"What is the best movie to watch when sad?\")"
]
},
{
"cell_type": "code",
"execution_count": 64,
"id": "3d3d3ad3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"CPU times: user 115 ms, sys: 171 µs, total: 115 ms\n",
"Wall time: 1.38 s\n"
]
},
{
"data": {
"text/plain": [
"'I would recommend watching \"Last Action Hero\" when sad, as it is a fun and action-packed film that can help lift your spirits.'"
]
},
"execution_count": 64,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"%%time\n",
"naive_rag_chain.invoke(\"Which movie do I watch when sad?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "conda_pytorch_p310",
"language": "python",
"name": "conda_pytorch_p310"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -435,7 +435,7 @@
" display(HTML(image_html))\n",
"\n",
"\n",
"docs = retriever.get_relevant_documents(\"Woman with children\", k=10)\n",
"docs = retriever.invoke(\"Woman with children\", k=10)\n",
"for doc in docs:\n",
" if is_base64(doc.page_content):\n",
" plt_img_base64(doc.page_content)\n",

File diff suppressed because one or more lines are too long

View File

@@ -74,7 +74,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",

View File

@@ -79,7 +79,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",
@@ -234,7 +234,7 @@
" termination_clause=self.termination_clause if self.stop else \"\",\n",
" )\n",
"\n",
" self.response = self.model(\n",
" self.response = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=response_prompt),\n",
@@ -263,7 +263,7 @@
" speaker_names=speaker_names,\n",
" )\n",
"\n",
" choice_string = self.model(\n",
" choice_string = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=choice_prompt),\n",
@@ -299,7 +299,7 @@
" ),\n",
" next_speaker=self.next_speaker,\n",
" )\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=next_prompt),\n",

View File

@@ -71,7 +71,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",
@@ -164,7 +164,7 @@
" message_history=\"\\n\".join(self.message_history),\n",
" recent_message=self.message_history[-1],\n",
" )\n",
" bid_string = self.model([SystemMessage(content=prompt)]).content\n",
" bid_string = self.model.invoke([SystemMessage(content=prompt)]).content\n",
" return bid_string"
]
},

View File

@@ -0,0 +1,878 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Oracle AI Vector Search with Document Processing\n",
"Oracle AI Vector Search is designed for Artificial Intelligence (AI) workloads that allows you to query data based on semantics, rather than keywords.\n",
"One of the biggest benefits of Oracle AI Vector Search is that semantic search on unstructured data can be combined with relational search on business data in one single system.\n",
"This is not only powerful but also significantly more effective because you don't need to add a specialized vector database, eliminating the pain of data fragmentation between multiple systems.\n",
"\n",
"In addition, your vectors can benefit from all of Oracle Databases most powerful features, like the following:\n",
"\n",
" * [Partitioning Support](https://www.oracle.com/database/technologies/partitioning.html)\n",
" * [Real Application Clusters scalability](https://www.oracle.com/database/real-application-clusters/)\n",
" * [Exadata smart scans](https://www.oracle.com/database/technologies/exadata/software/smartscan/)\n",
" * [Shard processing across geographically distributed databases](https://www.oracle.com/database/distributed-database/)\n",
" * [Transactions](https://docs.oracle.com/en/database/oracle/oracle-database/23/cncpt/transactions.html)\n",
" * [Parallel SQL](https://docs.oracle.com/en/database/oracle/oracle-database/21/vldbg/parallel-exec-intro.html#GUID-D28717E4-0F77-44F5-BB4E-234C31D4E4BA)\n",
" * [Disaster recovery](https://www.oracle.com/database/data-guard/)\n",
" * [Security](https://www.oracle.com/security/database-security/)\n",
" * [Oracle Machine Learning](https://www.oracle.com/artificial-intelligence/database-machine-learning/)\n",
" * [Oracle Graph Database](https://www.oracle.com/database/integrated-graph-database/)\n",
" * [Oracle Spatial and Graph](https://www.oracle.com/database/spatial/)\n",
" * [Oracle Blockchain](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_blockchain_table.html#GUID-B469E277-978E-4378-A8C1-26D3FF96C9A6)\n",
" * [JSON](https://docs.oracle.com/en/database/oracle/oracle-database/23/adjsn/json-in-oracle-database.html)\n",
"\n",
"This guide demonstrates how Oracle AI Vector Search can be used with Langchain to serve an end-to-end RAG pipeline. This guide goes through examples of:\n",
"\n",
" * Loading the documents from various sources using OracleDocLoader\n",
" * Summarizing them within/outside the database using OracleSummary\n",
" * Generating embeddings for them within/outside the database using OracleEmbeddings\n",
" * Chunking them according to different requirements using Advanced Oracle Capabilities from OracleTextSplitter\n",
" * Storing and Indexing them in a Vector Store and querying them for queries in OracleVS"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you are just starting with Oracle Database, consider exploring the [free Oracle 23 AI](https://www.oracle.com/database/free/#resources) which provides a great introduction to setting up your database environment. While working with the database, it is often advisable to avoid using the system user by default; instead, you can create your own user for enhanced security and customization. For detailed steps on user creation, refer to our [end-to-end guide](https://github.com/langchain-ai/langchain/blob/master/cookbook/oracleai_demo.ipynb) which also shows how to set up a user in Oracle. Additionally, understanding user privileges is crucial for managing database security effectively. You can learn more about this topic in the official [Oracle guide](https://docs.oracle.com/en/database/oracle/oracle-database/19/admqs/administering-user-accounts-and-security.html#GUID-36B21D72-1BBB-46C9-A0C9-F0D2A8591B8D) on administering user accounts and security."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Prerequisites\n",
"\n",
"Please install Oracle Python Client driver to use Langchain with Oracle AI Vector Search. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# pip install oracledb"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Demo User\n",
"First, create a demo user with all the required privileges. "
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n",
"User setup done!\n"
]
}
],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"\n",
"# please update with your username, password, hostname and service_name\n",
"# please make sure this user has sufficient privileges to perform all below\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"\n",
" cursor = conn.cursor()\n",
" cursor.execute(\n",
" \"\"\"\n",
" begin\n",
" -- drop user\n",
" begin\n",
" execute immediate 'drop user testuser cascade';\n",
" exception\n",
" when others then\n",
" dbms_output.put_line('Error setting up user.');\n",
" end;\n",
" execute immediate 'create user testuser identified by testuser';\n",
" execute immediate 'grant connect, unlimited tablespace, create credential, create procedure, create any index to testuser';\n",
" execute immediate 'create or replace directory DEMO_PY_DIR as ''/scratch/hroy/view_storage/hroy_devstorage/demo/orachain''';\n",
" execute immediate 'grant read, write on directory DEMO_PY_DIR to public';\n",
" execute immediate 'grant create mining model to testuser';\n",
"\n",
" -- network access\n",
" begin\n",
" DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(\n",
" host => '*',\n",
" ace => xs$ace_type(privilege_list => xs$name_list('connect'),\n",
" principal_name => 'testuser',\n",
" principal_type => xs_acl.ptype_db));\n",
" end;\n",
" end;\n",
" \"\"\"\n",
" )\n",
" print(\"User setup done!\")\n",
" cursor.close()\n",
" conn.close()\n",
"except Exception as e:\n",
" print(\"User setup failed!\")\n",
" cursor.close()\n",
" conn.close()\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Process Documents using Oracle AI\n",
"Consider the following scenario: users possess documents stored either in an Oracle Database or a file system and intend to utilize this data with Oracle AI Vector Search powered by Langchain.\n",
"\n",
"To prepare the documents for analysis, a comprehensive preprocessing workflow is necessary. Initially, the documents must be retrieved, summarized (if required), and chunked as needed. Subsequent steps involve generating embeddings for these chunks and integrating them into the Oracle AI Vector Store. Users can then conduct semantic searches on this data.\n",
"\n",
"The Oracle AI Vector Search Langchain library encompasses a suite of document processing tools that facilitate document loading, chunking, summary generation, and embedding creation.\n",
"\n",
"In the sections that follow, we will detail the utilization of Oracle AI Langchain APIs to effectively implement each of these processes."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Connect to Demo User\n",
"The following sample code will show how to connect to Oracle Database. By default, python-oracledb runs in a Thin mode which connects directly to Oracle Database. This mode does not need Oracle Client libraries. However, some additional functionality is available when python-oracledb uses them. Python-oracledb is said to be in Thick mode when Oracle Client libraries are used. Both modes have comprehensive functionality supporting the Python Database API v2.0 Specification. See the following [guide](https://python-oracledb.readthedocs.io/en/latest/user_guide/appendix_a.html#featuresummary) that talks about features supported in each mode. You might want to switch to thick-mode if you are unable to use thin-mode."
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n"
]
}
],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"\n",
"# please update with your username, password, hostname and service_name\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"except Exception as e:\n",
" print(\"Connection failed!\")\n",
" sys.exit(1)"
]
},
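{
"cell_type": "markdown",
"metadata": {},
"source": [
"If Thin mode does not work in your environment, the snippet below is a minimal sketch of switching python-oracledb to Thick mode. It assumes the Oracle Client libraries are installed on the machine; the `lib_dir` path shown is only illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import oracledb\n",
"\n",
"# Optional: switch python-oracledb to Thick mode by loading the Oracle Client libraries.\n",
"# Call this once, before creating any connections. The lib_dir below is illustrative --\n",
"# omit it if the client libraries are already on your system library path.\n",
"oracledb.init_oracle_client(lib_dir=\"/opt/oracle/instantclient\")\n",
"\n",
"# Subsequent connections will then use Thick mode\n",
"conn = oracledb.connect(user=username, password=password, dsn=dsn)"
]
},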
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Populate a Demo Table\n",
"Create a demo table and insert some sample documents."
]
},
{
"cell_type": "code",
"execution_count": 46,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Table created and populated.\n"
]
}
],
"source": [
"try:\n",
" cursor = conn.cursor()\n",
"\n",
" drop_table_sql = \"\"\"drop table demo_tab\"\"\"\n",
" cursor.execute(drop_table_sql)\n",
"\n",
" create_table_sql = \"\"\"create table demo_tab (id number, data clob)\"\"\"\n",
" cursor.execute(create_table_sql)\n",
"\n",
" insert_row_sql = \"\"\"insert into demo_tab values (:1, :2)\"\"\"\n",
" rows_to_insert = [\n",
" (\n",
" 1,\n",
" \"If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\",\n",
" ),\n",
" (\n",
" 2,\n",
" \"A tablespace can be online (accessible) or offline (not accessible) whenever the database is open.\\nA tablespace is usually online so that its data is available to users. The SYSTEM tablespace and temporary tablespaces cannot be taken offline.\",\n",
" ),\n",
" (\n",
" 3,\n",
" \"The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table.\\nSometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\",\n",
" ),\n",
" ]\n",
" cursor.executemany(insert_row_sql, rows_to_insert)\n",
"\n",
" conn.commit()\n",
"\n",
" print(\"Table created and populated.\")\n",
" cursor.close()\n",
"except Exception as e:\n",
" print(\"Table creation failed.\")\n",
" cursor.close()\n",
" conn.close()\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With the inclusion of a demo user and a populated sample table, the remaining configuration involves setting up embedding and summary functionalities. Users are presented with multiple provider options, including local database solutions and third-party services such as Ocigenai, Hugging Face, and OpenAI. Should users opt for a third-party provider, they are required to establish credentials containing the necessary authentication details. Conversely, if selecting a database as the provider for embeddings, it is necessary to upload an ONNX model to the Oracle Database. No additional setup is required for summary functionalities when using the database option."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load ONNX Model\n",
"\n",
"Oracle accommodates a variety of embedding providers, enabling users to choose between proprietary database solutions and third-party services such as OCIGENAI and HuggingFace. This selection dictates the methodology for generating and managing embeddings.\n",
"\n",
"***Important*** : Should users opt for the database option, they must upload an ONNX model into the Oracle Database. Conversely, if a third-party provider is selected for embedding generation, uploading an ONNX model to Oracle Database is not required.\n",
"\n",
"A significant advantage of utilizing an ONNX model directly within Oracle is the enhanced security and performance it offers by eliminating the need to transmit data to external parties. Additionally, this method avoids the latency typically associated with network or REST API calls.\n",
"\n",
"Below is the example code to upload an ONNX model into Oracle Database:"
]
},
{
"cell_type": "code",
"execution_count": 47,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"ONNX model loaded.\n"
]
}
],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"\n",
"# please update with your related information\n",
"# make sure that you have onnx file in the system\n",
"onnx_dir = \"DEMO_PY_DIR\"\n",
"onnx_file = \"tinybert.onnx\"\n",
"model_name = \"demo_model\"\n",
"\n",
"try:\n",
" OracleEmbeddings.load_onnx_model(conn, onnx_dir, onnx_file, model_name)\n",
" print(\"ONNX model loaded.\")\n",
"except Exception as e:\n",
" print(\"ONNX model loading failed!\")\n",
" sys.exit(1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Create Credential\n",
"\n",
"When selecting third-party providers for generating embeddings, users are required to establish credentials to securely access the provider's endpoints.\n",
"\n",
"***Important:*** No credentials are necessary when opting for the 'database' provider to generate embeddings. However, should users decide to utilize a third-party provider, they must create credentials specific to the chosen provider.\n",
"\n",
"Below is an illustrative example:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"try:\n",
" cursor = conn.cursor()\n",
" cursor.execute(\n",
" \"\"\"\n",
" declare\n",
" jo json_object_t;\n",
" begin\n",
" -- HuggingFace\n",
" dbms_vector_chain.drop_credential(credential_name => 'HF_CRED');\n",
" jo := json_object_t();\n",
" jo.put('access_token', '<access_token>');\n",
" dbms_vector_chain.create_credential(\n",
" credential_name => 'HF_CRED',\n",
" params => json(jo.to_string));\n",
"\n",
" -- OCIGENAI\n",
" dbms_vector_chain.drop_credential(credential_name => 'OCI_CRED');\n",
" jo := json_object_t();\n",
" jo.put('user_ocid','<user_ocid>');\n",
" jo.put('tenancy_ocid','<tenancy_ocid>');\n",
" jo.put('compartment_ocid','<compartment_ocid>');\n",
" jo.put('private_key','<private_key>');\n",
" jo.put('fingerprint','<fingerprint>');\n",
" dbms_vector_chain.create_credential(\n",
" credential_name => 'OCI_CRED',\n",
" params => json(jo.to_string));\n",
" end;\n",
" \"\"\"\n",
" )\n",
" cursor.close()\n",
" print(\"Credentials created.\")\n",
"except Exception as ex:\n",
" cursor.close()\n",
" raise"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load Documents\n",
"Users have the flexibility to load documents from either the Oracle Database, a file system, or both, by appropriately configuring the loader parameters. For comprehensive details on these parameters, please consult the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-73397E89-92FB-48ED-94BB-1AD960C4EA1F).\n",
"\n",
"A significant advantage of utilizing OracleDocLoader is its capability to process over 150 distinct file formats, eliminating the need for multiple loaders for different document types. For a complete list of the supported formats, please refer to the [Oracle Text Supported Document Formats](https://docs.oracle.com/en/database/oracle/oracle-database/23/ccref/oracle-text-supported-document-formats.html).\n",
"\n",
"Below is a sample code snippet that demonstrates how to use OracleDocLoader"
]
},
{
"cell_type": "code",
"execution_count": 48,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of docs loaded: 3\n"
]
}
],
"source": [
"from langchain_community.document_loaders.oracleai import OracleDocLoader\n",
"from langchain_core.documents import Document\n",
"\n",
"# loading from Oracle Database table\n",
"# make sure you have the table with this specification\n",
"loader_params = {}\n",
"loader_params = {\n",
" \"owner\": \"testuser\",\n",
" \"tablename\": \"demo_tab\",\n",
" \"colname\": \"data\",\n",
"}\n",
"\n",
"\"\"\" load the docs \"\"\"\n",
"loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"docs = loader.load()\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of docs loaded: {len(docs)}\")\n",
"# print(f\"Document-0: {docs[0].page_content}\") # content"
]
},
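{
"cell_type": "markdown",
"metadata": {},
"source": [
"As mentioned above, documents can also be loaded from a file system instead of a database table. The snippet below is a sketch only: the `file` and `dir` parameter names and the paths are assumptions here, so please verify them against the Oracle AI Vector Search Guide linked above before use."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\"\"\" illustrative sketch: loading from a file system instead of a database table \"\"\"\n",
"# The parameter names ('file', 'dir') and the paths below are assumptions --\n",
"# verify them against the Oracle AI Vector Search Guide before use.\n",
"\n",
"# loading a single local file\n",
"# loader_params = {\"file\": \"/path/to/your/document.pdf\"}\n",
"\n",
"# loading all documents in a local directory\n",
"# loader_params = {\"dir\": \"/path/to/your/documents\"}\n",
"\n",
"# loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"# docs = loader.load()"
]
},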
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate Summary\n",
"Now that the user loaded the documents, they may want to generate a summary for each document. The Oracle AI Vector Search Langchain library offers a suite of APIs designed for document summarization. It supports multiple summarization providers such as Database, OCIGENAI, HuggingFace, among others, allowing users to select the provider that best meets their needs. To utilize these capabilities, users must configure the summary parameters as specified. For detailed information on these parameters, please consult the [Oracle AI Vector Search Guide book](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-EC9DDB58-6A15-4B36-BA66-ECBA20D2CE57)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** The users may need to set proxy if they want to use some 3rd party summary generation providers other than Oracle's in-house and default provider: 'database'. If you don't have proxy, please remove the proxy parameter when you instantiate the OracleSummary."
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [],
"source": [
"# proxy to be used when we instantiate summary and embedder object\n",
"proxy = \"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following sample code will show how to generate summary:"
]
},
{
"cell_type": "code",
"execution_count": 49,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of Summaries: 3\n"
]
}
],
"source": [
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_core.documents import Document\n",
"\n",
"# using 'database' provider\n",
"summary_params = {\n",
" \"provider\": \"database\",\n",
" \"glevel\": \"S\",\n",
" \"numParagraphs\": 1,\n",
" \"language\": \"english\",\n",
"}\n",
"\n",
"# get the summary instance\n",
"# Remove proxy if not required\n",
"summ = OracleSummary(conn=conn, params=summary_params, proxy=proxy)\n",
"\n",
"list_summary = []\n",
"for doc in docs:\n",
" summary = summ.get_summary(doc.page_content)\n",
" list_summary.append(summary)\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of Summaries: {len(list_summary)}\")\n",
"# print(f\"Summary-0: {list_summary[0]}\") #content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Split Documents\n",
"The documents may vary in size, ranging from small to very large. Users often prefer to chunk their documents into smaller sections to facilitate the generation of embeddings. A wide array of customization options is available for this splitting process. For comprehensive details regarding these parameters, please consult the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-4E145629-7098-4C7C-804F-FC85D1F24240).\n",
"\n",
"Below is a sample code illustrating how to implement this:"
]
},
{
"cell_type": "code",
"execution_count": 50,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of Chunks: 3\n"
]
}
],
"source": [
"from langchain_community.document_loaders.oracleai import OracleTextSplitter\n",
"from langchain_core.documents import Document\n",
"\n",
"# split by default parameters\n",
"splitter_params = {\"normalize\": \"all\"}\n",
"\n",
"\"\"\" get the splitter instance \"\"\"\n",
"splitter = OracleTextSplitter(conn=conn, params=splitter_params)\n",
"\n",
"list_chunks = []\n",
"for doc in docs:\n",
" chunks = splitter.split_text(doc.page_content)\n",
" list_chunks.extend(chunks)\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of Chunks: {len(list_chunks)}\")\n",
"# print(f\"Chunk-0: {list_chunks[0]}\") # content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate Embeddings\n",
"Now that the documents are chunked as per requirements, the users may want to generate embeddings for these chunks. Oracle AI Vector Search provides multiple methods for generating embeddings, utilizing either locally hosted ONNX models or third-party APIs. For comprehensive instructions on configuring these alternatives, please refer to the [Oracle AI Vector Search Guide](https://docs.oracle.com/en/database/oracle/oracle-database/23/arpls/dbms_vector_chain1.html#GUID-C6439E94-4E86-4ECD-954E-4B73D53579DE)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"***Note:*** Currently, OracleEmbeddings processes each embedding generation request individually, without batching, by calling REST endpoints separately for each request. This method could potentially lead to exceeding the maximum request per minute quota set by some providers. However, we are actively working to enhance this process by implementing request batching, which will allow multiple embedding requests to be combined into fewer API calls, thereby optimizing our use of provider resources and adhering to their request limits. This update is expected to be rolled out soon, eliminating the current limitation.\n",
"\n",
"***Note:*** Users may need to configure a proxy to utilize third-party embedding generation providers, excluding the 'database' provider that utilizes an ONNX model."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"# proxy to be used when we instantiate summary and embedder object\n",
"proxy = \"\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The following sample code will show how to generate embeddings:"
]
},
{
"cell_type": "code",
"execution_count": 51,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Number of embeddings: 3\n"
]
}
],
"source": [
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_core.documents import Document\n",
"\n",
"# using ONNX model loaded to Oracle Database\n",
"embedder_params = {\"provider\": \"database\", \"model\": \"demo_model\"}\n",
"\n",
"# get the embedding instance\n",
"# Remove proxy if not required\n",
"embedder = OracleEmbeddings(conn=conn, params=embedder_params, proxy=proxy)\n",
"\n",
"embeddings = []\n",
"for doc in docs:\n",
" chunks = splitter.split_text(doc.page_content)\n",
" for chunk in chunks:\n",
" embed = embedder.embed_query(chunk)\n",
" embeddings.append(embed)\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of embeddings: {len(embeddings)}\")\n",
"# print(f\"Embedding-0: {embeddings[0]}\") # content"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Create Oracle AI Vector Store\n",
"Now that you know how to use Oracle AI Langchain library APIs individually to process the documents, let us show how to integrate with Oracle AI Vector Store to facilitate the semantic searches."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"First, let's import all the dependencies."
]
},
{
"cell_type": "code",
"execution_count": 52,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"\n",
"import oracledb\n",
"from langchain_community.document_loaders.oracleai import (\n",
" OracleDocLoader,\n",
" OracleTextSplitter,\n",
")\n",
"from langchain_community.embeddings.oracleai import OracleEmbeddings\n",
"from langchain_community.utilities.oracleai import OracleSummary\n",
"from langchain_community.vectorstores import oraclevs\n",
"from langchain_community.vectorstores.oraclevs import OracleVS\n",
"from langchain_community.vectorstores.utils import DistanceStrategy\n",
"from langchain_core.documents import Document"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, let's combine all document processing stages together. Here is the sample code below:"
]
},
{
"cell_type": "code",
"execution_count": 53,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Connection successful!\n",
"ONNX model loaded.\n",
"Number of total chunks with metadata: 3\n"
]
}
],
"source": [
"\"\"\"\n",
"In this sample example, we will use 'database' provider for both summary and embeddings.\n",
"So, we don't need to do the followings:\n",
" - set proxy for 3rd party providers\n",
" - create credential for 3rd party providers\n",
"\n",
"If you choose to use 3rd party provider, \n",
"please follow the necessary steps for proxy and credential.\n",
"\"\"\"\n",
"\n",
"# oracle connection\n",
"# please update with your username, password, hostname, and service_name\n",
"username = \"\"\n",
"password = \"\"\n",
"dsn = \"\"\n",
"\n",
"try:\n",
" conn = oracledb.connect(user=username, password=password, dsn=dsn)\n",
" print(\"Connection successful!\")\n",
"except Exception as e:\n",
" print(\"Connection failed!\")\n",
" sys.exit(1)\n",
"\n",
"\n",
"# load onnx model\n",
"# please update with your related information\n",
"onnx_dir = \"DEMO_PY_DIR\"\n",
"onnx_file = \"tinybert.onnx\"\n",
"model_name = \"demo_model\"\n",
"try:\n",
" OracleEmbeddings.load_onnx_model(conn, onnx_dir, onnx_file, model_name)\n",
" print(\"ONNX model loaded.\")\n",
"except Exception as e:\n",
" print(\"ONNX model loading failed!\")\n",
" sys.exit(1)\n",
"\n",
"\n",
"# params\n",
"# please update necessary fields with related information\n",
"loader_params = {\n",
" \"owner\": \"testuser\",\n",
" \"tablename\": \"demo_tab\",\n",
" \"colname\": \"data\",\n",
"}\n",
"summary_params = {\n",
" \"provider\": \"database\",\n",
" \"glevel\": \"S\",\n",
" \"numParagraphs\": 1,\n",
" \"language\": \"english\",\n",
"}\n",
"splitter_params = {\"normalize\": \"all\"}\n",
"embedder_params = {\"provider\": \"database\", \"model\": \"demo_model\"}\n",
"\n",
"# instantiate loader, summary, splitter, and embedder\n",
"loader = OracleDocLoader(conn=conn, params=loader_params)\n",
"summary = OracleSummary(conn=conn, params=summary_params)\n",
"splitter = OracleTextSplitter(conn=conn, params=splitter_params)\n",
"embedder = OracleEmbeddings(conn=conn, params=embedder_params)\n",
"\n",
"# process the documents\n",
"chunks_with_mdata = []\n",
"for id, doc in enumerate(docs, start=1):\n",
" summ = summary.get_summary(doc.page_content)\n",
" chunks = splitter.split_text(doc.page_content)\n",
" for ic, chunk in enumerate(chunks, start=1):\n",
" chunk_metadata = doc.metadata.copy()\n",
" chunk_metadata[\"id\"] = chunk_metadata[\"_oid\"] + \"$\" + str(id) + \"$\" + str(ic)\n",
" chunk_metadata[\"document_id\"] = str(id)\n",
" chunk_metadata[\"document_summary\"] = str(summ[0])\n",
" chunks_with_mdata.append(\n",
" Document(page_content=str(chunk), metadata=chunk_metadata)\n",
" )\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Number of total chunks with metadata: {len(chunks_with_mdata)}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"At this point, we have processed the documents and generated chunks with metadata. Next, we will create Oracle AI Vector Store with those chunks.\n",
"\n",
"Here is the sample code how to do that:"
]
},
{
"cell_type": "code",
"execution_count": 55,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Vector Store Table: oravs\n"
]
}
],
"source": [
"# create Oracle AI Vector Store\n",
"vectorstore = OracleVS.from_documents(\n",
" chunks_with_mdata,\n",
" embedder,\n",
" client=conn,\n",
" table_name=\"oravs\",\n",
" distance_strategy=DistanceStrategy.DOT_PRODUCT,\n",
")\n",
"\n",
"\"\"\" verify \"\"\"\n",
"print(f\"Vector Store Table: {vectorstore.table_name}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The example provided illustrates the creation of a vector store using the DOT_PRODUCT distance strategy. Users have the flexibility to employ various distance strategies with the Oracle AI Vector Store, as detailed in our [comprehensive guide](https://python.langchain.com/v0.1/docs/integrations/vectorstores/oracle/)."
]
},
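{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the same chunks could be stored under a cosine distance strategy instead. The snippet below is a sketch that reuses the objects created above; the `oravs_cosine` table name is illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: same documents and embedder, but using a COSINE distance strategy.\n",
"# The table name \"oravs_cosine\" is illustrative.\n",
"vectorstore_cosine = OracleVS.from_documents(\n",
"    chunks_with_mdata,\n",
"    embedder,\n",
"    client=conn,\n",
"    table_name=\"oravs_cosine\",\n",
"    distance_strategy=DistanceStrategy.COSINE,\n",
")\n",
"\n",
"print(f\"Vector Store Table: {vectorstore_cosine.table_name}\")"
]
},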
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With embeddings now stored in vector stores, it is advisable to establish an index to enhance semantic search performance during query execution.\n",
"\n",
"***Note*** Should you encounter an \"insufficient memory\" error, it is recommended to increase the ***vector_memory_size*** in your database configuration\n",
"\n",
"Below is a sample code snippet for creating an index:"
]
},
{
"cell_type": "code",
"execution_count": 56,
"metadata": {},
"outputs": [],
"source": [
"oraclevs.create_index(\n",
" conn, vectorstore, params={\"idx_name\": \"hnsw_oravs\", \"idx_type\": \"HNSW\"}\n",
")\n",
"\n",
"print(\"Index created.\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This example demonstrates the creation of a default HNSW index on embeddings within the 'oravs' table. Users may adjust various parameters according to their specific needs. For detailed information on these parameters, please consult the [Oracle AI Vector Search Guide book](https://docs.oracle.com/en/database/oracle/oracle-database/23/vecse/manage-different-categories-vector-indexes.html).\n",
"\n",
"Additionally, various types of vector indices can be created to meet diverse requirements. More details can be found in our [comprehensive guide](https://python.langchain.com/v0.1/docs/integrations/vectorstores/oracle/).\n"
]
},
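{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustration, an HNSW index can also be created with explicit parameters. The snippet below is a sketch only: parameter names and values such as `accuracy` and `parallel` are assumptions based on the index-creation options described in the guides above, and you would create this index instead of (not in addition to) the default one on the same table."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: HNSW index with explicit parameters, as an alternative to the default index above.\n",
"# Parameter names and values here are assumptions -- verify them against the\n",
"# Oracle AI Vector Search documentation for your release before use.\n",
"oraclevs.create_index(\n",
"    conn,\n",
"    vectorstore,\n",
"    params={\n",
"        \"idx_name\": \"hnsw_oravs_custom\",\n",
"        \"idx_type\": \"HNSW\",\n",
"        \"accuracy\": 97,\n",
"        \"parallel\": 16,\n",
"    },\n",
")\n",
"\n",
"print(\"Parameterized index created.\")"
]
},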
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Perform Semantic Search\n",
"All set!\n",
"\n",
"We have successfully processed the documents and stored them in the vector store, followed by the creation of an index to enhance query performance. We are now prepared to proceed with semantic searches.\n",
"\n",
"Below is the sample code for this process:"
]
},
{
"cell_type": "code",
"execution_count": 58,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Document(page_content='The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table. Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.', metadata={'_oid': '662f2f257677f3c2311a8ff999fd34e5', '_rowid': 'AAAR/xAAEAAAAAnAAC', 'id': '662f2f257677f3c2311a8ff999fd34e5$3$1', 'document_id': '3', 'document_summary': 'Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\\n\\n'})]\n",
"[]\n",
"[(Document(page_content='The database stores LOBs differently from other data types. Creating a LOB column implicitly creates a LOB segment and a LOB index. The tablespace containing the LOB segment and LOB index, which are always stored together, may be different from the tablespace containing the table. Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.', metadata={'_oid': '662f2f257677f3c2311a8ff999fd34e5', '_rowid': 'AAAR/xAAEAAAAAnAAC', 'id': '662f2f257677f3c2311a8ff999fd34e5$3$1', 'document_id': '3', 'document_summary': 'Sometimes the database can store small amounts of LOB data in the table itself rather than in a separate LOB segment.\\n\\n'}), 0.055675752460956573)]\n",
"[]\n",
"[Document(page_content='If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.', metadata={'_oid': '662f2f253acf96b33b430b88699490a2', '_rowid': 'AAAR/xAAEAAAAAnAAA', 'id': '662f2f253acf96b33b430b88699490a2$1$1', 'document_id': '1', 'document_summary': 'If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\\n\\n'})]\n",
"[Document(page_content='If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.', metadata={'_oid': '662f2f253acf96b33b430b88699490a2', '_rowid': 'AAAR/xAAEAAAAAnAAA', 'id': '662f2f253acf96b33b430b88699490a2$1$1', 'document_id': '1', 'document_summary': 'If the answer to any preceding questions is yes, then the database stops the search and allocates space from the specified tablespace; otherwise, space is allocated from the database default shared temporary tablespace.\\n\\n'})]\n"
]
}
],
"source": [
"query = \"What is Oracle AI Vector Store?\"\n",
"filter = {\"document_id\": [\"1\"]}\n",
"\n",
"# Similarity search without a filter\n",
"print(vectorstore.similarity_search(query, 1))\n",
"\n",
"# Similarity search with a filter\n",
"print(vectorstore.similarity_search(query, 1, filter=filter))\n",
"\n",
"# Similarity search with relevance score\n",
"print(vectorstore.similarity_search_with_score(query, 1))\n",
"\n",
"# Similarity search with relevance score with filter\n",
"print(vectorstore.similarity_search_with_score(query, 1, filter=filter))\n",
"\n",
"# Max marginal relevance search\n",
"print(vectorstore.max_marginal_relevance_search(query, 1, fetch_k=20, lambda_mult=0.5))\n",
"\n",
"# Max marginal relevance search with filter\n",
"print(\n",
" vectorstore.max_marginal_relevance_search(\n",
" query, 1, fetch_k=20, lambda_mult=0.5, filter=filter\n",
" )\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -129,7 +129,7 @@
" return obs_message\n",
"\n",
" def _act(self):\n",
" act_message = self.model(self.message_history)\n",
" act_message = self.model.invoke(self.message_history)\n",
" self.message_history.append(act_message)\n",
" action = int(self.action_parser.parse(act_message.content)[\"action\"])\n",
" return action\n",

View File

@@ -84,7 +84,7 @@
"from langchain.retrievers import KayAiRetriever\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo\")\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo\")\n",
"retriever = KayAiRetriever.create(\n",
" dataset_id=\"company\", data_types=[\"PressRelease\"], num_contexts=6\n",
")\n",

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,82 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# RAG using Upstage Layout Analysis and Groundedness Check\n",
"This example illustrates RAG using [Upstage](https://python.langchain.com/docs/integrations/providers/upstage/) Layout Analysis and Groundedness Check."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from typing import List\n",
"\n",
"from langchain_community.vectorstores import DocArrayInMemorySearch\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_core.runnables.base import RunnableSerializable\n",
"from langchain_upstage import (\n",
" ChatUpstage,\n",
" UpstageEmbeddings,\n",
" UpstageGroundednessCheck,\n",
" UpstageLayoutAnalysisLoader,\n",
")\n",
"\n",
"model = ChatUpstage()\n",
"\n",
"files = [\"/PATH/TO/YOUR/FILE.pdf\", \"/PATH/TO/YOUR/FILE2.pdf\"]\n",
"\n",
"loader = UpstageLayoutAnalysisLoader(file_path=files, split=\"element\")\n",
"\n",
"docs = loader.load()\n",
"\n",
"vectorstore = DocArrayInMemorySearch.from_documents(\n",
" docs, embedding=UpstageEmbeddings(model=\"solar-embedding-1-large\")\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"output_parser = StrOutputParser()\n",
"\n",
"retrieved_docs = retriever.get_relevant_documents(\"How many parameters in SOLAR model?\")\n",
"\n",
"groundedness_check = UpstageGroundednessCheck()\n",
"groundedness = \"\"\n",
"while groundedness != \"grounded\":\n",
" chain: RunnableSerializable = RunnablePassthrough() | prompt | model | output_parser\n",
"\n",
" result = chain.invoke(\n",
" {\n",
" \"context\": retrieved_docs,\n",
" \"question\": \"How many parameters in SOLAR model?\",\n",
" }\n",
" )\n",
"\n",
" groundedness = groundedness_check.invoke(\n",
" {\n",
" \"context\": retrieved_docs,\n",
" \"answer\": result,\n",
" }\n",
" )"
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

View File

@@ -39,12 +39,10 @@
"from langchain_community.document_loaders.recursive_url_loader import (\n",
" RecursiveUrlLoader,\n",
")\n",
"\n",
"# noqa\n",
"from langchain_community.vectorstores import Chroma\n",
"\n",
"# For our example, we'll load docs from the web\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter # noqa\n",
"from langchain_text_splitters import RecursiveCharacterTextSplitter\n",
"\n",
"DOCSTORE_DIR = \".\"\n",
"DOCSTORE_ID_KEY = \"doc_id\""

View File

@@ -274,7 +274,7 @@
"db = SQLDatabase.from_uri(\n",
" CONNECTION_STRING\n",
") # We reconnect to db so the new columns are loaded as well.\n",
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)\n",
"llm = ChatOpenAI(model=\"gpt-4\", temperature=0)\n",
"\n",
"sql_query_chain = (\n",
" RunnablePassthrough.assign(schema=get_schema)\n",

View File

@@ -245,7 +245,7 @@
"\n",
"\n",
"def _parse(text):\n",
" return text.strip(\"**\")"
" return text.strip('\"').strip(\"**\")"
]
},
{

View File

@@ -1,28 +1,32 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# SalesGPT - Your Context-Aware AI Sales Assistant With Knowledge Base\n",
"# SalesGPT - Context-Aware AI Sales Assistant With Knowledge Base and Ability Generate Stripe Payment Links\n",
"\n",
"This notebook demonstrates an implementation of a **Context-Aware** AI Sales agent with a Product Knowledge Base. \n",
"This notebook demonstrates an implementation of a **Context-Aware** AI Sales agent with a Product Knowledge Base which can actually close sales. \n",
"\n",
"This notebook was originally published at [filipmichalsky/SalesGPT](https://github.com/filip-michalsky/SalesGPT) by [@FilipMichalsky](https://twitter.com/FilipMichalsky).\n",
"\n",
"SalesGPT is context-aware, which means it can understand what section of a sales conversation it is in and act accordingly.\n",
" \n",
"As such, this agent can have a natural sales conversation with a prospect and behaves based on the conversation stage. Hence, this notebook demonstrates how we can use AI to automate sales development representatives activities, such as outbound sales calls. \n",
"As such, this agent can have a natural sales conversation with a prospect and behaves based on the conversation stage. Hence, this notebook demonstrates how we can use AI to automate sales development representatives activites, such as outbound sales calls. \n",
"\n",
"Additionally, the AI Sales agent has access to tools, which allow it to interact with other systems.\n",
"\n",
"Here, we show how the AI Sales Agent can use a **Product Knowledge Base** to speak about a particular's company offerings,\n",
"hence increasing relevance and reducing hallucinations.\n",
"\n",
"We leverage the [`langchain`](https://github.com/langchain-ai/langchain) library in this implementation, specifically [Custom Agent Configuration](https://langchain-langchain.vercel.app/docs/modules/agents/how_to/custom_agent_with_tool_retrieval) and are inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) architecture ."
"Furthermore, we show how our AI Sales Agent can **generate sales** by integration with the AI Agent Highway called [Mindware](https://www.mindware.co/). In practice, this allows the agent to autonomously generate a payment link for your customers **to pay for your products via Stripe**.\n",
"\n",
"We leverage the [`langchain`](https://github.com/hwchase17/langchain) library in this implementation, specifically [Custom Agent Configuration](https://langchain-langchain.vercel.app/docs/modules/agents/how_to/custom_agent_with_tool_retrieval) and are inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) architecture ."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -38,9 +42,10 @@
"import os\n",
"import re\n",
"\n",
"# import your OpenAI key\n",
"OPENAI_API_KEY = \"sk-xx\"\n",
"os.environ[\"OPENAI_API_KEY\"] = OPENAI_API_KEY\n",
"# make sure you have .env file saved locally with your API keys\n",
"from dotenv import load_dotenv\n",
"\n",
"load_dotenv()\n",
"\n",
"from typing import Any, Callable, Dict, List, Union\n",
"\n",
@@ -49,27 +54,18 @@
"from langchain.agents.conversational.prompt import FORMAT_INSTRUCTIONS\n",
"from langchain.chains import LLMChain, RetrievalQA\n",
"from langchain.chains.base import Chain\n",
"from langchain.llms import BaseLLM\n",
"from langchain.prompts import PromptTemplate\n",
"from langchain.prompts.base import StringPromptTemplate\n",
"from langchain_community.llms import BaseLLM\n",
"from langchain_community.vectorstores import Chroma\n",
"from langchain_core.agents import AgentAction, AgentFinish\n",
"from langchain_openai import ChatOpenAI, OpenAI, OpenAIEmbeddings\n",
"from langchain_text_splitters import CharacterTextSplitter\n",
"from langchain.schema import AgentAction, AgentFinish\n",
"from langchain.text_splitter import CharacterTextSplitter\n",
"from langchain.vectorstores import Chroma\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"from pydantic import BaseModel, Field"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# install additional dependencies\n",
"# ! pip install chromadb openai tiktoken"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -77,19 +73,21 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"1. Seed the SalesGPT agent\n",
"2. Run Sales Agent to decide what to do:\n",
"\n",
" a) Use a tool, such as look up Product Information in a Knowledge Base\n",
" a) Use a tool, such as look up Product Information in a Knowledge Base or Generate a Payment Link\n",
" \n",
" b) Output a response to a user \n",
"3. Run Sales Stage Recognition Agent to recognize which stage is the sales agent at and adjust their behaviour accordingly."
]
},
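To make the three steps above concrete, here is a minimal, self-contained sketch of that seed → stage-analysis → respond loop. This is not the SalesGPT implementation (the real stage analyzer, utterance chain, and tools are defined later in this notebook); `analyze_stage`, `generate_utterance`, and `run_sales_agent` are hypothetical stand-ins with hard-coded behaviour, shown only to illustrate the control flow.

```python
# Illustrative sketch only: plain-Python stand-ins for the SalesGPT control loop.
# No LLM calls are made; every "chain" below is a hard-coded placeholder.

CONVERSATION_STAGES = {
    "1": "Introduction",
    "2": "Qualification",
    "3": "Value proposition",
}


def analyze_stage(conversation_history):
    """Stand-in for the Stage Analyzer chain: pick a stage id from the history."""
    return "1" if not conversation_history else "2"


def generate_utterance(stage, conversation_history):
    """Stand-in for the utterance chain (or a tool call) that produces the agent's reply."""
    return f"[{CONVERSATION_STAGES[stage]}] Hello, this is Ted from Sleep Haven. <END_OF_TURN>"


def run_sales_agent(user_turns):
    history = []  # step 1: seed the agent with an empty conversation history
    for user_text in user_turns:
        stage = analyze_stage(history)  # step 3: recognize the current conversation stage
        # step 2: act -- here we always reply; a real agent may instead call a tool
        reply = generate_utterance(stage, history)
        history.append(f"User: {user_text} <END_OF_TURN>")
        history.append(reply)
    return history


for line in run_sales_agent(["Hi, who is this?", "Tell me about your mattresses."]):
    print(line)
```

The real agent later in this notebook interleaves `determine_conversation_stage()`, `step()`, and `human_step()` in exactly this fashion.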
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -98,15 +96,17 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Architecture diagram\n",
"\n",
"<img src=\"https://singularity-assets-public.s3.amazonaws.com/new_flow.png\" width=\"800\" height=\"440\"/>\n"
"<img src=\"https://demo-bucket-45.s3.amazonaws.com/new_flow2.png\" width=\"800\" height=\"440\">\n"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -131,7 +131,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
@@ -149,7 +149,7 @@
" {conversation_history}\n",
" ===\n",
"\n",
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting only from the following options:\n",
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting ony from the following options:\n",
" 1. Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional.\n",
" 2. Qualification: Qualify the prospect by confirming if they are the right person to talk to regarding your product/service. Ensure that they have the authority to make purchasing decisions.\n",
" 3. Value proposition: Briefly explain how your product/service can benefit the prospect. Focus on the unique selling points and value proposition of your product/service that sets it apart from competitors.\n",
@@ -171,7 +171,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
@@ -223,7 +223,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@@ -240,13 +240,17 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# test the intermediate chains\n",
"verbose = True\n",
"llm = ChatOpenAI(temperature=0.9)\n",
"llm = ChatOpenAI(\n",
" model=\"gpt-4-turbo-preview\",\n",
" temperature=0.9,\n",
" openai_api_key=os.getenv(\"OPENAI_API_KEY\"),\n",
")\n",
"\n",
"stage_analyzer_chain = StageAnalyzerChain.from_llm(llm, verbose=verbose)\n",
"\n",
@@ -257,7 +261,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 6,
"metadata": {},
"outputs": [
{
@@ -276,7 +280,7 @@
" \n",
" ===\n",
"\n",
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting only from the following options:\n",
" Now determine what should be the next immediate conversation stage for the agent in the sales conversation by selecting ony from the following options:\n",
" 1. Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional.\n",
" 2. Qualification: Qualify the prospect by confirming if they are the right person to talk to regarding your product/service. Ensure that they have the authority to make purchasing decisions.\n",
" 3. Value proposition: Briefly explain how your product/service can benefit the prospect. Focus on the unique selling points and value proposition of your product/service that sets it apart from competitors.\n",
@@ -296,21 +300,21 @@
{
"data": {
"text/plain": [
"'1'"
"{'conversation_history': '', 'text': '1'}"
]
},
"execution_count": 7,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"stage_analyzer_chain.run(conversation_history=\"\")"
"stage_analyzer_chain.invoke({\"conversation_history\": \"\"})"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 7,
"metadata": {},
"outputs": [
{
@@ -352,32 +356,44 @@
{
"data": {
"text/plain": [
"\"I'm doing great, thank you for asking! As a Business Development Representative at Sleep Haven, I wanted to reach out to see if you are looking to achieve a better night's sleep. We provide premium mattresses that offer the most comfortable and supportive sleeping experience possible. Are you interested in exploring our sleep solutions? <END_OF_TURN>\""
"{'salesperson_name': 'Ted Lasso',\n",
" 'salesperson_role': 'Business Development Representative',\n",
" 'company_name': 'Sleep Haven',\n",
" 'company_business': 'Sleep Haven is a premium mattress company that provides customers with the most comfortable and supportive sleeping experience possible. We offer a range of high-quality mattresses, pillows, and bedding accessories that are designed to meet the unique needs of our customers.',\n",
" 'company_values': \"Our mission at Sleep Haven is to help people achieve a better night's sleep by providing them with the best possible sleep solutions. We believe that quality sleep is essential to overall health and well-being, and we are committed to helping our customers achieve optimal sleep by offering exceptional products and customer service.\",\n",
" 'conversation_purpose': 'find out whether they are looking to achieve better sleep via buying a premier mattress.',\n",
" 'conversation_history': 'Hello, this is Ted Lasso from Sleep Haven. How are you doing today? <END_OF_TURN>\\nUser: I am well, howe are you?<END_OF_TURN>',\n",
" 'conversation_type': 'call',\n",
" 'conversation_stage': 'Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional. Your greeting should be welcoming. Always clarify in your greeting the reason why you are contacting the prospect.',\n",
" 'text': \"I'm doing well, thank you for asking. The reason I'm calling is to discuss how Sleep Haven can help enhance your sleep quality with our premium mattresses. Are you currently looking for ways to achieve a better night's sleep? <END_OF_TURN>\"}"
]
},
"execution_count": 8,
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"sales_conversation_utterance_chain.run(\n",
" salesperson_name=\"Ted Lasso\",\n",
" salesperson_role=\"Business Development Representative\",\n",
" company_name=\"Sleep Haven\",\n",
" company_business=\"Sleep Haven is a premium mattress company that provides customers with the most comfortable and supportive sleeping experience possible. We offer a range of high-quality mattresses, pillows, and bedding accessories that are designed to meet the unique needs of our customers.\",\n",
" company_values=\"Our mission at Sleep Haven is to help people achieve a better night's sleep by providing them with the best possible sleep solutions. We believe that quality sleep is essential to overall health and well-being, and we are committed to helping our customers achieve optimal sleep by offering exceptional products and customer service.\",\n",
" conversation_purpose=\"find out whether they are looking to achieve better sleep via buying a premier mattress.\",\n",
" conversation_history=\"Hello, this is Ted Lasso from Sleep Haven. How are you doing today? <END_OF_TURN>\\nUser: I am well, howe are you?<END_OF_TURN>\",\n",
" conversation_type=\"call\",\n",
" conversation_stage=conversation_stages.get(\n",
" \"1\",\n",
" \"Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional.\",\n",
" ),\n",
"sales_conversation_utterance_chain.invoke(\n",
" {\n",
" \"salesperson_name\": \"Ted Lasso\",\n",
" \"salesperson_role\": \"Business Development Representative\",\n",
" \"company_name\": \"Sleep Haven\",\n",
" \"company_business\": \"Sleep Haven is a premium mattress company that provides customers with the most comfortable and supportive sleeping experience possible. We offer a range of high-quality mattresses, pillows, and bedding accessories that are designed to meet the unique needs of our customers.\",\n",
" \"company_values\": \"Our mission at Sleep Haven is to help people achieve a better night's sleep by providing them with the best possible sleep solutions. We believe that quality sleep is essential to overall health and well-being, and we are committed to helping our customers achieve optimal sleep by offering exceptional products and customer service.\",\n",
" \"conversation_purpose\": \"find out whether they are looking to achieve better sleep via buying a premier mattress.\",\n",
" \"conversation_history\": \"Hello, this is Ted Lasso from Sleep Haven. How are you doing today? <END_OF_TURN>\\nUser: I am well, howe are you?<END_OF_TURN>\",\n",
" \"conversation_type\": \"call\",\n",
" \"conversation_stage\": conversation_stages.get(\n",
" \"1\",\n",
" \"Introduction: Start the conversation by introducing yourself and your company. Be polite and respectful while keeping the tone of the conversation professional.\",\n",
" ),\n",
" }\n",
")"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -385,6 +401,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -395,7 +412,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
@@ -429,7 +446,7 @@
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
@@ -445,7 +462,7 @@
" text_splitter = CharacterTextSplitter(chunk_size=10, chunk_overlap=0)\n",
" texts = text_splitter.split_text(product_catalog)\n",
"\n",
" llm = OpenAI(temperature=0)\n",
" llm = ChatOpenAI(temperature=0)\n",
" embeddings = OpenAIEmbeddings()\n",
" docsearch = Chroma.from_texts(\n",
" texts, embeddings, collection_name=\"product-knowledge-base\"\n",
@@ -454,29 +471,12 @@
" knowledge_base = RetrievalQA.from_chain_type(\n",
" llm=llm, chain_type=\"stuff\", retriever=docsearch.as_retriever()\n",
" )\n",
" return knowledge_base\n",
"\n",
"\n",
"def get_tools(product_catalog):\n",
" # query to get_tools can be used to be embedded and relevant tools found\n",
" # see here: https://langchain-langchain.vercel.app/docs/use_cases/agents/custom_agent_with_plugin_retrieval#tool-retriever\n",
"\n",
" # we only use one tool for now, but this is highly extensible!\n",
" knowledge_base = setup_knowledge_base(product_catalog)\n",
" tools = [\n",
" Tool(\n",
" name=\"ProductSearch\",\n",
" func=knowledge_base.run,\n",
" description=\"useful for when you need to answer questions about product information\",\n",
" )\n",
" ]\n",
"\n",
" return tools"
" return knowledge_base"
]
},
{
"cell_type": "code",
"execution_count": 11,
"execution_count": 10,
"metadata": {},
"outputs": [
{
@@ -485,16 +485,18 @@
"text": [
"Created a chunk of size 940, which is longer than the specified 10\n",
"Created a chunk of size 844, which is longer than the specified 10\n",
"Created a chunk of size 837, which is longer than the specified 10\n"
"Created a chunk of size 837, which is longer than the specified 10\n",
"/Users/filipmichalsky/Odyssey/sales_bot/SalesGPT/env/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.\n",
" warn_deprecated(\n"
]
},
{
"data": {
"text/plain": [
"' We have four products available: the Classic Harmony Spring Mattress, the Plush Serenity Bamboo Mattress, the Luxury Cloud-Comfort Memory Foam Mattress, and the EcoGreen Hybrid Latex Mattress. Each product is available in different sizes, with the Classic Harmony Spring Mattress available in Queen and King sizes, the Plush Serenity Bamboo Mattress available in King size, the Luxury Cloud-Comfort Memory Foam Mattress available in Twin, Queen, and King sizes, and the EcoGreen Hybrid Latex Mattress available in Twin and Full sizes.'"
"'The Sleep Haven products available are:\\n\\n1. Luxury Cloud-Comfort Memory Foam Mattress\\n2. Classic Harmony Spring Mattress\\n3. EcoGreen Hybrid Latex Mattress\\n4. Plush Serenity Bamboo Mattress\\n\\nEach product has its unique features and price point.'"
]
},
"execution_count": 11,
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
@@ -508,12 +510,199 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up the SalesGPT Controller with the Sales Agent and Stage Analyzer and a Knowledge Base"
"### Payment gateway"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In order to set up your AI agent to use a payment gateway to generate payment links for your users you need two things:\n",
"\n",
"1. Sign up for a Stripe account and obtain a STRIPE API KEY\n",
"2. Create products you would like to sell in the Stripe UI. Then follow out example of `example_product_price_id_mapping.json`\n",
"to feed the product name to price_id mapping which allows you to generate the payment links."
]
},
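As a small aid for step 1: the helpers defined in the next cells read the Stripe key and the gateway URL from environment variables via `os.getenv`. A minimal sketch, assuming you export them yourself; the values below are placeholders, not real credentials, and in this notebook they would normally come from the `.env` file loaded with `load_dotenv` earlier.

```python
import os

# Placeholders only -- substitute your own Stripe key. The gateway URL default
# matches the example testing gateway used later in this notebook.
os.environ.setdefault("STRIPE_API_KEY", "sk_test_your_key_here")
os.environ.setdefault(
    "PAYMENT_GATEWAY_URL", "https://agent-payments-gateway.vercel.app/payment"
)
```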
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"from litellm import completion\n",
"\n",
"# set GPT model env variable\n",
"os.environ[\"GPT_MODEL\"] = \"gpt-4-turbo-preview\"\n",
"\n",
"product_price_id_mapping = {\n",
" \"ai-consulting-services\": \"price_1Ow8ofB795AYY8p1goWGZi6m\",\n",
" \"Luxury Cloud-Comfort Memory Foam Mattress\": \"price_1Owv99B795AYY8p1mjtbKyxP\",\n",
" \"Classic Harmony Spring Mattress\": \"price_1Owv9qB795AYY8p1tPcxCM6T\",\n",
" \"EcoGreen Hybrid Latex Mattress\": \"price_1OwvLDB795AYY8p1YBAMBcbi\",\n",
" \"Plush Serenity Bamboo Mattress\": \"price_1OwvMQB795AYY8p1hJN2uS3S\",\n",
"}\n",
"with open(\"example_product_price_id_mapping.json\", \"w\") as f:\n",
" json.dump(product_price_id_mapping, f)\n",
"\n",
"\n",
"def get_product_id_from_query(query, product_price_id_mapping_path):\n",
" # Load product_price_id_mapping from a JSON file\n",
" with open(product_price_id_mapping_path, \"r\") as f:\n",
" product_price_id_mapping = json.load(f)\n",
"\n",
" # Serialize the product_price_id_mapping to a JSON string for inclusion in the prompt\n",
" product_price_id_mapping_json_str = json.dumps(product_price_id_mapping)\n",
"\n",
" # Dynamically create the enum list from product_price_id_mapping keys\n",
" enum_list = list(product_price_id_mapping.values()) + [\n",
" \"No relevant product id found\"\n",
" ]\n",
" enum_list_str = json.dumps(enum_list)\n",
"\n",
" prompt = f\"\"\"\n",
" You are an expert data scientist and you are working on a project to recommend products to customers based on their needs.\n",
" Given the following query:\n",
" {query}\n",
" and the following product price id mapping:\n",
" {product_price_id_mapping_json_str}\n",
" return the price id that is most relevant to the query.\n",
" ONLY return the price id, no other text. If no relevant price id is found, return 'No relevant price id found'.\n",
" Your output will follow this schema:\n",
" {{\n",
" \"$schema\": \"http://json-schema.org/draft-07/schema#\",\n",
" \"title\": \"Price ID Response\",\n",
" \"type\": \"object\",\n",
" \"properties\": {{\n",
" \"price_id\": {{\n",
" \"type\": \"string\",\n",
" \"enum\": {enum_list_str}\n",
" }}\n",
" }},\n",
" \"required\": [\"price_id\"]\n",
" }}\n",
" Return a valid directly parsable json, dont return in it within a code snippet or add any kind of explanation!!\n",
" \"\"\"\n",
" prompt += \"{\"\n",
" response = completion(\n",
" model=os.getenv(\"GPT_MODEL\", \"gpt-3.5-turbo-1106\"),\n",
" messages=[{\"content\": prompt, \"role\": \"user\"}],\n",
" max_tokens=1000,\n",
" temperature=0,\n",
" )\n",
"\n",
" product_id = response.choices[0].message.content.strip()\n",
" return product_id"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"import requests\n",
"\n",
"\n",
"def generate_stripe_payment_link(query: str) -> str:\n",
" \"\"\"Generate a stripe payment link for a customer based on a single query string.\"\"\"\n",
"\n",
" # example testing payment gateway url\n",
" PAYMENT_GATEWAY_URL = os.getenv(\n",
" \"PAYMENT_GATEWAY_URL\", \"https://agent-payments-gateway.vercel.app/payment\"\n",
" )\n",
" PRODUCT_PRICE_MAPPING = \"example_product_price_id_mapping.json\"\n",
"\n",
" # use LLM to get the price_id from query\n",
" price_id = get_product_id_from_query(query, PRODUCT_PRICE_MAPPING)\n",
" price_id = json.loads(price_id)\n",
" payload = json.dumps(\n",
" {\"prompt\": query, **price_id, \"stripe_key\": os.getenv(\"STRIPE_API_KEY\")}\n",
" )\n",
" headers = {\n",
" \"Content-Type\": \"application/json\",\n",
" }\n",
"\n",
" response = requests.request(\n",
" \"POST\", PAYMENT_GATEWAY_URL, headers=headers, data=payload\n",
" )\n",
" return response.text"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'{\"response\":\"https://buy.stripe.com/test_6oEbLS8JB1F9bv229d\"}'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"generate_stripe_payment_link(\n",
" query=\"Please generate a payment link for John Doe to buy two mattresses - the Classic Harmony Spring Mattress\"\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup agent tools"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"def get_tools(product_catalog):\n",
" # query to get_tools can be used to be embedded and relevant tools found\n",
" # see here: https://langchain-langchain.vercel.app/docs/use_cases/agents/custom_agent_with_plugin_retrieval#tool-retriever\n",
"\n",
" # we only use one tool for now, but this is highly extensible!\n",
" knowledge_base = setup_knowledge_base(product_catalog)\n",
" tools = [\n",
" Tool(\n",
" name=\"ProductSearch\",\n",
" func=knowledge_base.run,\n",
" description=\"useful for when you need to answer questions about product information or services offered, availability and their costs.\",\n",
" ),\n",
" Tool(\n",
" name=\"GeneratePaymentLink\",\n",
" func=generate_stripe_payment_link,\n",
" description=\"useful to close a transaction with a customer. You need to include product name and quantity and customer name in the query input.\",\n",
" ),\n",
" ]\n",
"\n",
" return tools"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### Set up the SalesGPT Controller with the Sales Agent and Stage Analyzer\n",
"\n",
"#### The Agent has access to a Knowledge Base and can autonomously sell your products via Stripe"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
@@ -563,19 +752,11 @@
" print(\"TEXT\")\n",
" print(text)\n",
" print(\"-------\")\n",
" if f\"{self.ai_prefix}:\" in text:\n",
" return AgentFinish(\n",
" {\"output\": text.split(f\"{self.ai_prefix}:\")[-1].strip()}, text\n",
" )\n",
" regex = r\"Action: (.*?)[\\n]*Action Input: (.*)\"\n",
" match = re.search(regex, text)\n",
" if not match:\n",
" ## TODO - this is not entirely reliable, sometimes results in an error.\n",
" return AgentFinish(\n",
" {\n",
" \"output\": \"I apologize, I was unable to find the answer to your question. Is there anything else I can help with?\"\n",
" },\n",
" text,\n",
" {\"output\": text.split(f\"{self.ai_prefix}:\")[-1].strip()}, text\n",
" )\n",
" # raise OutputParserException(f\"Could not parse LLM output: `{text}`\")\n",
" action = match.group(1)\n",
@@ -589,7 +770,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 19,
"metadata": {},
"outputs": [],
"source": [
@@ -647,18 +828,18 @@
"Previous conversation history:\n",
"{conversation_history}\n",
"\n",
"{salesperson_name}:\n",
"Thought:\n",
"{agent_scratchpad}\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 20,
"metadata": {},
"outputs": [],
"source": [
"class SalesGPT(Chain, BaseModel):\n",
"class SalesGPT(Chain):\n",
" \"\"\"Controller model for the Sales Agent.\"\"\"\n",
"\n",
" conversation_history: List[str] = []\n",
@@ -804,7 +985,9 @@
"\n",
" # WARNING: this output parser is NOT reliable yet\n",
" ## It makes assumptions about output from LLM which can break and throw an error\n",
" output_parser = SalesConvoOutputParser(ai_prefix=kwargs[\"salesperson_name\"])\n",
" output_parser = SalesConvoOutputParser(\n",
" ai_prefix=kwargs[\"salesperson_name\"], verbose=verbose\n",
" )\n",
"\n",
" sales_agent_with_tools = LLMSingleActionAgent(\n",
" llm_chain=llm_chain,\n",
@@ -828,6 +1011,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -835,6 +1019,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -843,7 +1028,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
@@ -880,6 +1065,7 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -888,7 +1074,7 @@
},
{
"cell_type": "code",
"execution_count": 16,
"execution_count": 22,
"metadata": {},
"outputs": [
{
@@ -897,7 +1083,9 @@
"text": [
"Created a chunk of size 940, which is longer than the specified 10\n",
"Created a chunk of size 844, which is longer than the specified 10\n",
"Created a chunk of size 837, which is longer than the specified 10\n"
"Created a chunk of size 837, which is longer than the specified 10\n",
"/Users/filipmichalsky/Odyssey/sales_bot/SalesGPT/env/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class `langchain.agents.agent.LLMSingleActionAgent` was deprecated in langchain 0.1.0 and will be removed in 0.2.0. Use Use new agent constructor methods like create_react_agent, create_json_agent, create_structured_chat_agent, etc. instead.\n",
" warn_deprecated(\n"
]
}
],
@@ -907,7 +1095,7 @@
},
{
"cell_type": "code",
"execution_count": 17,
"execution_count": 21,
"metadata": {},
"outputs": [],
"source": [
@@ -917,7 +1105,7 @@
},
{
"cell_type": "code",
"execution_count": 18,
"execution_count": 22,
"metadata": {},
"outputs": [
{
@@ -934,14 +1122,14 @@
},
{
"cell_type": "code",
"execution_count": 19,
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: Hello, this is Ted Lasso from Sleep Haven. How are you doing today?\n"
"Ted Lasso: Good day! This is Ted Lasso from Sleep Haven. How are you doing today?\n"
]
}
],
@@ -951,18 +1139,18 @@
},
{
"cell_type": "code",
"execution_count": 20,
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\n",
" \"I am well, how are you? I would like to learn more about your mattresses.\"\n",
" \"I am well, how are you? I would like to learn more about your services.\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"execution_count": 25,
"metadata": {},
"outputs": [
{
@@ -977,92 +1165,32 @@
"sales_agent.determine_conversation_stage()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: I'm glad to hear that you're doing well! As for our mattresses, at Sleep Haven, we provide customers with the most comfortable and supportive sleeping experience possible. Our high-quality mattresses are designed to meet the unique needs of our customers. Can I ask what specifically you'd like to learn more about? \n"
]
}
],
"source": [
"sales_agent.step()"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\"Yes, what materials are you mattresses made from?\")"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Conversation Stage: Needs analysis: Ask open-ended questions to uncover the prospect's needs and pain points. Listen carefully to their responses and take notes.\n"
]
}
],
"source": [
"sales_agent.determine_conversation_stage()"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: Our mattresses are made from a variety of materials, depending on the model. We have the EcoGreen Hybrid Latex Mattress, which is made from 100% natural latex harvested from eco-friendly plantations. The Plush Serenity Bamboo Mattress features a layer of plush, adaptive foam and a base of high-resilience support foam, with a bamboo-infused top layer. The Luxury Cloud-Comfort Memory Foam Mattress has an innovative, temperature-sensitive memory foam layer and a high-density foam base with cooling gel-infused particles. Finally, the Classic Harmony Spring Mattress has a robust inner spring construction and layers of plush padding, with a quilted top layer and a natural cotton cover. Is there anything specific you'd like to know about these materials?\n"
]
}
],
"source": [
"sales_agent.step()"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: I'm doing great, thank you for asking! I'm glad to hear you're interested. Sleep Haven is a premium mattress company, and we're all about offering the best sleep solutions, including top-notch mattresses, pillows, and bedding accessories. Our mission is to help you achieve a better night's sleep. May I know if you're looking to enhance your sleep experience with a new mattress or bedding accessories? \n"
]
}
],
"source": [
"sales_agent.human_step(\n",
" \"Yes, I am looking for a queen sized mattress. Do you have any mattresses in queen size?\"\n",
")"
"sales_agent.step()"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Conversation Stage: Needs analysis: Ask open-ended questions to uncover the prospect's needs and pain points. Listen carefully to their responses and take notes.\n"
]
}
],
"outputs": [],
"source": [
"sales_agent.determine_conversation_stage()"
"sales_agent.human_step(\n",
" \"Yes, I would like to improve my sleep. Can you tell me more about your products?\"\n",
")"
]
},
{
@@ -1074,7 +1202,24 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: Yes, we do have queen-sized mattresses available. We offer the Luxury Cloud-Comfort Memory Foam Mattress and the Classic Harmony Spring Mattress in queen size. Both mattresses provide exceptional comfort and support. Is there anything specific you would like to know about these options?\n"
"Conversation Stage: Needs analysis: Ask open-ended questions to uncover the prospect's needs and pain points. Listen carefully to their responses and take notes.\n"
]
}
],
"source": [
"sales_agent.determine_conversation_stage()"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: Absolutely, I'd be happy to share more about our products. At Sleep Haven, we offer a variety of high-quality mattresses designed to cater to different sleeping preferences and needs. Whether you're looking for memory foam's comfort, the support of hybrid mattresses, or the breathability of natural latex, we have options for everyone. Our pillows and bedding accessories are similarly curated to enhance your sleep quality. Every product is built with the aim of helping you achieve the restful night's sleep you deserve. What specific features are you looking for in a mattress? \n"
]
}
],
@@ -1084,16 +1229,16 @@
},
{
"cell_type": "code",
"execution_count": 29,
"execution_count": 31,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\"Yea, compare and contrast those two options, please.\")"
"sales_agent.human_step(\"What mattresses do you have and how much do they cost?\")"
]
},
{
"cell_type": "code",
"execution_count": 30,
"execution_count": 32,
"metadata": {},
"outputs": [
{
@@ -1110,14 +1255,14 @@
},
{
"cell_type": "code",
"execution_count": 31,
"execution_count": 33,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: The Luxury Cloud-Comfort Memory Foam Mattress is priced at $999 and is available in Twin, Queen, and King sizes. It features an innovative, temperature-sensitive memory foam layer and a high-density foam base. On the other hand, the Classic Harmony Spring Mattress is priced at $1,299 and is available in Queen and King sizes. It features a robust inner spring construction and layers of plush padding. Both mattresses provide exceptional comfort and support, but the Classic Harmony Spring Mattress may be a better option if you prefer the traditional feel of an inner spring mattress. Do you have any other questions about these options?\n"
"Ted Lasso: We offer two primary types of mattresses at Sleep Haven. The first is our Luxury Cloud-Comfort Memory Foam Mattress, which is priced at $999 and comes in Twin, Queen, and King sizes. The second is our Classic Harmony Spring Mattress, priced at $1,299, available in Queen and King sizes. Both are designed to provide exceptional comfort and support for a better night's sleep. Which type of mattress would you be interested in learning more about? \n"
]
}
],
@@ -1127,14 +1272,66 @@
},
{
"cell_type": "code",
"execution_count": 32,
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\n",
" \"Great, thanks, that's it. I will talk to my wife and call back if she is onboard. Have a good day!\"\n",
" \"Okay.I would like to order two Memory Foam mattresses in Twin size please.\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Conversation Stage: Close: Ask for the sale by proposing a next step. This could be a demo, a trial or a meeting with decision-makers. Ensure to summarize what has been discussed and reiterate the benefits.\n"
]
}
],
"source": [
"sales_agent.determine_conversation_stage()"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Ted Lasso: Fantastic choice! You're on your way to a better night's sleep with our Luxury Cloud-Comfort Memory Foam Mattresses. I've generated a payment link for two Twin size mattresses for you. Here is the link to complete your purchase: https://buy.stripe.com/test_6oEg28e3V97BdDabJn. Is there anything else I can assist you with today? \n"
]
}
],
"source": [
"sales_agent.step()"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [],
"source": [
"sales_agent.human_step(\n",
" \"Great, thanks! I will discuss with my wife and will buy it if she is onboard. Have a good day!\"\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -1153,9 +1350,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
"version": "3.10.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}

View File

@@ -355,15 +355,15 @@
"metadata": {},
"outputs": [],
"source": [
"attribute_info[-2][\n",
" \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['starrating'].value_counts().index.tolist())}\"\n",
"attribute_info[3][\n",
" \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['maxoccupancy'].value_counts().index.tolist())}\"\n",
"attribute_info[-3][\n",
" \"description\"\n",
"] += f\". Valid values are {sorted(latest_price['country'].value_counts().index.tolist())}\""
"attribute_info[-2][\"description\"] += (\n",
" f\". Valid values are {sorted(latest_price['starrating'].value_counts().index.tolist())}\"\n",
")\n",
"attribute_info[3][\"description\"] += (\n",
" f\". Valid values are {sorted(latest_price['maxoccupancy'].value_counts().index.tolist())}\"\n",
")\n",
"attribute_info[-3][\"description\"] += (\n",
" f\". Valid values are {sorted(latest_price['country'].value_counts().index.tolist())}\"\n",
")"
]
},
{
@@ -688,9 +688,9 @@
"metadata": {},
"outputs": [],
"source": [
"attribute_info[-3][\n",
" \"description\"\n",
"] += \". NOTE: Only use the 'eq' operator if a specific country is mentioned. If a region is mentioned, include all relevant countries in filter.\"\n",
"attribute_info[-3][\"description\"] += (\n",
" \". NOTE: Only use the 'eq' operator if a specific country is mentioned. If a region is mentioned, include all relevant countries in filter.\"\n",
")\n",
"chain = load_query_constructor_runnable(\n",
" ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0),\n",
" doc_contents,\n",
@@ -1227,7 +1227,7 @@
}
],
"source": [
"results = retriever.get_relevant_documents(\n",
"results = retriever.invoke(\n",
" \"I want to stay somewhere highly rated along the coast. I want a room with a patio and a fireplace.\"\n",
")\n",
"for res in results:\n",

View File

@@ -22,7 +22,8 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.agents import AgentExecutor, Tool, ZeroShotAgent\n",
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, Tool, ZeroShotAgent, create_react_agent\n",
"from langchain.chains import LLMChain\n",
"from langchain.memory import ConversationBufferMemory, ReadOnlySharedMemory\n",
"from langchain.prompts import PromptTemplate\n",
@@ -84,19 +85,7 @@
"metadata": {},
"outputs": [],
"source": [
"prefix = \"\"\"Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"Begin!\"\n",
"\n",
"{chat_history}\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools,\n",
" prefix=prefix,\n",
" suffix=suffix,\n",
" input_variables=[\"input\", \"chat_history\", \"agent_scratchpad\"],\n",
")"
"prompt = hub.pull(\"hwchase17/react\")"
]
},
{
@@ -114,16 +103,14 @@
"metadata": {},
"outputs": [],
"source": [
"llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)\n",
"agent_chain = AgentExecutor.from_agent_and_tools(\n",
" agent=agent, tools=tools, verbose=True, memory=memory\n",
")"
"model = OpenAI()\n",
"agent = create_react_agent(model, tools, prompt)\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 36,
"id": "ca4bc1fb",
"metadata": {},
"outputs": [
@@ -133,15 +120,15 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I should research ChatGPT to answer this question.\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I should research ChatGPT to answer this question.\n",
"Action: Search\n",
"Action Input: \"ChatGPT\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mNov 30, 2022 ... We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... ChatGPT. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... Feb 2, 2023 ... ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after ... 2 days ago ... ChatGPT recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how ... An API for accessing new AI models developed by OpenAI. Feb 19, 2023 ... ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You ... ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human ... 3 days ago ... Visual ChatGPT connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting. Dec 1, 2022 ... ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\u001b[0m\n",
"Action Input: \"ChatGPT\"\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3mNov 30, 2022 ... We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... ChatGPT. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... Feb 2, 2023 ... ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after ... 2 days ago ... ChatGPT recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how ... An API for accessing new AI models developed by OpenAI. Feb 19, 2023 ... ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You ... ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human ... 3 days ago ... Visual ChatGPT connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting. Dec 1, 2022 ... ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a ...\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\u001B[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
@@ -153,10 +140,40 @@
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
},
{
"ename": "KeyboardInterrupt",
"evalue": "",
"output_type": "error",
"traceback": [
"\u001B[0;31m---------------------------------------------------------------------------\u001B[0m",
"\u001B[0;31mKeyboardInterrupt\u001B[0m Traceback (most recent call last)",
"Cell \u001B[0;32mIn[36], line 1\u001B[0m\n\u001B[0;32m----> 1\u001B[0m \u001B[43magent_executor\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43minvoke\u001B[49m\u001B[43m(\u001B[49m\u001B[43m{\u001B[49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[38;5;124;43minput\u001B[39;49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[43m:\u001B[49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[38;5;124;43mWhat is ChatGPT?\u001B[39;49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[43m}\u001B[49m\u001B[43m)\u001B[49m\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/chains/base.py:163\u001B[0m, in \u001B[0;36mChain.invoke\u001B[0;34m(self, input, config, **kwargs)\u001B[0m\n\u001B[1;32m 161\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[1;32m 162\u001B[0m run_manager\u001B[38;5;241m.\u001B[39mon_chain_error(e)\n\u001B[0;32m--> 163\u001B[0m \u001B[38;5;28;01mraise\u001B[39;00m e\n\u001B[1;32m 164\u001B[0m run_manager\u001B[38;5;241m.\u001B[39mon_chain_end(outputs)\n\u001B[1;32m 166\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m include_run_info:\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/chains/base.py:153\u001B[0m, in \u001B[0;36mChain.invoke\u001B[0;34m(self, input, config, **kwargs)\u001B[0m\n\u001B[1;32m 150\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m 151\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_validate_inputs(inputs)\n\u001B[1;32m 152\u001B[0m outputs \u001B[38;5;241m=\u001B[39m (\n\u001B[0;32m--> 153\u001B[0m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_call\u001B[49m\u001B[43m(\u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 154\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m new_arg_supported\n\u001B[1;32m 155\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_call(inputs)\n\u001B[1;32m 156\u001B[0m )\n\u001B[1;32m 158\u001B[0m final_outputs: Dict[\u001B[38;5;28mstr\u001B[39m, Any] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mprep_outputs(\n\u001B[1;32m 159\u001B[0m inputs, outputs, return_only_outputs\n\u001B[1;32m 160\u001B[0m )\n\u001B[1;32m 161\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mBaseException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/agents/agent.py:1432\u001B[0m, in \u001B[0;36mAgentExecutor._call\u001B[0;34m(self, inputs, run_manager)\u001B[0m\n\u001B[1;32m 1430\u001B[0m \u001B[38;5;66;03m# We now enter the agent loop (until it returns something).\u001B[39;00m\n\u001B[1;32m 1431\u001B[0m \u001B[38;5;28;01mwhile\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_should_continue(iterations, time_elapsed):\n\u001B[0;32m-> 1432\u001B[0m next_step_output \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_take_next_step\u001B[49m\u001B[43m(\u001B[49m\n\u001B[1;32m 1433\u001B[0m \u001B[43m \u001B[49m\u001B[43mname_to_tool_map\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1434\u001B[0m \u001B[43m \u001B[49m\u001B[43mcolor_mapping\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1435\u001B[0m \u001B[43m \u001B[49m\u001B[43minputs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1436\u001B[0m \u001B[43m \u001B[49m\u001B[43mintermediate_steps\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1437\u001B[0m \u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1438\u001B[0m \u001B[43m \u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1439\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28misinstance\u001B[39m(next_step_output, AgentFinish):\n\u001B[1;32m 1440\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_return(\n\u001B[1;32m 1441\u001B[0m next_step_output, intermediate_steps, run_manager\u001B[38;5;241m=\u001B[39mrun_manager\n\u001B[1;32m 1442\u001B[0m )\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/agents/agent.py:1138\u001B[0m, in \u001B[0;36mAgentExecutor._take_next_step\u001B[0;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001B[0m\n\u001B[1;32m 1129\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m_take_next_step\u001B[39m(\n\u001B[1;32m 1130\u001B[0m \u001B[38;5;28mself\u001B[39m,\n\u001B[1;32m 1131\u001B[0m name_to_tool_map: Dict[\u001B[38;5;28mstr\u001B[39m, BaseTool],\n\u001B[0;32m (...)\u001B[0m\n\u001B[1;32m 1135\u001B[0m run_manager: Optional[CallbackManagerForChainRun] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[1;32m 1136\u001B[0m ) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m Union[AgentFinish, List[Tuple[AgentAction, \u001B[38;5;28mstr\u001B[39m]]]:\n\u001B[1;32m 1137\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_consume_next_step(\n\u001B[0;32m-> 1138\u001B[0m [\n\u001B[1;32m 1139\u001B[0m a\n\u001B[1;32m 1140\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m a \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_iter_next_step(\n\u001B[1;32m 1141\u001B[0m name_to_tool_map,\n\u001B[1;32m 1142\u001B[0m color_mapping,\n\u001B[1;32m 1143\u001B[0m inputs,\n\u001B[1;32m 1144\u001B[0m intermediate_steps,\n\u001B[1;32m 1145\u001B[0m run_manager,\n\u001B[1;32m 1146\u001B[0m )\n\u001B[1;32m 1147\u001B[0m ]\n\u001B[1;32m 1148\u001B[0m )\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/agents/agent.py:1138\u001B[0m, in \u001B[0;36m<listcomp>\u001B[0;34m(.0)\u001B[0m\n\u001B[1;32m 1129\u001B[0m \u001B[38;5;28;01mdef\u001B[39;00m \u001B[38;5;21m_take_next_step\u001B[39m(\n\u001B[1;32m 1130\u001B[0m \u001B[38;5;28mself\u001B[39m,\n\u001B[1;32m 1131\u001B[0m name_to_tool_map: Dict[\u001B[38;5;28mstr\u001B[39m, BaseTool],\n\u001B[0;32m (...)\u001B[0m\n\u001B[1;32m 1135\u001B[0m run_manager: Optional[CallbackManagerForChainRun] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[1;32m 1136\u001B[0m ) \u001B[38;5;241m-\u001B[39m\u001B[38;5;241m>\u001B[39m Union[AgentFinish, List[Tuple[AgentAction, \u001B[38;5;28mstr\u001B[39m]]]:\n\u001B[1;32m 1137\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_consume_next_step(\n\u001B[0;32m-> 1138\u001B[0m [\n\u001B[1;32m 1139\u001B[0m a\n\u001B[1;32m 1140\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m a \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_iter_next_step(\n\u001B[1;32m 1141\u001B[0m name_to_tool_map,\n\u001B[1;32m 1142\u001B[0m color_mapping,\n\u001B[1;32m 1143\u001B[0m inputs,\n\u001B[1;32m 1144\u001B[0m intermediate_steps,\n\u001B[1;32m 1145\u001B[0m run_manager,\n\u001B[1;32m 1146\u001B[0m )\n\u001B[1;32m 1147\u001B[0m ]\n\u001B[1;32m 1148\u001B[0m )\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/agents/agent.py:1223\u001B[0m, in \u001B[0;36mAgentExecutor._iter_next_step\u001B[0;34m(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)\u001B[0m\n\u001B[1;32m 1221\u001B[0m \u001B[38;5;28;01myield\u001B[39;00m agent_action\n\u001B[1;32m 1222\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m agent_action \u001B[38;5;129;01min\u001B[39;00m actions:\n\u001B[0;32m-> 1223\u001B[0m \u001B[38;5;28;01myield\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_perform_agent_action\u001B[49m\u001B[43m(\u001B[49m\n\u001B[1;32m 1224\u001B[0m \u001B[43m \u001B[49m\u001B[43mname_to_tool_map\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mcolor_mapping\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43magent_action\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\n\u001B[1;32m 1225\u001B[0m \u001B[43m \u001B[49m\u001B[43m)\u001B[49m\n",
"File \u001B[0;32m~/code/langchain/libs/langchain/langchain/agents/agent.py:1245\u001B[0m, in \u001B[0;36mAgentExecutor._perform_agent_action\u001B[0;34m(self, name_to_tool_map, color_mapping, agent_action, run_manager)\u001B[0m\n\u001B[1;32m 1243\u001B[0m tool_run_kwargs[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mllm_prefix\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m=\u001B[39m \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[1;32m 1244\u001B[0m \u001B[38;5;66;03m# We then call the tool on the tool input to get an observation\u001B[39;00m\n\u001B[0;32m-> 1245\u001B[0m observation \u001B[38;5;241m=\u001B[39m \u001B[43mtool\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mrun\u001B[49m\u001B[43m(\u001B[49m\n\u001B[1;32m 1246\u001B[0m \u001B[43m \u001B[49m\u001B[43magent_action\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mtool_input\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1247\u001B[0m \u001B[43m \u001B[49m\u001B[43mverbose\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mverbose\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1248\u001B[0m \u001B[43m \u001B[49m\u001B[43mcolor\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mcolor\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1249\u001B[0m \u001B[43m \u001B[49m\u001B[43mcallbacks\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mget_child\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mif\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43;01melse\u001B[39;49;00m\u001B[43m \u001B[49m\u001B[38;5;28;43;01mNone\u001B[39;49;00m\u001B[43m,\u001B[49m\n\u001B[1;32m 1250\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mtool_run_kwargs\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1251\u001B[0m \u001B[43m \u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1252\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[1;32m 1253\u001B[0m tool_run_kwargs \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39magent\u001B[38;5;241m.\u001B[39mtool_run_logging_kwargs()\n",
"File \u001B[0;32m~/code/langchain/libs/core/langchain_core/tools.py:422\u001B[0m, in \u001B[0;36mBaseTool.run\u001B[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, **kwargs)\u001B[0m\n\u001B[1;32m 420\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m (\u001B[38;5;167;01mException\u001B[39;00m, \u001B[38;5;167;01mKeyboardInterrupt\u001B[39;00m) \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[1;32m 421\u001B[0m run_manager\u001B[38;5;241m.\u001B[39mon_tool_error(e)\n\u001B[0;32m--> 422\u001B[0m \u001B[38;5;28;01mraise\u001B[39;00m e\n\u001B[1;32m 423\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[1;32m 424\u001B[0m run_manager\u001B[38;5;241m.\u001B[39mon_tool_end(observation, color\u001B[38;5;241m=\u001B[39mcolor, name\u001B[38;5;241m=\u001B[39m\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mname, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs)\n",
"File \u001B[0;32m~/code/langchain/libs/core/langchain_core/tools.py:381\u001B[0m, in \u001B[0;36mBaseTool.run\u001B[0;34m(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, run_id, **kwargs)\u001B[0m\n\u001B[1;32m 378\u001B[0m parsed_input \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_parse_input(tool_input)\n\u001B[1;32m 379\u001B[0m tool_args, tool_kwargs \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_to_args_and_kwargs(parsed_input)\n\u001B[1;32m 380\u001B[0m observation \u001B[38;5;241m=\u001B[39m (\n\u001B[0;32m--> 381\u001B[0m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_run\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mtool_args\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrun_manager\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43mrun_manager\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mtool_kwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 382\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m new_arg_supported\n\u001B[1;32m 383\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_run(\u001B[38;5;241m*\u001B[39mtool_args, \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mtool_kwargs)\n\u001B[1;32m 384\u001B[0m )\n\u001B[1;32m 385\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m ValidationError \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[1;32m 386\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;129;01mnot\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mhandle_validation_error:\n",
"File \u001B[0;32m~/code/langchain/libs/core/langchain_core/tools.py:588\u001B[0m, in \u001B[0;36mTool._run\u001B[0;34m(self, run_manager, *args, **kwargs)\u001B[0m\n\u001B[1;32m 579\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mfunc:\n\u001B[1;32m 580\u001B[0m new_argument_supported \u001B[38;5;241m=\u001B[39m signature(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mfunc)\u001B[38;5;241m.\u001B[39mparameters\u001B[38;5;241m.\u001B[39mget(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mcallbacks\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n\u001B[1;32m 581\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m (\n\u001B[1;32m 582\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mfunc(\n\u001B[1;32m 583\u001B[0m \u001B[38;5;241m*\u001B[39margs,\n\u001B[1;32m 584\u001B[0m callbacks\u001B[38;5;241m=\u001B[39mrun_manager\u001B[38;5;241m.\u001B[39mget_child() \u001B[38;5;28;01mif\u001B[39;00m run_manager \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m,\n\u001B[1;32m 585\u001B[0m \u001B[38;5;241m*\u001B[39m\u001B[38;5;241m*\u001B[39mkwargs,\n\u001B[1;32m 586\u001B[0m )\n\u001B[1;32m 587\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m new_argument_supported\n\u001B[0;32m--> 588\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mfunc\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 589\u001B[0m )\n\u001B[1;32m 590\u001B[0m \u001B[38;5;28;01mraise\u001B[39;00m \u001B[38;5;167;01mNotImplementedError\u001B[39;00m(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mTool does not support sync\u001B[39m\u001B[38;5;124m\"\u001B[39m)\n",
"File \u001B[0;32m~/code/langchain/libs/community/langchain_community/utilities/google_search.py:94\u001B[0m, in \u001B[0;36mGoogleSearchAPIWrapper.run\u001B[0;34m(self, query)\u001B[0m\n\u001B[1;32m 92\u001B[0m \u001B[38;5;250m\u001B[39m\u001B[38;5;124;03m\"\"\"Run query through GoogleSearch and parse result.\"\"\"\u001B[39;00m\n\u001B[1;32m 93\u001B[0m snippets \u001B[38;5;241m=\u001B[39m []\n\u001B[0;32m---> 94\u001B[0m results \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_google_search_results\u001B[49m\u001B[43m(\u001B[49m\u001B[43mquery\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mnum\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mk\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 95\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mlen\u001B[39m(results) \u001B[38;5;241m==\u001B[39m \u001B[38;5;241m0\u001B[39m:\n\u001B[1;32m 96\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mNo good Google Search Result was found\u001B[39m\u001B[38;5;124m\"\u001B[39m\n",
"File \u001B[0;32m~/code/langchain/libs/community/langchain_community/utilities/google_search.py:62\u001B[0m, in \u001B[0;36mGoogleSearchAPIWrapper._google_search_results\u001B[0;34m(self, search_term, **kwargs)\u001B[0m\n\u001B[1;32m 60\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39msiterestrict:\n\u001B[1;32m 61\u001B[0m cse \u001B[38;5;241m=\u001B[39m cse\u001B[38;5;241m.\u001B[39msiterestrict()\n\u001B[0;32m---> 62\u001B[0m res \u001B[38;5;241m=\u001B[39m \u001B[43mcse\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mlist\u001B[49m\u001B[43m(\u001B[49m\u001B[43mq\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[43msearch_term\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mcx\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mgoogle_cse_id\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mexecute\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 63\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m res\u001B[38;5;241m.\u001B[39mget(\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mitems\u001B[39m\u001B[38;5;124m\"\u001B[39m, [])\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/googleapiclient/_helpers.py:130\u001B[0m, in \u001B[0;36mpositional.<locals>.positional_decorator.<locals>.positional_wrapper\u001B[0;34m(*args, **kwargs)\u001B[0m\n\u001B[1;32m 128\u001B[0m \u001B[38;5;28;01melif\u001B[39;00m positional_parameters_enforcement \u001B[38;5;241m==\u001B[39m POSITIONAL_WARNING:\n\u001B[1;32m 129\u001B[0m logger\u001B[38;5;241m.\u001B[39mwarning(message)\n\u001B[0;32m--> 130\u001B[0m \u001B[38;5;28;01mreturn\u001B[39;00m \u001B[43mwrapped\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/googleapiclient/http.py:923\u001B[0m, in \u001B[0;36mHttpRequest.execute\u001B[0;34m(self, http, num_retries)\u001B[0m\n\u001B[1;32m 920\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mheaders[\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124mcontent-length\u001B[39m\u001B[38;5;124m\"\u001B[39m] \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mstr\u001B[39m(\u001B[38;5;28mlen\u001B[39m(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mbody))\n\u001B[1;32m 922\u001B[0m \u001B[38;5;66;03m# Handle retries for server-side errors.\u001B[39;00m\n\u001B[0;32m--> 923\u001B[0m resp, content \u001B[38;5;241m=\u001B[39m \u001B[43m_retry_request\u001B[49m\u001B[43m(\u001B[49m\n\u001B[1;32m 924\u001B[0m \u001B[43m \u001B[49m\u001B[43mhttp\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 925\u001B[0m \u001B[43m \u001B[49m\u001B[43mnum_retries\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 926\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[38;5;124;43mrequest\u001B[39;49m\u001B[38;5;124;43m\"\u001B[39;49m\u001B[43m,\u001B[49m\n\u001B[1;32m 927\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_sleep\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 928\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_rand\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 929\u001B[0m \u001B[43m \u001B[49m\u001B[38;5;28;43mstr\u001B[39;49m\u001B[43m(\u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43muri\u001B[49m\u001B[43m)\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 930\u001B[0m \u001B[43m \u001B[49m\u001B[43mmethod\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mstr\u001B[39;49m\u001B[43m(\u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mmethod\u001B[49m\u001B[43m)\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 931\u001B[0m \u001B[43m \u001B[49m\u001B[43mbody\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mbody\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 932\u001B[0m \u001B[43m \u001B[49m\u001B[43mheaders\u001B[49m\u001B[38;5;241;43m=\u001B[39;49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mheaders\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 933\u001B[0m \u001B[43m\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 935\u001B[0m \u001B[38;5;28;01mfor\u001B[39;00m callback \u001B[38;5;129;01min\u001B[39;00m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mresponse_callbacks:\n\u001B[1;32m 936\u001B[0m callback(resp)\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/googleapiclient/http.py:191\u001B[0m, in \u001B[0;36m_retry_request\u001B[0;34m(http, num_retries, req_type, sleep, rand, uri, method, *args, **kwargs)\u001B[0m\n\u001B[1;32m 189\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m 190\u001B[0m exception \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;01mNone\u001B[39;00m\n\u001B[0;32m--> 191\u001B[0m resp, content \u001B[38;5;241m=\u001B[39m \u001B[43mhttp\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mrequest\u001B[49m\u001B[43m(\u001B[49m\u001B[43muri\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mmethod\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43margs\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[38;5;241;43m*\u001B[39;49m\u001B[43mkwargs\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 192\u001B[0m \u001B[38;5;66;03m# Retry on SSL errors and socket timeout errors.\u001B[39;00m\n\u001B[1;32m 193\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m _ssl_SSLError \u001B[38;5;28;01mas\u001B[39;00m ssl_error:\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/httplib2/__init__.py:1724\u001B[0m, in \u001B[0;36mHttp.request\u001B[0;34m(self, uri, method, body, headers, redirections, connection_type)\u001B[0m\n\u001B[1;32m 1722\u001B[0m content \u001B[38;5;241m=\u001B[39m \u001B[38;5;124mb\u001B[39m\u001B[38;5;124m\"\u001B[39m\u001B[38;5;124m\"\u001B[39m\n\u001B[1;32m 1723\u001B[0m \u001B[38;5;28;01melse\u001B[39;00m:\n\u001B[0;32m-> 1724\u001B[0m (response, content) \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_request\u001B[49m\u001B[43m(\u001B[49m\n\u001B[1;32m 1725\u001B[0m \u001B[43m \u001B[49m\u001B[43mconn\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mauthority\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43muri\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrequest_uri\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mmethod\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mbody\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mheaders\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mredirections\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mcachekey\u001B[49m\u001B[43m,\u001B[49m\n\u001B[1;32m 1726\u001B[0m \u001B[43m \u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1727\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m \u001B[38;5;167;01mException\u001B[39;00m \u001B[38;5;28;01mas\u001B[39;00m e:\n\u001B[1;32m 1728\u001B[0m is_timeout \u001B[38;5;241m=\u001B[39m \u001B[38;5;28misinstance\u001B[39m(e, socket\u001B[38;5;241m.\u001B[39mtimeout)\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/httplib2/__init__.py:1444\u001B[0m, in \u001B[0;36mHttp._request\u001B[0;34m(self, conn, host, absolute_uri, request_uri, method, body, headers, redirections, cachekey)\u001B[0m\n\u001B[1;32m 1441\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m auth:\n\u001B[1;32m 1442\u001B[0m auth\u001B[38;5;241m.\u001B[39mrequest(method, request_uri, headers, body)\n\u001B[0;32m-> 1444\u001B[0m (response, content) \u001B[38;5;241m=\u001B[39m \u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43m_conn_request\u001B[49m\u001B[43m(\u001B[49m\u001B[43mconn\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mrequest_uri\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mmethod\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mbody\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[43mheaders\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1446\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m auth:\n\u001B[1;32m 1447\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m auth\u001B[38;5;241m.\u001B[39mresponse(response, body):\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/httplib2/__init__.py:1366\u001B[0m, in \u001B[0;36mHttp._conn_request\u001B[0;34m(self, conn, request_uri, method, body, headers)\u001B[0m\n\u001B[1;32m 1364\u001B[0m \u001B[38;5;28;01mtry\u001B[39;00m:\n\u001B[1;32m 1365\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m conn\u001B[38;5;241m.\u001B[39msock \u001B[38;5;129;01mis\u001B[39;00m \u001B[38;5;28;01mNone\u001B[39;00m:\n\u001B[0;32m-> 1366\u001B[0m \u001B[43mconn\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mconnect\u001B[49m\u001B[43m(\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1367\u001B[0m conn\u001B[38;5;241m.\u001B[39mrequest(method, request_uri, body, headers)\n\u001B[1;32m 1368\u001B[0m \u001B[38;5;28;01mexcept\u001B[39;00m socket\u001B[38;5;241m.\u001B[39mtimeout:\n",
"File \u001B[0;32m~/code/langchain/.venv/lib/python3.10/site-packages/httplib2/__init__.py:1156\u001B[0m, in \u001B[0;36mHTTPSConnectionWithTimeout.connect\u001B[0;34m(self)\u001B[0m\n\u001B[1;32m 1154\u001B[0m \u001B[38;5;28;01mif\u001B[39;00m has_timeout(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mtimeout):\n\u001B[1;32m 1155\u001B[0m sock\u001B[38;5;241m.\u001B[39msettimeout(\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mtimeout)\n\u001B[0;32m-> 1156\u001B[0m \u001B[43msock\u001B[49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mconnect\u001B[49m\u001B[43m(\u001B[49m\u001B[43m(\u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mhost\u001B[49m\u001B[43m,\u001B[49m\u001B[43m \u001B[49m\u001B[38;5;28;43mself\u001B[39;49m\u001B[38;5;241;43m.\u001B[39;49m\u001B[43mport\u001B[49m\u001B[43m)\u001B[49m\u001B[43m)\u001B[49m\n\u001B[1;32m 1158\u001B[0m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39msock \u001B[38;5;241m=\u001B[39m \u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39m_context\u001B[38;5;241m.\u001B[39mwrap_socket(sock, server_hostname\u001B[38;5;241m=\u001B[39m\u001B[38;5;28mself\u001B[39m\u001B[38;5;241m.\u001B[39mhost)\n\u001B[1;32m 1160\u001B[0m \u001B[38;5;66;03m# Python 3.3 compatibility: emulate the check_hostname behavior\u001B[39;00m\n",
"\u001B[0;31mKeyboardInterrupt\u001B[0m: "
]
}
],
"source": [
"agent_chain.run(input=\"What is ChatGPT?\")"
"agent_executor.invoke({\"input\": \"What is ChatGPT?\"})"
]
},
{
@@ -179,15 +196,15 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out who developed ChatGPT\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I need to find out who developed ChatGPT\n",
"Action: Search\n",
"Action Input: Who developed ChatGPT\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... Feb 15, 2023 ... Who owns Chat GPT? Chat GPT is owned and developed by AI research and deployment company, OpenAI. The organization is headquartered in San ... Feb 8, 2023 ... ChatGPT is an AI chatbot developed by San Francisco-based startup OpenAI. OpenAI was co-founded in 2015 by Elon Musk and Sam Altman and is ... Dec 7, 2022 ... ChatGPT is an AI chatbot designed and developed by OpenAI. The bot works by generating text responses based on human-user input, like questions ... Jan 12, 2023 ... In 2019, Microsoft invested $1 billion in OpenAI, the tiny San Francisco company that designed ChatGPT. And in the years since, it has quietly ... Jan 25, 2023 ... The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft. Dec 3, 2022 ... ChatGPT went viral on social media for its ability to do anything from code to write essays. · The company that created the AI chatbot has a ... Jan 17, 2023 ... While many Americans were nursing hangovers on New Year's Day, 22-year-old Edward Tian was working feverishly on a new app to combat misuse ... ChatGPT is a language model created by OpenAI, an artificial intelligence research laboratory consisting of a team of researchers and engineers focused on ... 1 day ago ... Everyone is talking about ChatGPT, developed by OpenAI. This is such a great tool that has helped to make AI more accessible to a wider ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: ChatGPT was developed by OpenAI.\u001b[0m\n",
"Action Input: Who developed ChatGPT\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3mChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... Feb 15, 2023 ... Who owns Chat GPT? Chat GPT is owned and developed by AI research and deployment company, OpenAI. The organization is headquartered in San ... Feb 8, 2023 ... ChatGPT is an AI chatbot developed by San Francisco-based startup OpenAI. OpenAI was co-founded in 2015 by Elon Musk and Sam Altman and is ... Dec 7, 2022 ... ChatGPT is an AI chatbot designed and developed by OpenAI. The bot works by generating text responses based on human-user input, like questions ... Jan 12, 2023 ... In 2019, Microsoft invested $1 billion in OpenAI, the tiny San Francisco company that designed ChatGPT. And in the years since, it has quietly ... Jan 25, 2023 ... The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft. Dec 3, 2022 ... ChatGPT went viral on social media for its ability to do anything from code to write essays. · The company that created the AI chatbot has a ... Jan 17, 2023 ... While many Americans were nursing hangovers on New Year's Day, 22-year-old Edward Tian was working feverishly on a new app to combat misuse ... ChatGPT is a language model created by OpenAI, an artificial intelligence research laboratory consisting of a team of researchers and engineers focused on ... 1 day ago ... Everyone is talking about ChatGPT, developed by OpenAI. This is such a great tool that has helped to make AI more accessible to a wider ...\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer\n",
"Final Answer: ChatGPT was developed by OpenAI.\u001B[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
@@ -202,7 +219,7 @@
}
],
"source": [
"agent_chain.run(input=\"Who developed it?\")"
"agent_executor.invoke({\"input\": \"Who developed it?\"})"
]
},
{
@@ -217,14 +234,14 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to simplify the conversation for a 5 year old.\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I need to simplify the conversation for a 5 year old.\n",
"Action: Summary\n",
"Action Input: My daughter 5 years old\u001b[0m\n",
"Action Input: My daughter 5 years old\u001B[0m\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThis is a conversation between a human and a bot:\n",
"\u001B[32;1m\u001B[1;3mThis is a conversation between a human and a bot:\n",
"\n",
"Human: What is ChatGPT?\n",
"AI: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\n",
@@ -232,16 +249,16 @@
"AI: ChatGPT was developed by OpenAI.\n",
"\n",
"Write a summary of the conversation for My daughter 5 years old:\n",
"\u001b[0m\n",
"\u001B[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001B[1m> Finished chain.\u001B[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3m\n",
"The conversation was about ChatGPT, an artificial intelligence chatbot. It was created by OpenAI and can send and receive images while chatting.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot created by OpenAI that can send and receive images while chatting.\u001b[0m\n",
"Observation: \u001B[33;1m\u001B[1;3m\n",
"The conversation was about ChatGPT, an artificial intelligence chatbot. It was created by OpenAI and can send and receive images while chatting.\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot created by OpenAI that can send and receive images while chatting.\u001B[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
@@ -256,8 +273,8 @@
}
],
"source": [
"agent_chain.run(\n",
" input=\"Thanks. Summarize the conversation, for my daughter 5 years old.\"\n",
"agent_executor.invoke(\n",
" {\"input\": \"Thanks. Summarize the conversation, for my daughter 5 years old.\"}\n",
")"
]
},
@@ -289,9 +306,17 @@
}
],
"source": [
"print(agent_chain.memory.buffer)"
"print(agent_executor.memory.buffer)"
]
},
{
"cell_type": "markdown",
"id": "84ca95c30e262e00",
"metadata": {
"collapsed": false
},
"source": []
},
{
"cell_type": "markdown",
"id": "cc3d0aa4",
@@ -340,25 +365,9 @@
" ),\n",
"]\n",
"\n",
"prefix = \"\"\"Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:\"\"\"\n",
"suffix = \"\"\"Begin!\"\n",
"\n",
"{chat_history}\n",
"Question: {input}\n",
"{agent_scratchpad}\"\"\"\n",
"\n",
"prompt = ZeroShotAgent.create_prompt(\n",
" tools,\n",
" prefix=prefix,\n",
" suffix=suffix,\n",
" input_variables=[\"input\", \"chat_history\", \"agent_scratchpad\"],\n",
")\n",
"\n",
"llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)\n",
"agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)\n",
"agent_chain = AgentExecutor.from_agent_and_tools(\n",
" agent=agent, tools=tools, verbose=True, memory=memory\n",
")"
"prompt = hub.pull(\"hwchase17/react\")\n",
"agent = create_react_agent(model, tools, prompt)\n",
"agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory)"
]
},
{
@@ -373,15 +382,15 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I should research ChatGPT to answer this question.\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I should research ChatGPT to answer this question.\n",
"Action: Search\n",
"Action Input: \"ChatGPT\"\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mNov 30, 2022 ... We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... ChatGPT. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... Feb 2, 2023 ... ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after ... 2 days ago ... ChatGPT recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how ... An API for accessing new AI models developed by OpenAI. Feb 19, 2023 ... ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You ... ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human ... 3 days ago ... Visual ChatGPT connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting. Dec 1, 2022 ... ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\u001b[0m\n",
"Action Input: \"ChatGPT\"\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3mNov 30, 2022 ... We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... ChatGPT. We've trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer ... Feb 2, 2023 ... ChatGPT, the popular chatbot from OpenAI, is estimated to have reached 100 million monthly active users in January, just two months after ... 2 days ago ... ChatGPT recently launched a new version of its own plagiarism detection tool, with hopes that it will squelch some of the criticism around how ... An API for accessing new AI models developed by OpenAI. Feb 19, 2023 ... ChatGPT is an AI chatbot system that OpenAI released in November to show off and test what a very large, powerful AI system can accomplish. You ... ChatGPT is fine-tuned from GPT-3.5, a language model trained to produce text. ChatGPT was optimized for dialogue by using Reinforcement Learning with Human ... 3 days ago ... Visual ChatGPT connects ChatGPT and a series of Visual Foundation Models to enable sending and receiving images during chatting. Dec 1, 2022 ... ChatGPT is a natural language processing tool driven by AI technology that allows you to have human-like conversations and much more with a ...\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\u001B[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
@@ -396,7 +405,7 @@
}
],
"source": [
"agent_chain.run(input=\"What is ChatGPT?\")"
"agent_executor.invoke({\"input\": \"What is ChatGPT?\"})"
]
},
{
@@ -411,15 +420,15 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to find out who developed ChatGPT\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I need to find out who developed ChatGPT\n",
"Action: Search\n",
"Action Input: Who developed ChatGPT\u001b[0m\n",
"Observation: \u001b[36;1m\u001b[1;3mChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... Feb 15, 2023 ... Who owns Chat GPT? Chat GPT is owned and developed by AI research and deployment company, OpenAI. The organization is headquartered in San ... Feb 8, 2023 ... ChatGPT is an AI chatbot developed by San Francisco-based startup OpenAI. OpenAI was co-founded in 2015 by Elon Musk and Sam Altman and is ... Dec 7, 2022 ... ChatGPT is an AI chatbot designed and developed by OpenAI. The bot works by generating text responses based on human-user input, like questions ... Jan 12, 2023 ... In 2019, Microsoft invested $1 billion in OpenAI, the tiny San Francisco company that designed ChatGPT. And in the years since, it has quietly ... Jan 25, 2023 ... The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft. Dec 3, 2022 ... ChatGPT went viral on social media for its ability to do anything from code to write essays. · The company that created the AI chatbot has a ... Jan 17, 2023 ... While many Americans were nursing hangovers on New Year's Day, 22-year-old Edward Tian was working feverishly on a new app to combat misuse ... ChatGPT is a language model created by OpenAI, an artificial intelligence research laboratory consisting of a team of researchers and engineers focused on ... 1 day ago ... Everyone is talking about ChatGPT, developed by OpenAI. This is such a great tool that has helped to make AI more accessible to a wider ...\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer\n",
"Final Answer: ChatGPT was developed by OpenAI.\u001b[0m\n",
"Action Input: Who developed ChatGPT\u001B[0m\n",
"Observation: \u001B[36;1m\u001B[1;3mChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large ... Feb 15, 2023 ... Who owns Chat GPT? Chat GPT is owned and developed by AI research and deployment company, OpenAI. The organization is headquartered in San ... Feb 8, 2023 ... ChatGPT is an AI chatbot developed by San Francisco-based startup OpenAI. OpenAI was co-founded in 2015 by Elon Musk and Sam Altman and is ... Dec 7, 2022 ... ChatGPT is an AI chatbot designed and developed by OpenAI. The bot works by generating text responses based on human-user input, like questions ... Jan 12, 2023 ... In 2019, Microsoft invested $1 billion in OpenAI, the tiny San Francisco company that designed ChatGPT. And in the years since, it has quietly ... Jan 25, 2023 ... The inside story of ChatGPT: How OpenAI founder Sam Altman built the world's hottest technology with billions from Microsoft. Dec 3, 2022 ... ChatGPT went viral on social media for its ability to do anything from code to write essays. · The company that created the AI chatbot has a ... Jan 17, 2023 ... While many Americans were nursing hangovers on New Year's Day, 22-year-old Edward Tian was working feverishly on a new app to combat misuse ... ChatGPT is a language model created by OpenAI, an artificial intelligence research laboratory consisting of a team of researchers and engineers focused on ... 1 day ago ... Everyone is talking about ChatGPT, developed by OpenAI. This is such a great tool that has helped to make AI more accessible to a wider ...\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer\n",
"Final Answer: ChatGPT was developed by OpenAI.\u001B[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
@@ -434,7 +443,7 @@
}
],
"source": [
"agent_chain.run(input=\"Who developed it?\")"
"agent_executor.invoke({\"input\": \"Who developed it?\"})"
]
},
{
@@ -449,14 +458,14 @@
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3mThought: I need to simplify the conversation for a 5 year old.\n",
"\u001B[1m> Entering new AgentExecutor chain...\u001B[0m\n",
"\u001B[32;1m\u001B[1;3mThought: I need to simplify the conversation for a 5 year old.\n",
"Action: Summary\n",
"Action Input: My daughter 5 years old\u001b[0m\n",
"Action Input: My daughter 5 years old\u001B[0m\n",
"\n",
"\u001b[1m> Entering new LLMChain chain...\u001b[0m\n",
"\u001B[1m> Entering new LLMChain chain...\u001B[0m\n",
"Prompt after formatting:\n",
"\u001b[32;1m\u001b[1;3mThis is a conversation between a human and a bot:\n",
"\u001B[32;1m\u001B[1;3mThis is a conversation between a human and a bot:\n",
"\n",
"Human: What is ChatGPT?\n",
"AI: ChatGPT is an artificial intelligence chatbot developed by OpenAI and launched in November 2022. It is built on top of OpenAI's GPT-3 family of large language models and is optimized for dialogue by using Reinforcement Learning with Human-in-the-Loop. It is also capable of sending and receiving images during chatting.\n",
@@ -464,16 +473,16 @@
"AI: ChatGPT was developed by OpenAI.\n",
"\n",
"Write a summary of the conversation for My daughter 5 years old:\n",
"\u001b[0m\n",
"\u001B[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n",
"\u001B[1m> Finished chain.\u001B[0m\n",
"\n",
"Observation: \u001b[33;1m\u001b[1;3m\n",
"The conversation was about ChatGPT, an artificial intelligence chatbot developed by OpenAI. It is designed to have conversations with humans and can also send and receive images.\u001b[0m\n",
"Thought:\u001b[32;1m\u001b[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI that can have conversations with humans and send and receive images.\u001b[0m\n",
"Observation: \u001B[33;1m\u001B[1;3m\n",
"The conversation was about ChatGPT, an artificial intelligence chatbot developed by OpenAI. It is designed to have conversations with humans and can also send and receive images.\u001B[0m\n",
"Thought:\u001B[32;1m\u001B[1;3m I now know the final answer.\n",
"Final Answer: ChatGPT is an artificial intelligence chatbot developed by OpenAI that can have conversations with humans and send and receive images.\u001B[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
"\u001B[1m> Finished chain.\u001B[0m\n"
]
},
{
@@ -488,8 +497,8 @@
}
],
"source": [
"agent_chain.run(\n",
" input=\"Thanks. Summarize the conversation, for my daughter 5 years old.\"\n",
"agent_executor.invoke(\n",
" {\"input\": \"Thanks. Summarize the conversation, for my daughter 5 years old.\"}\n",
")"
]
},
@@ -524,7 +533,7 @@
}
],
"source": [
"print(agent_chain.memory.buffer)"
"print(agent_executor.memory.buffer)"
]
}
],
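The hunks above migrate this notebook from the legacy `ZeroShotAgent`/`LLMChain` construction and `agent_chain.run(input=...)` calls to `create_react_agent` with `AgentExecutor.invoke({"input": ...})`. As a reference, here is a minimal, self-contained sketch of the new pattern; the Google search tool and the OpenAI model below are assumptions standing in for the notebook's own definitions, and `langchainhub` is assumed to be installed for `hub.pull`.

```python
# Minimal sketch of the migrated agent-with-memory pattern (assumed tool and model).
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain.memory import ConversationBufferMemory
from langchain_community.utilities import GoogleSearchAPIWrapper
from langchain_core.tools import Tool
from langchain_openai import OpenAI

# Requires GOOGLE_API_KEY and GOOGLE_CSE_ID in the environment.
search = GoogleSearchAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for answering questions about current events",
    )
]
memory = ConversationBufferMemory(memory_key="chat_history")

prompt = hub.pull("hwchase17/react")  # the same prompt the diff pulls from the Hub
agent = create_react_agent(OpenAI(temperature=0), tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True)

# The old agent_chain.run(input="...") calls become dict-based invoke calls:
agent_executor.invoke({"input": "What is ChatGPT?"})
agent_executor.invoke({"input": "Who developed it?"})
```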


@@ -647,7 +647,7 @@ Sometimes you may not have the luxury of using OpenAI or other service-hosted la
import logging
import torch
from transformers import AutoTokenizer, GPT2TokenizerFast, pipeline, AutoModelForSeq2SeqLM, AutoModelForCausalLM
from langchain_community.llms import HuggingFacePipeline
from langchain_huggingface import HuggingFacePipeline
# Note: This model requires a large GPU, e.g. an 80GB A100. See documentation for other ways to run private non-OpenAI models.
model_id = "google/flan-ul2"
@@ -992,7 +992,7 @@ Now that you have some examples (with manually corrected output SQL), you can do
```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
from langchain.chains.sql_database.prompt import _sqlite_prompt, PROMPT_SUFFIX
from langchain_community.embeddings.huggingface import HuggingFaceEmbeddings
from langchain_huggingface import HuggingFaceEmbeddings
from langchain.prompts.example_selector.semantic_similarity import SemanticSimilarityExampleSelector
from langchain_community.vectorstores import Chroma
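Both hunks above swap `langchain_community` imports for the `langchain-huggingface` partner package. A minimal sketch of the new imports in use; the model IDs are assumptions chosen to run on modest hardware rather than the `google/flan-ul2` model referenced above, and `langchain-huggingface` is assumed to be installed.

```python
# Illustrative use of the langchain-huggingface partner package adopted above.
from langchain_huggingface import HuggingFaceEmbeddings, HuggingFacePipeline

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
print(len(embeddings.embed_query("hello world")))  # embedding dimensionality

llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-small",  # small stand-in for the flan-ul2 model above
    task="text2text-generation",
)
print(llm.invoke("Translate to German: How old are you?"))
```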


@@ -0,0 +1,199 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 2,
"id": "c48812ed-35bd-4fbe-9a2c-6c7335e5645e",
"metadata": {},
"outputs": [],
"source": [
"from langchain_anthropic import ChatAnthropic\n",
"from langchain_core.runnables import ConfigurableField\n",
"from langchain_core.tools import tool\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"\n",
"@tool\n",
"def multiply(x: float, y: float) -> float:\n",
" \"\"\"Multiply 'x' times 'y'.\"\"\"\n",
" return x * y\n",
"\n",
"\n",
"@tool\n",
"def exponentiate(x: float, y: float) -> float:\n",
" \"\"\"Raise 'x' to the 'y'.\"\"\"\n",
" return x**y\n",
"\n",
"\n",
"@tool\n",
"def add(x: float, y: float) -> float:\n",
" \"\"\"Add 'x' and 'y'.\"\"\"\n",
" return x + y\n",
"\n",
"\n",
"tools = [multiply, exponentiate, add]\n",
"\n",
"gpt35 = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0).bind_tools(tools)\n",
"claude3 = ChatAnthropic(model=\"claude-3-sonnet-20240229\").bind_tools(tools)\n",
"llm_with_tools = gpt35.configurable_alternatives(\n",
" ConfigurableField(id=\"llm\"), default_key=\"gpt35\", claude3=claude3\n",
")"
]
},
{
"cell_type": "markdown",
"id": "9c186263-1b98-4cb2-b6d1-71f65eb0d811",
"metadata": {},
"source": [
"# LangGraph"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "28fc2c60-7dbc-428a-8983-1a6a15ea30d2",
"metadata": {},
"outputs": [],
"source": [
"import operator\n",
"from typing import Annotated, Sequence, TypedDict\n",
"\n",
"from langchain_core.messages import AIMessage, BaseMessage, HumanMessage, ToolMessage\n",
"from langchain_core.runnables import RunnableLambda\n",
"from langgraph.graph import END, StateGraph\n",
"\n",
"\n",
"class AgentState(TypedDict):\n",
" messages: Annotated[Sequence[BaseMessage], operator.add]\n",
"\n",
"\n",
"def should_continue(state):\n",
" return \"continue\" if state[\"messages\"][-1].tool_calls else \"end\"\n",
"\n",
"\n",
"def call_model(state, config):\n",
" return {\"messages\": [llm_with_tools.invoke(state[\"messages\"], config=config)]}\n",
"\n",
"\n",
"def _invoke_tool(tool_call):\n",
" tool = {tool.name: tool for tool in tools}[tool_call[\"name\"]]\n",
" return ToolMessage(tool.invoke(tool_call[\"args\"]), tool_call_id=tool_call[\"id\"])\n",
"\n",
"\n",
"tool_executor = RunnableLambda(_invoke_tool)\n",
"\n",
"\n",
"def call_tools(state):\n",
" last_message = state[\"messages\"][-1]\n",
" return {\"messages\": tool_executor.batch(last_message.tool_calls)}\n",
"\n",
"\n",
"workflow = StateGraph(AgentState)\n",
"workflow.add_node(\"agent\", call_model)\n",
"workflow.add_node(\"action\", call_tools)\n",
"workflow.set_entry_point(\"agent\")\n",
"workflow.add_conditional_edges(\n",
" \"agent\",\n",
" should_continue,\n",
" {\n",
" \"continue\": \"action\",\n",
" \"end\": END,\n",
" },\n",
")\n",
"workflow.add_edge(\"action\", \"agent\")\n",
"graph = workflow.compile()"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3710e724-2595-4625-ba3a-effb81e66e4a",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
" AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc', 'function': {'arguments': '{\"x\": 8, \"y\": 2.743}', 'name': 'exponentiate'}, 'type': 'function'}, {'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp', 'function': {'arguments': '{\"x\": 17.24, \"y\": -918.1241}', 'name': 'add'}, 'type': 'function'}]}, response_metadata={'token_usage': {'completion_tokens': 58, 'prompt_tokens': 168, 'total_tokens': 226}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-528302fc-7acf-4c11-82c4-119ccf40c573-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8, 'y': 2.743}, 'id': 'call_6yMU2WsS4Bqgi1WxFHxtfJRc'}, {'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'call_GAL3dQiKFF9XEV0RrRLPTvVp'}]),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='call_6yMU2WsS4Bqgi1WxFHxtfJRc'),\n",
" ToolMessage(content='-900.8841', tool_call_id='call_GAL3dQiKFF9XEV0RrRLPTvVp'),\n",
" AIMessage(content='The result of \\\\(3 + 5^{2.743}\\\\) is approximately 300.04, and the result of \\\\(17.24 - 918.1241\\\\) is approximately -900.88.', response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 251, 'total_tokens': 295}, 'model_name': 'gpt-3.5-turbo-0125', 'system_fingerprint': 'fp_b28b39ffa8', 'finish_reason': 'stop', 'logprobs': None}, id='run-d1161669-ed09-4b18-94bd-6d8530df5aa8-0')]}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"graph.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(\n",
" \"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"\n",
" )\n",
" ]\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "073c074e-d722-42e0-85ec-c62c079207e4",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'messages': [HumanMessage(content=\"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"),\n",
" AIMessage(content=[{'text': \"Okay, let's break this down into two parts:\", 'type': 'text'}, {'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC', 'input': {'x': 3, 'y': 5}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_01AkLGH8sxMHaH15yewmjwkF', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 450, 'output_tokens': 81}}, id='run-f35bfae8-8ded-4f8a-831b-0940d6ad16b6-0', tool_calls=[{'name': 'add', 'args': {'x': 3, 'y': 5}, 'id': 'toolu_01DEhqcXkXTtzJAiZ7uMBeDC'}]),\n",
" ToolMessage(content='8.0', tool_call_id='toolu_01DEhqcXkXTtzJAiZ7uMBeDC'),\n",
" AIMessage(content=[{'id': 'toolu_013DyMLrvnrto33peAKMGMr1', 'input': {'x': 8.0, 'y': 2.743}, 'name': 'exponentiate', 'type': 'tool_use'}], response_metadata={'id': 'msg_015Fmp8aztwYcce2JDAFfce3', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 545, 'output_tokens': 75}}, id='run-48aaeeeb-a1e5-48fd-a57a-6c3da2907b47-0', tool_calls=[{'name': 'exponentiate', 'args': {'x': 8.0, 'y': 2.743}, 'id': 'toolu_013DyMLrvnrto33peAKMGMr1'}]),\n",
" ToolMessage(content='300.03770462067547', tool_call_id='toolu_013DyMLrvnrto33peAKMGMr1'),\n",
" AIMessage(content=[{'text': 'So 3 plus 5 raised to the 2.743 power is 300.04.\\n\\nFor the second part:', 'type': 'text'}, {'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46', 'input': {'x': 17.24, 'y': -918.1241}, 'name': 'add', 'type': 'tool_use'}], response_metadata={'id': 'msg_015TkhfRBENPib2RWAxkieH6', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'tool_use', 'stop_sequence': None, 'usage': {'input_tokens': 638, 'output_tokens': 105}}, id='run-45fb62e3-d102-4159-881d-241c5dbadeed-0', tool_calls=[{'name': 'add', 'args': {'x': 17.24, 'y': -918.1241}, 'id': 'toolu_01UTmMrGTmLpPrPCF1rShN46'}]),\n",
" ToolMessage(content='-900.8841', tool_call_id='toolu_01UTmMrGTmLpPrPCF1rShN46'),\n",
" AIMessage(content='Therefore, 17.24 - 918.1241 = -900.8841', response_metadata={'id': 'msg_01LgKnRuUcSyADCpxv9tPoYD', 'model': 'claude-3-sonnet-20240229', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 759, 'output_tokens': 24}}, id='run-1008254e-ccd1-497c-8312-9550dd77bd08-0')]}"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"graph.invoke(\n",
" {\n",
" \"messages\": [\n",
" HumanMessage(\n",
" \"what's 3 plus 5 raised to the 2.743. also what's 17.24 - 918.1241\"\n",
" )\n",
" ]\n",
" },\n",
" config={\"configurable\": {\"llm\": \"claude3\"}},\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
}
},
"nbformat": 4,
"nbformat_minor": 5
}


@@ -3811,7 +3811,7 @@
"from langchain.chains import ConversationalRetrievalChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
"model = ChatOpenAI(model_name=\"gpt-3.5-turbo-0613\") # switch to 'gpt-4'\n",
"model = ChatOpenAI(model=\"gpt-3.5-turbo-0613\") # switch to 'gpt-4'\n",
"qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)"
]
},


@@ -84,7 +84,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",
@@ -424,7 +424,7 @@
" DialogueAgentWithTools(\n",
" name=name,\n",
" system_message=SystemMessage(content=system_message),\n",
" model=ChatOpenAI(model_name=\"gpt-4\", temperature=0.2),\n",
" model=ChatOpenAI(model=\"gpt-4\", temperature=0.2),\n",
" tool_names=tools,\n",
" top_k_results=2,\n",
" )\n",


@@ -70,7 +70,7 @@
" Applies the chatmodel to the message history\n",
" and returns the message string\n",
" \"\"\"\n",
" message = self.model(\n",
" message = self.model.invoke(\n",
" [\n",
" self.system_message,\n",
" HumanMessage(content=\"\\n\".join(self.message_history + [self.prefix])),\n",

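The hunks above make two related updates: chat models are now called with `.invoke(...)` instead of being called directly, and `ChatOpenAI` is constructed with `model` rather than the deprecated `model_name` keyword. A minimal sketch of the updated call style; the system and human messages below are illustrative, not taken from the notebooks.

```python
# Sketch of the updated chat-model call style: .invoke(...) plus the `model` kwarg.
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4", temperature=0.2)
message = model.invoke(
    [
        SystemMessage(content="You are a helpful dialogue agent."),
        HumanMessage(content="Introduce yourself in one sentence."),
    ]
)
print(message.content)
```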

@@ -0,0 +1,174 @@
{
"cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Video Captioning\n",
    "This notebook shows how to use VideoCaptioningChain, which is implemented using LangChain's ImageCaptionLoader and AssemblyAI to produce .srt files.\n",
"\n",
"This system autogenerates both subtitles and closed captions from a video URL."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installing Dependencies"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"# !pip install ffmpeg-python\n",
"# !pip install assemblyai\n",
"# !pip install opencv-python\n",
"# !pip install torch\n",
"# !pip install pillow\n",
"# !pip install transformers\n",
"# !pip install langchain"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-30T03:39:14.078232Z",
"start_time": "2023-11-30T03:39:12.534410Z"
}
},
"outputs": [],
"source": [
"import getpass\n",
"\n",
"from langchain.chains.video_captioning import VideoCaptioningChain\n",
"from langchain.chat_models.openai import ChatOpenAI"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setting up API Keys"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"ExecuteTime": {
"end_time": "2023-11-30T03:39:17.423806Z",
"start_time": "2023-11-30T03:39:17.417945Z"
}
},
"outputs": [],
"source": [
"OPENAI_API_KEY = getpass.getpass(\"OpenAI API Key:\")\n",
"\n",
"ASSEMBLYAI_API_KEY = getpass.getpass(\"AssemblyAI API Key:\")"
]
},
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Required parameters:**\n",
    "\n",
    "* llm: The language model this chain will use to get suggestions on how to refine the closed captions\n",
    "* assemblyai_key: The API key for AssemblyAI, used to generate the subtitles\n",
    "\n",
    "**Optional Parameters:**\n",
    "\n",
    "* verbose (Default: True): Sets verbose mode for downstream chain calls\n",
    "* use_logging (Default: True): Log the chain's processes in the run manager\n",
    "* frame_skip (Default: None): Choose how many video frames to skip during processing. Increasing it results in faster execution, but less accurate results. If None, the frame skip is calculated automatically from the framerate. Set this to 0 to sample all frames\n",
    "* image_delta_threshold (Default: 3000000): Set the sensitivity for what the image processor considers a change in scenery in the video, used to delimit closed captions. Higher = less sensitive\n",
    "* closed_caption_char_limit (Default: 20): Sets the character limit on closed captions\n",
    "* closed_caption_similarity_threshold (Default: 80): Sets the percentage value for how similar two closed captions must be in order to be clustered into one longer closed caption\n",
    "* use_unclustered_video_models (Default: False): If true, closed captions that could not be clustered will be included. This may result in erratic behaviour, such as very short-lived or fast-changing captions. Enabling this is experimental and not recommended (an illustrative sketch of passing these optional parameters follows this notebook)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Example run"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# https://ia804703.us.archive.org/27/items/uh-oh-here-we-go-again/Uh-Oh%2C%20Here%20we%20go%20again.mp4\n",
"# https://ia601200.us.archive.org/9/items/f58703d4-61e6-4f8f-8c08-b42c7e16f7cb/f58703d4-61e6-4f8f-8c08-b42c7e16f7cb.mp4\n",
"\n",
"chain = VideoCaptioningChain(\n",
" llm=ChatOpenAI(model=\"gpt-4\", max_tokens=4000, openai_api_key=OPENAI_API_KEY),\n",
" assemblyai_key=ASSEMBLYAI_API_KEY,\n",
")\n",
"\n",
"srt_content = chain.run(\n",
" video_file_path=\"https://ia601200.us.archive.org/9/items/f58703d4-61e6-4f8f-8c08-b42c7e16f7cb/f58703d4-61e6-4f8f-8c08-b42c7e16f7cb.mp4\"\n",
")\n",
"\n",
"print(srt_content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Writing output to .srt file"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"with open(\"output.srt\", \"w\") as file:\n",
" file.write(srt_content)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "myenv",
"language": "python",
"name": "myenv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
},
"vscode": {
"interpreter": {
"hash": "b0fa6594d8f4cbf19f97940f81e996739fb7646882a419484c72d19e05852a7e"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
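For completeness, a hedged sketch of constructing the chain with the optional parameters documented in the notebook above; it assumes these are accepted as constructor keyword arguments, and the values shown are illustrative rather than recommendations.

```python
# Hedged sketch: passing VideoCaptioningChain's optional parameters (assumed kwargs).
import getpass

from langchain.chains.video_captioning import VideoCaptioningChain
from langchain.chat_models.openai import ChatOpenAI

OPENAI_API_KEY = getpass.getpass("OpenAI API Key:")
ASSEMBLYAI_API_KEY = getpass.getpass("AssemblyAI API Key:")

chain = VideoCaptioningChain(
    llm=ChatOpenAI(model="gpt-4", max_tokens=4000, openai_api_key=OPENAI_API_KEY),
    assemblyai_key=ASSEMBLYAI_API_KEY,
    verbose=True,  # default: True
    use_logging=True,  # default: True
    frame_skip=2,  # skip frames for faster, coarser processing (default: None = auto)
    image_delta_threshold=3000000,  # scene-change sensitivity (default shown)
    closed_caption_char_limit=20,  # default character limit per caption
    closed_caption_similarity_threshold=80,  # default clustering similarity (percent)
    use_unclustered_video_models=False,  # keep the experimental option off
)

srt_content = chain.run(
    video_file_path="https://ia601200.us.archive.org/9/items/f58703d4-61e6-4f8f-8c08-b42c7e16f7cb/f58703d4-61e6-4f8f-8c08-b42c7e16f7cb.mp4"
)
print(srt_content)
```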


@@ -601,7 +601,7 @@
"source": [
"from langchain_openai import ChatOpenAI\n",
"\n",
"llm = ChatOpenAI(model_name=\"gpt-4\", temperature=0)"
"llm = ChatOpenAI(model=\"gpt-4\", temperature=0)"
]
},
{


@@ -4,14 +4,14 @@
# ATTENTION: When adding a service below use a non-standard port
# increment by one from the preceding port.
# For credentials always use `langchain` and `langchain` for the
# username and password.
# username and password.
version: "3"
name: langchain-tests
services:
redis:
image: redis/redis-stack-server:latest
# We use non standard ports since
# We use non standard ports since
# these instances are used for testing
# and users may already have existing
# redis instances set up locally
@@ -73,6 +73,11 @@ services:
retries: 60
volumes:
- postgres_data_pgvector:/var/lib/postgresql/data
vdms:
image: intellabs/vdms:latest
container_name: vdms_container
ports:
- "6025:55555"
volumes:
postgres_data:

docs/.gitignore

@@ -1,2 +1,3 @@
/.quarto/
src/supabase.d.ts
build


@@ -1,24 +0,0 @@
#!/usr/bin/env bash
set -o errexit
set -o nounset
set -o pipefail
set -o xtrace
SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)"
cd "${SCRIPT_DIR}"
mkdir -p ../_dist
rsync -ruv --exclude node_modules --exclude api_reference --exclude .venv --exclude .docusaurus . ../_dist
cd ../_dist
poetry run python scripts/model_feat_table.py
cp ../cookbook/README.md src/pages/cookbook.mdx
mkdir -p docs/templates
cp ../templates/docs/INDEX.md docs/templates/index.md
poetry run python scripts/copy_templates.py
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O docs/langserve.md
wget -q https://raw.githubusercontent.com/langchain-ai/langgraph/main/README.md -O docs/langgraph.md
yarn
poetry run quarto preview docs

docs/Makefile

@@ -0,0 +1,92 @@
# we build the docs in these stages:
# 1. install vercel and python dependencies
# 2. copy files from "source dir" to "intermediate dir"
# 3. generate files like model feat table, etc. in "intermediate dir"
# 4. copy files to their right spots (e.g. langserve readme) in "intermediate dir"
# 5. build the docs from "intermediate dir" to "output dir"
SOURCE_DIR = docs/
INTERMEDIATE_DIR = build/intermediate/docs
OUTPUT_NEW_DIR = build/output-new
OUTPUT_NEW_DOCS_DIR = $(OUTPUT_NEW_DIR)/docs
PYTHON = .venv/bin/python
PARTNER_DEPS_LIST := $(shell find ../libs/partners -mindepth 1 -maxdepth 1 -type d -exec test -e "{}/pyproject.toml" \; -print | grep -vE "airbyte|ibm" | tr '\n' ' ')
PORT ?= 3001
clean:
rm -rf build
install-vercel-deps:
yum -y update
yum install gcc bzip2-devel libffi-devel zlib-devel wget tar gzip rsync -y
install-py-deps:
python3 -m venv .venv
$(PYTHON) -m pip install --upgrade pip
$(PYTHON) -m pip install --upgrade uv
$(PYTHON) -m uv pip install -r vercel_requirements.txt
$(PYTHON) -m uv pip install --editable $(PARTNER_DEPS_LIST)
_generate-files-internal:
mkdir -p $(INTERMEDIATE_DIR)
rsync -am --delete $(SOURCE_DIR) $(INTERMEDIATE_DIR)
generate-files: _generate-files-internal
mkdir -p $(INTERMEDIATE_DIR)/templates
$(PYTHON) scripts/model_feat_table.py $(INTERMEDIATE_DIR)
$(PYTHON) scripts/copy_templates.py $(INTERMEDIATE_DIR)
wget -q https://raw.githubusercontent.com/langchain-ai/langserve/main/README.md -O $(INTERMEDIATE_DIR)/langserve.md
$(PYTHON) scripts/resolve_local_links.py $(INTERMEDIATE_DIR)/langserve.md https://github.com/langchain-ai/langserve/tree/main/
copy-infra:
mkdir -p $(OUTPUT_NEW_DIR)
cp -r src $(OUTPUT_NEW_DIR)
cp vercel.json $(OUTPUT_NEW_DIR)
cp babel.config.js $(OUTPUT_NEW_DIR)
cp -r data $(OUTPUT_NEW_DIR)
cp docusaurus.config.js $(OUTPUT_NEW_DIR)
cp package.json $(OUTPUT_NEW_DIR)
cp sidebars.js $(OUTPUT_NEW_DIR)
cp -r static $(OUTPUT_NEW_DIR)
cp yarn.lock $(OUTPUT_NEW_DIR)
render:
$(PYTHON) scripts/notebook_convert.py $(INTERMEDIATE_DIR) $(OUTPUT_NEW_DOCS_DIR)
md-sync:
rsync -am --include="*/" --include="*.mdx" --include="*.md" --include="*.png" --exclude="*" $(INTERMEDIATE_DIR)/ $(OUTPUT_NEW_DOCS_DIR)
generate-references:
$(PYTHON) scripts/generate_api_reference_links.py --docs_dir $(OUTPUT_NEW_DOCS_DIR)
build: install-py-deps generate-files copy-infra render md-sync
generate-watch:
fswatch $(SOURCE_DIR) | xargs -I{} make _generate-files-internal
echo:
echo "hi"
echo x$(SOURCE_PATHS)x
build-watch:
fswatch $(INTERMEDIATE_DIR) | grep -E --line-buffered "(?:.md|.mdx|.ipynb)$$" | xargs -I{} make render md-sync copy-infra SOURCE_PATHS={}
vercel-build: install-vercel-deps build generate-references
rm -rf docs
mv $(OUTPUT_NEW_DOCS_DIR) docs
rm -rf build
yarn run docusaurus build
mv build v0.2
mkdir build
mv v0.2 build
mv build/v0.2/404.html build
start:
cd $(OUTPUT_NEW_DIR) && yarn && yarn start --port=$(PORT)


@@ -1,3 +1,14 @@
# LangChain Documentation
For more information on contributing to our documentation, see the [Documentation Contributing Guide](https://python.langchain.com/docs/contributing/documentation)
## Makefile
The included Makefile coordinates all building of the documentation.
There are 4 main steps to building the documentation:
1. Installing dependencies
2. Copying and downloading files into `build/intermediate`
3. Rendering the files from `build/intermediate` to `build/output-new`
4. Building `build/output-new` in docusaurus


@@ -12,7 +12,8 @@ pre {
}
}
#my-component-root *, #headlessui-portal-root * {
#my-component-root *,
#headlessui-portal-root * {
z-index: 10000;
}


@@ -187,7 +187,7 @@ def _load_package_modules(
modules_by_namespace[top_namespace] = _module_members
except ImportError as e:
print(f"Error: Unable to import module '{namespace}' with error: {e}") # noqa: T201
print(f"Error: Unable to import module '{namespace}' with error: {e}")
return modules_by_namespace
@@ -359,9 +359,14 @@ def main(dirs: Optional[list] = None) -> None:
dirs = [
dir_
for dir_ in os.listdir(ROOT_DIR / "libs")
if dir_ not in ("cli", "partners")
if dir_ not in ("cli", "partners", "standard-tests")
]
dirs += [
dir_
for dir_ in os.listdir(ROOT_DIR / "libs" / "partners")
if os.path.isdir(ROOT_DIR / "libs" / "partners" / dir_)
and "pyproject.toml" in os.listdir(ROOT_DIR / "libs" / "partners" / dir_)
]
dirs += os.listdir(ROOT_DIR / "libs" / "partners")
for dir_ in dirs:
# Skip any hidden directories
# Some of these could be present by mistake in the code base

File diff suppressed because one or more lines are too long


@@ -1398,3 +1398,20 @@ table.sk-sponsor-table td {
.highlight .vi { color: #bb60d5 } /* Name.Variable.Instance */
.highlight .vm { color: #bb60d5 } /* Name.Variable.Magic */
.highlight .il { color: #208050 } /* Literal.Number.Integer.Long */
/** Custom styles overriding certain values */
div.sk-sidebar-toc-wrapper {
width: unset;
overflow-x: auto;
}
div.sk-sidebar-toc-wrapper > [aria-label="rellinks"] {
position: sticky;
left: 0;
}
.navbar-nav .dropdown-menu {
max-height: 80vh;
overflow-y: auto;
}


@@ -1,76 +0,0 @@
/* eslint-disable prefer-template */
/* eslint-disable no-param-reassign */
// eslint-disable-next-line import/no-extraneous-dependencies
const babel = require("@babel/core");
const path = require("path");
const fs = require("fs");
/**
*
* @param {string|Buffer} content Content of the resource file
* @param {object} [map] SourceMap data consumable by https://github.com/mozilla/source-map
* @param {any} [meta] Meta data, could be anything
*/
async function webpackLoader(content, map, meta) {
const cb = this.async();
if (!this.resourcePath.endsWith(".ts")) {
cb(null, JSON.stringify({ content, imports: [] }), map, meta);
return;
}
try {
const result = await babel.parseAsync(content, {
sourceType: "module",
filename: this.resourcePath,
});
const imports = [];
result.program.body.forEach((node) => {
if (node.type === "ImportDeclaration") {
const source = node.source.value;
if (!source.startsWith("langchain")) {
return;
}
node.specifiers.forEach((specifier) => {
if (specifier.type === "ImportSpecifier") {
const local = specifier.local.name;
const imported = specifier.imported.name;
imports.push({ local, imported, source });
} else {
throw new Error("Unsupported import type");
}
});
}
});
imports.forEach((imp) => {
const { imported, source } = imp;
const moduleName = source.split("/").slice(1).join("_");
const docsPath = path.resolve(__dirname, "docs", "api", moduleName);
const available = fs.readdirSync(docsPath, { withFileTypes: true });
const found = available.find(
(dirent) =>
dirent.isDirectory() &&
fs.existsSync(path.resolve(docsPath, dirent.name, imported + ".md"))
);
if (found) {
imp.docs =
"/" + path.join("docs", "api", moduleName, found.name, imported);
} else {
throw new Error(
`Could not find docs for ${source}.${imported} in docs/api/`
);
}
});
cb(null, JSON.stringify({ content, imports }), map, meta);
} catch (err) {
cb(err);
}
}
module.exports = webpackLoader;

File diff suppressed because it is too large


@@ -0,0 +1,873 @@
# arXiv
LangChain implements the latest research in the field of Natural Language Processing.
This page contains `arXiv` papers referenced in the LangChain Documentation, API Reference,
Templates, and Cookbooks.
## Summary
| arXiv id / Title | Authors | Published date 🔻 | LangChain Documentation|
|------------------|---------|-------------------|------------------------|
| `2402.03620v1` [Self-Discover: Large Language Models Self-Compose Reasoning Structures](http://arxiv.org/abs/2402.03620v1) | Pei Zhou, Jay Pujara, Xiang Ren, et al. | 2024-02-06 | `Cookbook:` [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
| `2401.18059v1` [RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval](http://arxiv.org/abs/2401.18059v1) | Parth Sarthi, Salman Abdullah, Aditi Tuli, et al. | 2024-01-31 | `Cookbook:` [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
| `2401.15884v2` [Corrective Retrieval Augmented Generation](http://arxiv.org/abs/2401.15884v2) | Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al. | 2024-01-29 | `Cookbook:` [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
| `2401.04088v1` [Mixtral of Experts](http://arxiv.org/abs/2401.04088v1) | Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al. | 2024-01-08 | `Cookbook:` [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
| `2312.06648v2` [Dense X Retrieval: What Retrieval Granularity Should We Use?](http://arxiv.org/abs/2312.06648v2) | Tong Chen, Hongwei Wang, Sihao Chen, et al. | 2023-12-11 | `Template:` [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval)
| `2311.09210v1` [Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models](http://arxiv.org/abs/2311.09210v1) | Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al. | 2023-11-15 | `Template:` [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki)
| `2310.11511v1` [Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection](http://arxiv.org/abs/2310.11511v1) | Akari Asai, Zeqiu Wu, Yizhong Wang, et al. | 2023-10-17 | `Cookbook:` [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
| `2310.06117v2` [Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models](http://arxiv.org/abs/2310.06117v2) | Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al. | 2023-10-09 | `Template:` [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting), `Cookbook:` [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
| `2307.09288v2` [Llama 2: Open Foundation and Fine-Tuned Chat Models](http://arxiv.org/abs/2307.09288v2) | Hugo Touvron, Louis Martin, Kevin Stone, et al. | 2023-07-18 | `Cookbook:` [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
| `2305.14283v3` [Query Rewriting for Retrieval-Augmented Large Language Models](http://arxiv.org/abs/2305.14283v3) | Xinbei Ma, Yeyun Gong, Pengcheng He, et al. | 2023-05-23 | `Template:` [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read), `Cookbook:` [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
| `2305.08291v1` [Large Language Model Guided Tree-of-Thought](http://arxiv.org/abs/2305.08291v1) | Jieyi Long | 2023-05-15 | `API:` [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot), `Cookbook:` [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
| `2305.04091v3` [Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models](http://arxiv.org/abs/2305.04091v3) | Lei Wang, Wanyu Xu, Yihuai Lan, et al. | 2023-05-06 | `Cookbook:` [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
| `2304.08485v2` [Visual Instruction Tuning](http://arxiv.org/abs/2304.08485v2) | Haotian Liu, Chunyuan Li, Qingyang Wu, et al. | 2023-04-17 | `Cookbook:` [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
| `2304.03442v2` [Generative Agents: Interactive Simulacra of Human Behavior](http://arxiv.org/abs/2304.03442v2) | Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al. | 2023-04-07 | `Cookbook:` [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
| `2303.17760v2` [CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society](http://arxiv.org/abs/2303.17760v2) | Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al. | 2023-03-31 | `Cookbook:` [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
| `2303.17580v4` [HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face](http://arxiv.org/abs/2303.17580v4) | Yongliang Shen, Kaitao Song, Xu Tan, et al. | 2023-03-30 | `API:` [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents), `Cookbook:` [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
| `2303.08774v6` [GPT-4 Technical Report](http://arxiv.org/abs/2303.08774v6) | OpenAI, Josh Achiam, Steven Adler, et al. | 2023-03-15 | `Docs:` [docs/integrations/vectorstores/mongodb_atlas](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas)
| `2301.10226v4` [A Watermark for Large Language Models](http://arxiv.org/abs/2301.10226v4) | John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al. | 2023-01-24 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI)
| `2212.10496v1` [Precise Zero-Shot Dense Retrieval without Relevance Labels](http://arxiv.org/abs/2212.10496v1) | Luyu Gao, Xueguang Ma, Jimmy Lin, et al. | 2022-12-20 | `API:` [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder), `Template:` [hyde](https://python.langchain.com/docs/templates/hyde), `Cookbook:` [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
| `2212.07425v3` [Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments](http://arxiv.org/abs/2212.07425v3) | Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al. | 2022-12-12 | `API:` [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
| `2211.13892v2` [Complementary Explanations for Effective In-Context Learning](http://arxiv.org/abs/2211.13892v2) | Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al. | 2022-11-25 | `API:` [langchain_core.example_selectors...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
| `2211.10435v2` [PAL: Program-aided Language Models](http://arxiv.org/abs/2211.10435v2) | Luyu Gao, Aman Madaan, Shuyan Zhou, et al. | 2022-11-18 | `API:` [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain), `Cookbook:` [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
| `2209.10785v2` [Deep Lake: a Lakehouse for Deep Learning](http://arxiv.org/abs/2209.10785v2) | Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al. | 2022-09-22 | `Docs:` [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
| `2205.12654v1` [Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages](http://arxiv.org/abs/2205.12654v1) | Kevin Heffernan, Onur Çelebi, Holger Schwenk | 2022-05-25 | `API:` [langchain_community.embeddings...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
| `2204.00498v1` [Evaluating the Text-to-SQL Capabilities of Large Language Models](http://arxiv.org/abs/2204.00498v1) | Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau | 2022-03-15 | `API:` [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
| `2202.00666v5` [Locally Typical Sampling](http://arxiv.org/abs/2202.00666v5) | Clara Meister, Tiago Pimentel, Gian Wiher, et al. | 2022-02-01 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `2103.00020v1` [Learning Transferable Visual Models From Natural Language Supervision](http://arxiv.org/abs/2103.00020v1) | Alec Radford, Jong Wook Kim, Chris Hallacy, et al. | 2021-02-26 | `API:` [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
| `1909.05858v2` [CTRL: A Conditional Transformer Language Model for Controllable Generation](http://arxiv.org/abs/1909.05858v2) | Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al. | 2019-09-11 | `API:` [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
| `1908.10084v1` [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](http://arxiv.org/abs/1908.10084v1) | Nils Reimers, Iryna Gurevych | 2019-08-27 | `Docs:` [docs/integrations/text_embedding/sentence_transformers](https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers)
## Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **arXiv id:** 2402.03620v1
- **Title:** Self-Discover: Large Language Models Self-Compose Reasoning Structures
- **Authors:** Pei Zhou, Jay Pujara, Xiang Ren, et al.
- **Published Date:** 2024-02-06
- **URL:** http://arxiv.org/abs/2402.03620v1
- **LangChain:**
- **Cookbook:** [self-discover](https://github.com/langchain-ai/langchain/blob/master/cookbook/self-discover.ipynb)
**Abstract:** We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the
task-intrinsic reasoning structures to tackle complex reasoning problems that
are challenging for typical prompting methods. Core to the framework is a
self-discovery process where LLMs select multiple atomic reasoning modules such
as critical thinking and step-by-step thinking, and compose them into an
explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER
substantially improves GPT-4 and PaLM 2's performance on challenging reasoning
benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as
much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER
outperforms inference-intensive methods such as CoT-Self-Consistency by more
than 20%, while requiring 10-40x less inference compute. Finally, we show that
the self-discovered reasoning structures are universally applicable across
model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share
commonalities with human reasoning patterns.
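The cookbook linked above implements this with LCEL; the following is only a minimal sketch of the select → adapt → implement → solve loop, with simplified prompts and `gpt-4o-mini` chosen arbitrarily as the chat model.
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model choice
parser = StrOutputParser()

select = ChatPromptTemplate.from_template(
    "Select the reasoning modules most useful for this task.\nTask: {task}\nModules: {modules}"
)
adapt = ChatPromptTemplate.from_template(
    "Adapt the selected modules to the task.\nTask: {task}\nSelected modules: {selected}"
)
implement = ChatPromptTemplate.from_template(
    "Turn the adapted modules into a step-by-step reasoning structure.\n{adapted}"
)
solve = ChatPromptTemplate.from_template(
    "Follow the reasoning structure to solve the task.\nStructure: {structure}\nTask: {task}"
)

task = "Lisa has 7 apples and gives away 3. How many apples remain?"
modules = "critical thinking, step-by-step thinking, decomposition"

# SELECT -> ADAPT -> IMPLEMENT produce the reasoning structure; SOLVE then follows it.
selected = (select | llm | parser).invoke({"task": task, "modules": modules})
adapted = (adapt | llm | parser).invoke({"task": task, "selected": selected})
structure = (implement | llm | parser).invoke({"adapted": adapted})
print((solve | llm | parser).invoke({"structure": structure, "task": task}))
```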
## RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **arXiv id:** 2401.18059v1
- **Title:** RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
- **Authors:** Parth Sarthi, Salman Abdullah, Aditi Tuli, et al.
- **Published Date:** 2024-01-31
- **URL:** http://arxiv.org/abs/2401.18059v1
- **LangChain:**
- **Cookbook:** [RAPTOR](https://github.com/langchain-ai/langchain/blob/master/cookbook/RAPTOR.ipynb)
**Abstract:** Retrieval-augmented language models can better adapt to changes in world
state and incorporate long-tail knowledge. However, most existing methods
retrieve only short contiguous chunks from a retrieval corpus, limiting
holistic understanding of the overall document context. We introduce the novel
approach of recursively embedding, clustering, and summarizing chunks of text,
constructing a tree with differing levels of summarization from the bottom up.
At inference time, our RAPTOR model retrieves from this tree, integrating
information across lengthy documents at different levels of abstraction.
Controlled experiments show that retrieval with recursive summaries offers
significant improvements over traditional retrieval-augmented LMs on several
tasks. On question-answering tasks that involve complex, multi-step reasoning,
we show state-of-the-art results; for example, by coupling RAPTOR retrieval
with the use of GPT-4, we can improve the best performance on the QuALITY
benchmark by 20% in absolute accuracy.
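Below is a compact sketch of the embed-cluster-summarize recursion. The paper (and the RAPTOR cookbook) uses UMAP plus Gaussian mixtures for clustering; plain KMeans is substituted here purely for brevity, and the model and embedding choices are placeholders.
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from sklearn.cluster import KMeans

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model choice
embedder = OpenAIEmbeddings()
summarize = (
    ChatPromptTemplate.from_template("Summarize the following passages:\n\n{text}")
    | llm
    | StrOutputParser()
)

def build_tree(chunks: list[str], branching: int = 3) -> list[list[str]]:
    """Return one list of node texts per tree level, leaves first."""
    levels = [chunks]
    while len(levels[-1]) > 1:
        texts = levels[-1]
        n_clusters = max(1, len(texts) // branching)
        # Embed the current level and group similar nodes together.
        labels = KMeans(n_clusters=n_clusters).fit_predict(embedder.embed_documents(texts))
        summaries = [
            summarize.invoke({"text": "\n\n".join(t for t, lab in zip(texts, labels) if lab == c)})
            for c in range(n_clusters)
        ]
        levels.append(summaries)
    return levels

# All levels (leaf chunks plus their summaries) are then indexed together for retrieval.
```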
## Corrective Retrieval Augmented Generation
- **arXiv id:** 2401.15884v2
- **Title:** Corrective Retrieval Augmented Generation
- **Authors:** Shi-Qi Yan, Jia-Chen Gu, Yun Zhu, et al.
- **Published Date:** 2024-01-29
- **URL:** http://arxiv.org/abs/2401.15884v2
- **LangChain:**
- **Cookbook:** [langgraph_crag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_crag.ipynb)
**Abstract:** Large language models (LLMs) inevitably exhibit hallucinations since the
accuracy of generated texts cannot be secured solely by the parametric
knowledge they encapsulate. Although retrieval-augmented generation (RAG) is a
practicable complement to LLMs, it relies heavily on the relevance of retrieved
documents, raising concerns about how the model behaves if retrieval goes
wrong. To this end, we propose the Corrective Retrieval Augmented Generation
(CRAG) to improve the robustness of generation. Specifically, a lightweight
retrieval evaluator is designed to assess the overall quality of retrieved
documents for a query, returning a confidence degree based on which different
knowledge retrieval actions can be triggered. Since retrieval from static and
limited corpora can only return sub-optimal documents, large-scale web searches
are utilized as an extension for augmenting the retrieval results. Besides, a
decompose-then-recompose algorithm is designed for retrieved documents to
selectively focus on key information and filter out irrelevant information in
them. CRAG is plug-and-play and can be seamlessly coupled with various
RAG-based approaches. Experiments on four datasets covering short- and
long-form generation tasks show that CRAG can significantly improve the
performance of RAG-based approaches.
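The langgraph_crag cookbook builds this as a LangGraph state machine; below is only a minimal, linear sketch of the grade-then-correct idea, where `retriever` and `web_search` are placeholders for whatever retrieval and search tools you use.
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model choice
grader = (
    ChatPromptTemplate.from_template(
        "Is this document relevant to the question? Answer yes or no.\n"
        "Question: {question}\nDocument: {document}"
    )
    | llm
    | StrOutputParser()
)

def corrective_retrieve(question: str, retriever, web_search) -> list[str]:
    docs = [d.page_content for d in retriever.invoke(question)]
    relevant = [
        d for d in docs
        if grader.invoke({"question": question, "document": d}).strip().lower().startswith("yes")
    ]
    if not relevant:  # retrieval judged unreliable -> fall back to web search results
        relevant = web_search(question)
    return relevant
```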
## Mixtral of Experts
- **arXiv id:** 2401.04088v1
- **Title:** Mixtral of Experts
- **Authors:** Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, et al.
- **Published Date:** 2024-01-08
- **URL:** http://arxiv.org/abs/2401.04088v1
- **LangChain:**
- **Cookbook:** [together_ai](https://github.com/langchain-ai/langchain/blob/master/cookbook/together_ai.ipynb)
**Abstract:** We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model.
Mixtral has the same architecture as Mistral 7B, with the difference that each
layer is composed of 8 feedforward blocks (i.e. experts). For every token, at
each layer, a router network selects two experts to process the current state
and combine their outputs. Even though each token only sees two experts, the
selected experts can be different at each timestep. As a result, each token has
access to 47B parameters, but only uses 13B active parameters during inference.
Mixtral was trained with a context size of 32k tokens and it outperforms or
matches Llama 2 70B and GPT-3.5 across all evaluated benchmarks. In particular,
Mixtral vastly outperforms Llama 2 70B on mathematics, code generation, and
multilingual benchmarks. We also provide a model fine-tuned to follow
instructions, Mixtral 8x7B - Instruct, that surpasses GPT-3.5 Turbo,
Claude-2.1, Gemini Pro, and Llama 2 70B - chat model on human benchmarks. Both
the base and instruct models are released under the Apache 2.0 license.
## Dense X Retrieval: What Retrieval Granularity Should We Use?
- **arXiv id:** 2312.06648v2
- **Title:** Dense X Retrieval: What Retrieval Granularity Should We Use?
- **Authors:** Tong Chen, Hongwei Wang, Sihao Chen, et al.
- **Published Date:** 2023-12-11
- **URL:** http://arxiv.org/abs/2312.06648v2
- **LangChain:**
- **Template:** [propositional-retrieval](https://python.langchain.com/docs/templates/propositional-retrieval)
**Abstract:** Dense retrieval has become a prominent method to obtain relevant context or
world knowledge in open-domain NLP tasks. When we use a learned dense retriever
on a retrieval corpus at inference time, an often-overlooked design choice is
the retrieval unit in which the corpus is indexed, e.g. document, passage, or
sentence. We discover that the retrieval unit choice significantly impacts the
performance of both retrieval and downstream tasks. Distinct from the typical
approach of using passages or sentences, we introduce a novel retrieval unit,
proposition, for dense retrieval. Propositions are defined as atomic
expressions within text, each encapsulating a distinct factoid and presented in
a concise, self-contained natural language format. We conduct an empirical
comparison of different retrieval granularity. Our results reveal that
proposition-based retrieval significantly outperforms traditional passage or
sentence-based methods in dense retrieval. Moreover, retrieval by proposition
also enhances the performance of downstream QA tasks, since the retrieved texts
are more condensed with question-relevant information, reducing the need for
lengthy input tokens and minimizing the inclusion of extraneous, irrelevant
information.
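Here is a rough sketch of proposition-level indexing with parent-passage lookup, in the spirit of the propositional-retrieval template: proposition generation (normally done by an LLM) is hard-coded, and the vector store, id key, and example text are illustrative choices.
```python
import uuid

from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

passages = ["The Eiffel Tower is in Paris. It was completed in 1889."]
# In the template an LLM decomposes each passage into propositions; hard-coded here.
propositions = [[
    "The Eiffel Tower is located in Paris.",
    "The Eiffel Tower was completed in 1889.",
]]

vectorstore = Chroma(collection_name="propositions", embedding_function=OpenAIEmbeddings())
retriever = MultiVectorRetriever(
    vectorstore=vectorstore, docstore=InMemoryStore(), id_key="doc_id"
)

for passage, props in zip(passages, propositions):
    doc_id = str(uuid.uuid4())
    # Propositions are embedded for search; the full passage is what gets returned.
    retriever.vectorstore.add_documents(
        [Document(page_content=p, metadata={"doc_id": doc_id}) for p in props]
    )
    retriever.docstore.mset([(doc_id, Document(page_content=passage))])

print(retriever.invoke("When was the Eiffel Tower built?"))
```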
## Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models
- **arXiv id:** 2311.09210v1
- **Title:** Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models
- **Authors:** Wenhao Yu, Hongming Zhang, Xiaoman Pan, et al.
- **Published Date:** 2023-11-15
- **URL:** http://arxiv.org/abs/2311.09210v1
- **LangChain:**
- **Template:** [chain-of-note-wiki](https://python.langchain.com/docs/templates/chain-of-note-wiki)
**Abstract:** Retrieval-augmented language models (RALMs) represent a substantial
advancement in the capabilities of large language models, notably in reducing
factual hallucination by leveraging external knowledge sources. However, the
reliability of the retrieved information is not always guaranteed. The
retrieval of irrelevant data can lead to misguided responses, potentially
causing the model to overlook its inherent knowledge, even when it possesses
adequate information to address the query. Moreover, standard RALMs often
struggle to assess whether they possess adequate knowledge, both intrinsic and
retrieved, to provide an accurate answer. In situations where knowledge is
lacking, these systems should ideally respond with "unknown" when the answer is
unattainable. In response to these challenges, we introduce Chain-of-Noting
(CoN), a novel approach aimed at improving the robustness of RALMs in facing
noisy, irrelevant documents and in handling unknown scenarios. The core idea of
CoN is to generate sequential reading notes for retrieved documents, enabling a
thorough evaluation of their relevance to the given question and integrating
this information to formulate the final answer. We employed ChatGPT to create
training data for CoN, which was subsequently trained on an LLaMa-2 7B model.
Our experiments across four open-domain QA benchmarks show that RALMs equipped
with CoN significantly outperform standard RALMs. Notably, CoN achieves an
average improvement of +7.9 in EM score given entirely noisy retrieved
documents and +10.5 in rejection rates for real-time questions that fall
outside the pre-training knowledge scope.
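A minimal sketch of the reading-note idea follows: the model first writes a note assessing each retrieved passage, then answers (or abstains) from the notes. The prompts are simplified relative to the chain-of-note-wiki template, and the model name is a placeholder.
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model choice
parser = StrOutputParser()

note_chain = (
    ChatPromptTemplate.from_template(
        "Write a short note on whether this passage helps answer the question.\n"
        "Question: {question}\nPassage: {passage}"
    ) | llm | parser
)
answer_chain = (
    ChatPromptTemplate.from_template(
        "Using only these notes, answer the question, or reply 'unknown'.\n"
        "Notes:\n{notes}\nQuestion: {question}"
    ) | llm | parser
)

def chain_of_note(question: str, passages: list[str]) -> str:
    # One reading note per retrieved passage, then a final answer grounded in the notes.
    notes = [note_chain.invoke({"question": question, "passage": p}) for p in passages]
    return answer_chain.invoke({"notes": "\n\n".join(notes), "question": question})
```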
## Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **arXiv id:** 2310.11511v1
- **Title:** Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
- **Authors:** Akari Asai, Zeqiu Wu, Yizhong Wang, et al.
- **Published Date:** 2023-10-17
- **URL:** http://arxiv.org/abs/2310.11511v1
- **LangChain:**
- **Cookbook:** [langgraph_self_rag](https://github.com/langchain-ai/langchain/blob/master/cookbook/langgraph_self_rag.ipynb)
**Abstract:** Despite their remarkable capabilities, large language models (LLMs) often
produce responses containing factual inaccuracies due to their sole reliance on
the parametric knowledge they encapsulate. Retrieval-Augmented Generation
(RAG), an ad hoc approach that augments LMs with retrieval of relevant
knowledge, decreases such issues. However, indiscriminately retrieving and
incorporating a fixed number of retrieved passages, regardless of whether
retrieval is necessary, or passages are relevant, diminishes LM versatility or
can lead to unhelpful response generation. We introduce a new framework called
Self-Reflective Retrieval-Augmented Generation (Self-RAG) that enhances an LM's
quality and factuality through retrieval and self-reflection. Our framework
trains a single arbitrary LM that adaptively retrieves passages on-demand, and
generates and reflects on retrieved passages and its own generations using
special tokens, called reflection tokens. Generating reflection tokens makes
the LM controllable during the inference phase, enabling it to tailor its
behavior to diverse task requirements. Experiments show that Self-RAG (7B and
13B parameters) significantly outperforms state-of-the-art LLMs and
retrieval-augmented models on a diverse set of tasks. Specifically, Self-RAG
outperforms ChatGPT and retrieval-augmented Llama2-chat on Open-domain QA,
reasoning and fact verification tasks, and it shows significant gains in
improving factuality and citation accuracy for long-form generations relative
to these models.
## Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
- **arXiv id:** 2310.06117v2
- **Title:** Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
- **Authors:** Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, et al.
- **Published Date:** 2023-10-09
- **URL:** http://arxiv.org/abs/2310.06117v2
- **LangChain:**
- **Template:** [stepback-qa-prompting](https://python.langchain.com/docs/templates/stepback-qa-prompting)
- **Cookbook:** [stepback-qa](https://github.com/langchain-ai/langchain/blob/master/cookbook/stepback-qa.ipynb)
**Abstract:** We present Step-Back Prompting, a simple prompting technique that enables
LLMs to do abstractions to derive high-level concepts and first principles from
instances containing specific details. Using the concepts and principles to
guide reasoning, LLMs significantly improve their abilities in following a
correct reasoning path towards the solution. We conduct experiments of
Step-Back Prompting with PaLM-2L, GPT-4 and Llama2-70B models, and observe
substantial performance gains on various challenging reasoning-intensive tasks
including STEM, Knowledge QA, and Multi-Hop Reasoning. For instance, Step-Back
Prompting improves PaLM-2L performance on MMLU (Physics and Chemistry) by 7%
and 11% respectively, TimeQA by 27%, and MuSiQue by 7%.
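A minimal sketch of step-back prompting along the lines of the stepback-qa-prompting template: the question is first abstracted, and the broader question then guides the final answer. In a full RAG setup both questions would also drive retrieval; the model name is a placeholder.
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model choice
parser = StrOutputParser()

step_back = (
    ChatPromptTemplate.from_template(
        "Rewrite this question as a more generic, higher-level question:\n{question}"
    ) | llm | parser
)
answer = (
    ChatPromptTemplate.from_template(
        "Answer the question. First recall the principles relevant to the broader "
        "question, then apply them.\nBroader question: {step_back_question}\n"
        "Question: {question}"
    ) | llm | parser
)

question = "What happens to the pressure of an ideal gas if temperature doubles at constant volume?"
broader = step_back.invoke({"question": question})
print(answer.invoke({"step_back_question": broader, "question": question}))
```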
## Llama 2: Open Foundation and Fine-Tuned Chat Models
- **arXiv id:** 2307.09288v2
- **Title:** Llama 2: Open Foundation and Fine-Tuned Chat Models
- **Authors:** Hugo Touvron, Louis Martin, Kevin Stone, et al.
- **Published Date:** 2023-07-18
- **URL:** http://arxiv.org/abs/2307.09288v2
- **LangChain:**
- **Cookbook:** [Semi_Structured_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_Structured_RAG.ipynb)
**Abstract:** In this work, we develop and release Llama 2, a collection of pretrained and
fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70
billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for
dialogue use cases. Our models outperform open-source chat models on most
benchmarks we tested, and based on our human evaluations for helpfulness and
safety, may be a suitable substitute for closed-source models. We provide a
detailed description of our approach to fine-tuning and safety improvements of
Llama 2-Chat in order to enable the community to build on our work and
contribute to the responsible development of LLMs.
## Query Rewriting for Retrieval-Augmented Large Language Models
- **arXiv id:** 2305.14283v3
- **Title:** Query Rewriting for Retrieval-Augmented Large Language Models
- **Authors:** Xinbei Ma, Yeyun Gong, Pengcheng He, et al.
- **Published Date:** 2023-05-23
- **URL:** http://arxiv.org/abs/2305.14283v3
- **LangChain:**
- **Template:** [rewrite-retrieve-read](https://python.langchain.com/docs/templates/rewrite-retrieve-read)
- **Cookbook:** [rewrite](https://github.com/langchain-ai/langchain/blob/master/cookbook/rewrite.ipynb)
**Abstract:** Large Language Models (LLMs) play powerful, black-box readers in the
retrieve-then-read pipeline, making remarkable progress in knowledge-intensive
tasks. This work introduces a new framework, Rewrite-Retrieve-Read instead of
the previous retrieve-then-read for the retrieval-augmented LLMs from the
perspective of the query rewriting. Unlike prior studies focusing on adapting
either the retriever or the reader, our approach pays attention to the
adaptation of the search query itself, for there is inevitably a gap between
the input text and the needed knowledge in retrieval. We first prompt an LLM to
generate the query, then use a web search engine to retrieve contexts.
Furthermore, to better align the query to the frozen modules, we propose a
trainable scheme for our pipeline. A small language model is adopted as a
trainable rewriter to cater to the black-box LLM reader. The rewriter is
trained using the feedback of the LLM reader by reinforcement learning.
Evaluation is conducted on downstream tasks, open-domain QA and multiple-choice
QA. Experimental results show consistent performance improvement, indicating
that our framework is effective and scalable, and brings a new framework
for retrieval-augmented LLMs.
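A short sketch of the rewrite-retrieve-read loop: an LLM rewrites the user question into a search query, retrieval runs on the rewritten query, and a reader answers from the results. The `retriever` argument is a placeholder; the template uses a web search tool and adds the RL-trained rewriter described above.
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # placeholder model choice
parser = StrOutputParser()

rewrite = (
    ChatPromptTemplate.from_template(
        "Rewrite this question as a concise web search query:\n{question}"
    ) | llm | parser
)
read = (
    ChatPromptTemplate.from_template(
        "Answer the question using the context.\nContext:\n{context}\nQuestion: {question}"
    ) | llm | parser
)

def rewrite_retrieve_read(question: str, retriever) -> str:
    query = rewrite.invoke({"question": question})          # rewrite
    docs = retriever.invoke(query)                          # retrieve
    context = "\n\n".join(d.page_content for d in docs)
    return read.invoke({"context": context, "question": question})  # read
```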
## Large Language Model Guided Tree-of-Thought
- **arXiv id:** 2305.08291v1
- **Title:** Large Language Model Guided Tree-of-Thought
- **Authors:** Jieyi Long
- **Published Date:** 2023-05-15
- **URL:** http://arxiv.org/abs/2305.08291v1
- **LangChain:**
- **API Reference:** [langchain_experimental.tot](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.tot)
- **Cookbook:** [tree_of_thought](https://github.com/langchain-ai/langchain/blob/master/cookbook/tree_of_thought.ipynb)
**Abstract:** In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel
approach aimed at improving the problem-solving capabilities of auto-regressive
large language models (LLMs). The ToT technique is inspired by the human mind's
approach for solving complex reasoning tasks through trial and error. In this
process, the human mind explores the solution space through a tree-like thought
process, allowing for backtracking when necessary. To implement ToT as a
software system, we augment an LLM with additional modules including a prompter
agent, a checker module, a memory module, and a ToT controller. In order to
solve a given problem, these modules engage in a multi-round conversation with
the LLM. The memory module records the conversation and state history of the
problem solving process, which allows the system to backtrack to the previous
steps of the thought-process and explore other directions from there. To verify
the effectiveness of the proposed technique, we implemented a ToT-based solver
for the Sudoku Puzzle. Experimental results show that the ToT framework can
significantly increase the success rate of Sudoku puzzle solving. Our
implementation of the ToT-based Sudoku solver is available on GitHub:
https://github.com/jieyilong/tree-of-thought-puzzle-solver.
## Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **arXiv id:** 2305.04091v3
- **Title:** Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
- **Authors:** Lei Wang, Wanyu Xu, Yihuai Lan, et al.
- **Published Date:** 2023-05-06
- **URL:** http://arxiv.org/abs/2305.04091v3
- **LangChain:**
- **Cookbook:** [plan_and_execute_agent](https://github.com/langchain-ai/langchain/blob/master/cookbook/plan_and_execute_agent.ipynb)
**Abstract:** Large language models (LLMs) have recently been shown to deliver impressive
performance in various NLP tasks. To tackle multi-step reasoning tasks,
few-shot chain-of-thought (CoT) prompting includes a few manually crafted
step-by-step reasoning demonstrations which enable LLMs to explicitly generate
reasoning steps and improve their reasoning task accuracy. To eliminate the
manual effort, Zero-shot-CoT concatenates the target problem statement with
"Let's think step by step" as an input prompt to LLMs. Despite the success of
Zero-shot-CoT, it still suffers from three pitfalls: calculation errors,
missing-step errors, and semantic misunderstanding errors. To address the
missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of
two components: first, devising a plan to divide the entire task into smaller
subtasks, and then carrying out the subtasks according to the plan. To address
the calculation errors and improve the quality of generated reasoning steps, we
extend PS prompting with more detailed instructions and derive PS+ prompting.
We evaluate our proposed prompting strategy on ten datasets across three
reasoning problems. The experimental results over GPT-3 show that our proposed
zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets
by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought
Prompting, and has comparable performance with 8-shot CoT prompting on the math
reasoning problem. The code can be found at
https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.
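Plan-and-Solve is, at its core, a zero-shot prompt; the sketch below reproduces only the prompting idea (devise a plan, then execute it), whereas the plan_and_execute_agent cookbook goes further and runs the plan with an agent. The model name is a placeholder.
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# The trigger sentence replaces the plain "Let's think step by step" of Zero-shot-CoT.
ps_prompt = ChatPromptTemplate.from_template(
    "Q: {question}\n"
    "A: Let's first understand the problem and devise a plan to solve it. "
    "Then, let's carry out the plan and solve the problem step by step."
)
chain = ps_prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()
print(chain.invoke({"question": "A train travels 120 km in 1.5 hours. What is its average speed?"}))
```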
## Visual Instruction Tuning
- **arXiv id:** 2304.08485v2
- **Title:** Visual Instruction Tuning
- **Authors:** Haotian Liu, Chunyuan Li, Qingyang Wu, et al.
- **Published Date:** 2023-04-17
- **URL:** http://arxiv.org/abs/2304.08485v2
- **LangChain:**
- **Cookbook:** [Semi_structured_and_multi_modal_RAG](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_and_multi_modal_RAG.ipynb), [Semi_structured_multi_modal_RAG_LLaMA2](https://github.com/langchain-ai/langchain/blob/master/cookbook/Semi_structured_multi_modal_RAG_LLaMA2.ipynb)
**Abstract:** Instruction tuning large language models (LLMs) using machine-generated
instruction-following data has improved zero-shot capabilities on new tasks,
but the idea is less explored in the multimodal field. In this paper, we
present the first attempt to use language-only GPT-4 to generate multimodal
language-image instruction-following data. By instruction tuning on such
generated data, we introduce LLaVA: Large Language and Vision Assistant, an
end-to-end trained large multimodal model that connects a vision encoder and
LLM for general-purpose visual and language understanding. Our early experiments
show that LLaVA demonstrates impressive multimodal chat abilities, sometimes
exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and
yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal
instruction-following dataset. When fine-tuned on Science QA, the synergy of
LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make
GPT-4 generated visual instruction tuning data, our model and code base
publicly available.
## Generative Agents: Interactive Simulacra of Human Behavior
- **arXiv id:** 2304.03442v2
- **Title:** Generative Agents: Interactive Simulacra of Human Behavior
- **Authors:** Joon Sung Park, Joseph C. O'Brien, Carrie J. Cai, et al.
- **Published Date:** 2023-04-07
- **URL:** http://arxiv.org/abs/2304.03442v2
- **LangChain:**
- **Cookbook:** [multiagent_bidding](https://github.com/langchain-ai/langchain/blob/master/cookbook/multiagent_bidding.ipynb), [generative_agents_interactive_simulacra_of_human_behavior](https://github.com/langchain-ai/langchain/blob/master/cookbook/generative_agents_interactive_simulacra_of_human_behavior.ipynb)
**Abstract:** Believable proxies of human behavior can empower interactive applications
ranging from immersive environments to rehearsal spaces for interpersonal
communication to prototyping tools. In this paper, we introduce generative
agents--computational software agents that simulate believable human behavior.
Generative agents wake up, cook breakfast, and head to work; artists paint,
while authors write; they form opinions, notice each other, and initiate
conversations; they remember and reflect on days past as they plan the next
day. To enable generative agents, we describe an architecture that extends a
large language model to store a complete record of the agent's experiences
using natural language, synthesize those memories over time into higher-level
reflections, and retrieve them dynamically to plan behavior. We instantiate
generative agents to populate an interactive sandbox environment inspired by
The Sims, where end users can interact with a small town of twenty-five agents
using natural language. In an evaluation, these generative agents produce
believable individual and emergent social behaviors: for example, starting with
only a single user-specified notion that one agent wants to throw a Valentine's
Day party, the agents autonomously spread invitations to the party over the
next two days, make new acquaintances, ask each other out on dates to the
party, and coordinate to show up for the party together at the right time. We
demonstrate through ablation that the components of our agent
architecture--observation, planning, and reflection--each contribute critically
to the believability of agent behavior. By fusing large language models with
computational, interactive agents, this work introduces architectural and
interaction patterns for enabling believable simulations of human behavior.
## CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **arXiv id:** 2303.17760v2
- **Title:** CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
- **Authors:** Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, et al.
- **Published Date:** 2023-03-31
- **URL:** http://arxiv.org/abs/2303.17760v2
- **LangChain:**
- **Cookbook:** [camel_role_playing](https://github.com/langchain-ai/langchain/blob/master/cookbook/camel_role_playing.ipynb)
**Abstract:** The rapid advancement of chat-based language models has led to remarkable
progress in complex task-solving. However, their success heavily relies on
human input to guide the conversation, which can be challenging and
time-consuming. This paper explores the potential of building scalable
techniques to facilitate autonomous cooperation among communicative agents, and
provides insight into their "cognitive" processes. To address the challenges of
achieving autonomous cooperation, we propose a novel communicative agent
framework named role-playing. Our approach involves using inception prompting
to guide chat agents toward task completion while maintaining consistency with
human intentions. We showcase how role-playing can be used to generate
conversational data for studying the behaviors and capabilities of a society of
agents, providing a valuable resource for investigating conversational language
models. In particular, we conduct comprehensive studies on
instruction-following cooperation in multi-agent settings. Our contributions
include introducing a novel communicative agent framework, offering a scalable
approach for studying the cooperative behaviors and capabilities of multi-agent
systems, and open-sourcing our library to support research on communicative
agents and beyond: https://github.com/camel-ai/camel.
## HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face
- **arXiv id:** 2303.17580v4
- **Title:** HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face
- **Authors:** Yongliang Shen, Kaitao Song, Xu Tan, et al.
- **Published Date:** 2023-03-30
- **URL:** http://arxiv.org/abs/2303.17580v4
- **LangChain:**
- **API Reference:** [langchain_experimental.autonomous_agents](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.autonomous_agents)
- **Cookbook:** [hugginggpt](https://github.com/langchain-ai/langchain/blob/master/cookbook/hugginggpt.ipynb)
**Abstract:** Solving complicated AI tasks with different domains and modalities is a key
step toward artificial general intelligence. While there are numerous AI models
available for various domains and modalities, they cannot handle complicated AI
tasks autonomously. Considering large language models (LLMs) have exhibited
exceptional abilities in language understanding, generation, interaction, and
reasoning, we advocate that LLMs could act as a controller to manage existing
AI models to solve complicated AI tasks, with language serving as a generic
interface to empower this. Based on this philosophy, we present HuggingGPT, an
LLM-powered agent that leverages LLMs (e.g., ChatGPT) to connect various AI
models in machine learning communities (e.g., Hugging Face) to solve AI tasks.
Specifically, we use ChatGPT to conduct task planning when receiving a user
request, select models according to their function descriptions available in
Hugging Face, execute each subtask with the selected AI model, and summarize
the response according to the execution results. By leveraging the strong
language capability of ChatGPT and abundant AI models in Hugging Face,
HuggingGPT can tackle a wide range of sophisticated AI tasks spanning different
modalities and domains and achieve impressive results in language, vision,
speech, and other challenging tasks, which paves a new way towards the
realization of artificial general intelligence.
## GPT-4 Technical Report
- **arXiv id:** 2303.08774v6
- **Title:** GPT-4 Technical Report
- **Authors:** OpenAI, Josh Achiam, Steven Adler, et al.
- **Published Date:** 2023-03-15
- **URL:** http://arxiv.org/abs/2303.08774v6
- **LangChain:**
- **Documentation:** [docs/integrations/vectorstores/mongodb_atlas](https://python.langchain.com/docs/integrations/vectorstores/mongodb_atlas)
**Abstract:** We report the development of GPT-4, a large-scale, multimodal model which can
accept image and text inputs and produce text outputs. While less capable than
humans in many real-world scenarios, GPT-4 exhibits human-level performance on
various professional and academic benchmarks, including passing a simulated bar
exam with a score around the top 10% of test takers. GPT-4 is a
Transformer-based model pre-trained to predict the next token in a document.
The post-training alignment process results in improved performance on measures
of factuality and adherence to desired behavior. A core component of this
project was developing infrastructure and optimization methods that behave
predictably across a wide range of scales. This allowed us to accurately
predict some aspects of GPT-4's performance based on models trained with no
more than 1/1,000th the compute of GPT-4.
## A Watermark for Large Language Models
- **arXiv id:** 2301.10226v4
- **Title:** A Watermark for Large Language Models
- **Authors:** John Kirchenbauer, Jonas Geiping, Yuxin Wen, et al.
- **Published Date:** 2023-01-24
- **URL:** http://arxiv.org/abs/2301.10226v4
- **LangChain:**
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...OCIModelDeploymentTGI](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI.html#langchain_community.llms.oci_data_science_model_deployment_endpoint.OCIModelDeploymentTGI)
**Abstract:** Potential harms of large language models can be mitigated by watermarking
model output, i.e., embedding signals into generated text that are invisible to
humans but algorithmically detectable from a short span of tokens. We propose a
watermarking framework for proprietary language models. The watermark can be
embedded with negligible impact on text quality, and can be detected using an
efficient open-source algorithm without access to the language model API or
parameters. The watermark works by selecting a randomized set of "green" tokens
before a word is generated, and then softly promoting use of green tokens
during sampling. We propose a statistical test for detecting the watermark with
interpretable p-values, and derive an information-theoretic framework for
analyzing the sensitivity of the watermark. We test the watermark using a
multi-billion parameter model from the Open Pretrained Transformer (OPT)
family, and discuss robustness and security.
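The following is a toy illustration of the green-list mechanism described in the abstract, not the authors' implementation: the previous token seeds a PRNG that partitions the vocabulary, and the "green" portion of the logits is softly boosted before sampling. All constants are illustrative.
```python
import numpy as np

VOCAB_SIZE, GAMMA, DELTA = 50_000, 0.5, 2.0  # green fraction and logit boost (illustrative)

def watermarked_sample(logits: np.ndarray, prev_token: int, rng_salt: int = 15485863) -> int:
    rng = np.random.default_rng(prev_token * rng_salt)         # green list keyed to previous token
    green = rng.permutation(VOCAB_SIZE)[: int(GAMMA * VOCAB_SIZE)]
    biased = logits.copy()
    biased[green] += DELTA                                      # softly promote green tokens
    probs = np.exp(biased - biased.max())
    probs /= probs.sum()
    return int(np.random.default_rng().choice(VOCAB_SIZE, p=probs))

# A detector without model access re-derives each green list from the preceding token,
# counts how many emitted tokens are green, and applies a statistical test.
```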
## Precise Zero-Shot Dense Retrieval without Relevance Labels
- **arXiv id:** 2212.10496v1
- **Title:** Precise Zero-Shot Dense Retrieval without Relevance Labels
- **Authors:** Luyu Gao, Xueguang Ma, Jimmy Lin, et al.
- **Published Date:** 2022-12-20
- **URL:** http://arxiv.org/abs/2212.10496v1
- **LangChain:**
- **API Reference:** [langchain.chains...HypotheticalDocumentEmbedder](https://api.python.langchain.com/en/latest/chains/langchain.chains.hyde.base.HypotheticalDocumentEmbedder.html#langchain.chains.hyde.base.HypotheticalDocumentEmbedder)
- **Template:** [hyde](https://python.langchain.com/docs/templates/hyde)
- **Cookbook:** [hypothetical_document_embeddings](https://github.com/langchain-ai/langchain/blob/master/cookbook/hypothetical_document_embeddings.ipynb)
**Abstract:** While dense retrieval has been shown effective and efficient across tasks and
languages, it remains difficult to create effective fully zero-shot dense
retrieval systems when no relevance label is available. In this paper, we
recognize the difficulty of zero-shot learning and encoding relevance. Instead,
we propose to pivot through Hypothetical Document Embeddings (HyDE). Given a
query, HyDE first zero-shot instructs an instruction-following language model
(e.g. InstructGPT) to generate a hypothetical document. The document captures
relevance patterns but is unreal and may contain false details. Then, an
unsupervised contrastively learned encoder (e.g. Contriever) encodes the
document into an embedding vector. This vector identifies a neighborhood in the
corpus embedding space, where similar real documents are retrieved based on
vector similarity. This second step grounds the generated document to the actual
corpus, with the encoder's dense bottleneck filtering out the incorrect
details. Our experiments show that HyDE significantly outperforms the
state-of-the-art unsupervised dense retriever Contriever and shows strong
performance comparable to fine-tuned retrievers, across various tasks (e.g. web
search, QA, fact verification) and languages (e.g. sw, ko, ja).
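A short sketch using the HypotheticalDocumentEmbedder referenced above: the LLM drafts a hypothetical answer document, which is embedded in place of the raw query. The model name and prompt key are illustrative choices.
```python
from langchain.chains import HypotheticalDocumentEmbedder
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

hyde = HypotheticalDocumentEmbedder.from_llm(
    llm=ChatOpenAI(model="gpt-4o-mini"),   # placeholder model choice
    base_embeddings=OpenAIEmbeddings(),
    prompt_key="web_search",               # one of the bundled HyDE prompts
)
query_vector = hyde.embed_query("What are the health benefits of green tea?")
# query_vector can now be passed to a vector store's similarity_search_by_vector.
```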
## Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments
- **arXiv id:** 2212.07425v3
- **Title:** Robust and Explainable Identification of Logical Fallacies in Natural Language Arguments
- **Authors:** Zhivar Sourati, Vishnu Priya Prasanna Venkatesh, Darshan Deshpande, et al.
- **Published Date:** 2022-12-12
- **URL:** http://arxiv.org/abs/2212.07425v3
- **LangChain:**
- **API Reference:** [langchain_experimental.fallacy_removal](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.fallacy_removal)
**Abstract:** The spread of misinformation, propaganda, and flawed argumentation has been
amplified in the Internet era. Given the volume of data and the subtlety of
identifying violations of argumentation norms, supporting information analytics
tasks, like content moderation, with trustworthy methods that can identify
logical fallacies is essential. In this paper, we formalize prior theoretical
work on logical fallacies into a comprehensive three-stage evaluation framework
of detection, coarse-grained, and fine-grained classification. We adapt
existing evaluation datasets for each stage of the evaluation. We employ three
families of robust and explainable methods based on prototype reasoning,
instance-based reasoning, and knowledge injection. The methods combine language
models with background knowledge and explainable mechanisms. Moreover, we
address data sparsity with strategies for data augmentation and curriculum
learning. Our three-stage framework natively consolidates prior datasets and
methods from existing tasks, like propaganda detection, serving as an
overarching evaluation testbed. We extensively evaluate these methods on our
datasets, focusing on their robustness and explainability. Our results provide
insight into the strengths and weaknesses of the methods on different
components and fallacy classes, indicating that fallacy identification is a
challenging task that may require specialized forms of reasoning to capture
various classes. We share our open-source code and data on GitHub to support
further work on logical fallacy identification.
## Complementary Explanations for Effective In-Context Learning
- **arXiv id:** 2211.13892v2
- **Title:** Complementary Explanations for Effective In-Context Learning
- **Authors:** Xi Ye, Srinivasan Iyer, Asli Celikyilmaz, et al.
- **Published Date:** 2022-11-25
- **URL:** http://arxiv.org/abs/2211.13892v2
- **LangChain:**
- **API Reference:** [langchain_core.example_selectors...MaxMarginalRelevanceExampleSelector](https://api.python.langchain.com/en/latest/example_selectors/langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector.html#langchain_core.example_selectors.semantic_similarity.MaxMarginalRelevanceExampleSelector)
**Abstract:** Large language models (LLMs) have exhibited remarkable capabilities in
learning from explanations in prompts, but there has been limited understanding
of exactly how these explanations function or why they are effective. This work
aims to better understand the mechanisms by which explanations are used for
in-context learning. We first study the impact of two different factors on the
performance of prompts with explanations: the computation trace (the way the
solution is decomposed) and the natural language used to express the prompt. By
perturbing explanations on three controlled tasks, we show that both factors
contribute to the effectiveness of explanations. We further study how to form
maximally effective sets of explanations for solving a given test query. We
find that LLMs can benefit from the complementarity of the explanation set:
diverse reasoning skills shown by different exemplars can lead to better
performance. Therefore, we propose a maximal marginal relevance-based exemplar
selection approach for constructing exemplar sets that are both relevant as
well as complementary, which successfully improves the in-context learning
performance across three real-world tasks on multiple LLMs.
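A brief sketch of MMR-based exemplar selection with the example selector referenced above: exemplars are chosen to be relevant to the query yet diverse from one another. Chroma and the toy antonym examples are arbitrary choices.
```python
from langchain_community.vectorstores import Chroma
from langchain_core.example_selectors import MaxMarginalRelevanceExampleSelector
from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate
from langchain_openai import OpenAIEmbeddings

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "sunny", "output": "gloomy"},
    {"input": "energetic", "output": "lethargic"},
]
selector = MaxMarginalRelevanceExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), Chroma, k=2  # keep the 2 most relevant yet diverse exemplars
)
prompt = FewShotPromptTemplate(
    example_selector=selector,
    example_prompt=PromptTemplate.from_template("Input: {input}\nOutput: {output}"),
    prefix="Give the antonym of every input.",
    suffix="Input: {adjective}\nOutput:",
    input_variables=["adjective"],
)
print(prompt.format(adjective="windy"))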
## PAL: Program-aided Language Models
- **arXiv id:** 2211.10435v2
- **Title:** PAL: Program-aided Language Models
- **Authors:** Luyu Gao, Aman Madaan, Shuyan Zhou, et al.
- **Published Date:** 2022-11-18
- **URL:** http://arxiv.org/abs/2211.10435v2
- **LangChain:**
- **API Reference:** [langchain_experimental.pal_chain](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.pal_chain), [langchain_experimental.pal_chain...PALChain](https://api.python.langchain.com/en/latest/pal_chain/langchain_experimental.pal_chain.base.PALChain.html#langchain_experimental.pal_chain.base.PALChain)
- **Cookbook:** [program_aided_language_model](https://github.com/langchain-ai/langchain/blob/master/cookbook/program_aided_language_model.ipynb)
**Abstract:** Large language models (LLMs) have recently demonstrated an impressive ability
to perform arithmetic and symbolic reasoning tasks, when provided with a few
examples at test time ("few-shot prompting"). Much of this success can be
attributed to prompting methods such as "chain-of-thought", which employ LLMs
for both understanding the problem description by decomposing it into steps, as
well as solving each step of the problem. While LLMs seem to be adept at this
sort of step-by-step decomposition, LLMs often make logical and arithmetic
mistakes in the solution part, even when the problem is decomposed correctly.
In this paper, we present Program-Aided Language models (PAL): a novel approach
that uses the LLM to read natural language problems and generate programs as
the intermediate reasoning steps, but offloads the solution step to a runtime
such as a Python interpreter. With PAL, decomposing the natural language
problem into runnable steps remains the only learning task for the LLM, while
solving is delegated to the interpreter. We demonstrate this synergy between a
neural LLM and a symbolic interpreter across 13 mathematical, symbolic, and
algorithmic reasoning tasks from BIG-Bench Hard and other benchmarks. In all
these natural language reasoning tasks, generating code using an LLM and
reasoning using a Python interpreter leads to more accurate results than much
larger models. For example, PAL using Codex achieves state-of-the-art few-shot
accuracy on the GSM8K benchmark of math word problems, surpassing PaLM-540B
which uses chain-of-thought by absolute 15% top-1. Our code and data are
publicly available at http://reasonwithpal.com/.
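A short sketch of the PALChain referenced above: the LLM writes a small Python program for the word problem, and a Python runtime produces the final answer. Because generated code is executed, recent versions of langchain_experimental require an explicit opt-in; the model name is a placeholder.
```python
from langchain_experimental.pal_chain import PALChain
from langchain_openai import ChatOpenAI

pal_chain = PALChain.from_math_prompt(
    ChatOpenAI(model="gpt-4o-mini"),
    allow_dangerous_code=True,  # explicit opt-in: the chain runs LLM-generated Python
)
print(pal_chain.invoke({
    "question": "Olivia has $23. She buys five bagels for $3 each. How much money does she have left?"
}))
```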
## Deep Lake: a Lakehouse for Deep Learning
- **arXiv id:** 2209.10785v2
- **Title:** Deep Lake: a Lakehouse for Deep Learning
- **Authors:** Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, et al.
- **Published Date:** 2022-09-22
- **URL:** http://arxiv.org/abs/2209.10785v2
- **LangChain:**
- **Documentation:** [docs/integrations/providers/activeloop_deeplake](https://python.langchain.com/docs/integrations/providers/activeloop_deeplake)
**Abstract:** Traditional data lakes provide critical data infrastructure for analytical
workloads by enabling time travel, running SQL queries, ingesting data with
ACID transactions, and visualizing petabyte-scale datasets on cloud storage.
They allow organizations to break down data silos, unlock data-driven
decision-making, improve operational efficiency, and reduce costs. However, as
deep learning usage increases, traditional data lakes are not well-designed for
applications such as natural language processing (NLP), audio processing,
computer vision, and applications involving non-tabular datasets. This paper
presents Deep Lake, an open-source lakehouse for deep learning applications
developed at Activeloop. Deep Lake maintains the benefits of a vanilla data
lake with one key difference: it stores complex data, such as images, videos,
annotations, as well as tabular data, in the form of tensors and rapidly
streams the data over the network to (a) Tensor Query Language, (b) in-browser
visualization engine, or (c) deep learning frameworks without sacrificing GPU
utilization. Datasets stored in Deep Lake can be accessed from PyTorch,
TensorFlow, JAX, and integrate with numerous MLOps tools.
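A brief sketch of Deep Lake used as a LangChain vector store, as in the integration page linked above; the local dataset path and example document are placeholders.
```python
from langchain_community.vectorstores import DeepLake
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings

docs = [Document(page_content="Deep Lake stores embeddings, images, and text as tensors.")]
# Create (or open) a local dataset and index the documents with OpenAI embeddings.
db = DeepLake.from_documents(docs, OpenAIEmbeddings(), dataset_path="./my_deeplake")
print(db.similarity_search("What does Deep Lake store?", k=1))
```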
## Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages
- **arXiv id:** 2205.12654v1
- **Title:** Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages
- **Authors:** Kevin Heffernan, Onur Çelebi, Holger Schwenk
- **Published Date:** 2022-05-25
- **URL:** http://arxiv.org/abs/2205.12654v1
- **LangChain:**
- **API Reference:** [langchain_community.embeddings...LaserEmbeddings](https://api.python.langchain.com/en/latest/embeddings/langchain_community.embeddings.laser.LaserEmbeddings.html#langchain_community.embeddings.laser.LaserEmbeddings)
**Abstract:** Scaling multilingual representation learning beyond the hundred most frequent
languages is challenging, in particular to cover the long tail of low-resource
languages. A promising approach has been to train one-for-all multilingual
models capable of cross-lingual transfer, but these models often suffer from
insufficient capacity and interference between unrelated languages. Instead, we
move away from this approach and focus on training multiple language (family)
specific representations, but most prominently enable all languages to still be
encoded in the same representational space. To achieve this, we focus on
teacher-student training, allowing all encoders to be mutually compatible for
bitext mining, and enabling fast learning of new languages. We introduce a new
teacher-student training scheme which combines supervised and self-supervised
training, allowing encoders to take advantage of monolingual training data,
which is valuable in the low-resource setting.
Our approach significantly outperforms the original LASER encoder. We study
very low-resource languages and handle 50 African languages, many of which are
not covered by any other model. For these languages, we train sentence
encoders, mine bitexts, and validate the bitexts by training NMT systems.
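A brief sketch of the LaserEmbeddings wrapper referenced above, which wraps Meta's LASER encoders; the `lang` setting shown is an assumption about how the language is selected and may need adjusting for your installed version.
```python
from langchain_community.embeddings import LaserEmbeddings

# `lang` is assumed here to select the language-specific encoder (e.g. "eng_Latn").
embedder = LaserEmbeddings(lang="eng_Latn")
vectors = embedder.embed_documents(["Hello, world!", "Bonjour le monde!"])
print(len(vectors), len(vectors[0]))  # number of vectors and embedding dimensionality
```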
## Evaluating the Text-to-SQL Capabilities of Large Language Models
- **arXiv id:** 2204.00498v1
- **Title:** Evaluating the Text-to-SQL Capabilities of Large Language Models
- **Authors:** Nitarshan Rajkumar, Raymond Li, Dzmitry Bahdanau
- **Published Date:** 2022-03-15
- **URL:** http://arxiv.org/abs/2204.00498v1
- **LangChain:**
- **API Reference:** [langchain_community.utilities...SparkSQL](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.spark_sql.SparkSQL.html#langchain_community.utilities.spark_sql.SparkSQL), [langchain_community.utilities...SQLDatabase](https://api.python.langchain.com/en/latest/utilities/langchain_community.utilities.sql_database.SQLDatabase.html#langchain_community.utilities.sql_database.SQLDatabase)
**Abstract:** We perform an empirical evaluation of Text-to-SQL capabilities of the Codex
language model. We find that, without any finetuning, Codex is a strong
baseline on the Spider benchmark; we also analyze the failure modes of Codex in
this setting. Furthermore, we demonstrate on the GeoQuery and Scholar
benchmarks that a small number of in-domain examples provided in the prompt
enables Codex to perform better than state-of-the-art models finetuned on such
few-shot examples.
## Locally Typical Sampling
- **arXiv id:** 2202.00666v5
- **Title:** Locally Typical Sampling
- **Authors:** Clara Meister, Tiago Pimentel, Gian Wiher, et al.
- **Published Date:** 2022-02-01
- **URL:** http://arxiv.org/abs/2202.00666v5
- **LangChain:**
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
**Abstract:** Today's probabilistic language generators fall short when it comes to
producing coherent and fluent text despite the fact that the underlying models
perform well under standard metrics, e.g., perplexity. This discrepancy has
puzzled the language generation community for the last few years. In this work,
we posit that the abstraction of natural language generation as a discrete
stochastic process--which allows for an information-theoretic analysis--can
provide new insights into the behavior of probabilistic language generators,
e.g., why high-probability texts can be dull or repetitive. Humans use language
as a means of communicating information, aiming to do so in a simultaneously
efficient and error-minimizing manner; in fact, psycholinguistics research
suggests humans choose each word in a string with this subconscious goal in
mind. We formally define the set of strings that meet this criterion: those for
which each word has an information content close to the expected information
content, i.e., the conditional entropy of our model. We then propose a simple
and efficient procedure for enforcing this criterion when generating from
probabilistic models, which we call locally typical sampling. Automatic and
human evaluations show that, in comparison to nucleus and top-k sampling,
locally typical sampling offers competitive performance (in both abstractive
summarization and story generation) in terms of quality while consistently
reducing degenerate repetitions.
## Learning Transferable Visual Models From Natural Language Supervision
- **arXiv id:** 2103.00020v1
- **Title:** Learning Transferable Visual Models From Natural Language Supervision
- **Authors:** Alec Radford, Jong Wook Kim, Chris Hallacy, et al.
- **Published Date:** 2021-02-26
- **URL:** http://arxiv.org/abs/2103.00020v1
- **LangChain:**
- **API Reference:** [langchain_experimental.open_clip](https://api.python.langchain.com/en/latest/experimental_api_reference.html#module-langchain_experimental.open_clip)
**Abstract:** State-of-the-art computer vision systems are trained to predict a fixed set
of predetermined object categories. This restricted form of supervision limits
their generality and usability since additional labeled data is needed to
specify any other visual concept. Learning directly from raw text about images
is a promising alternative which leverages a much broader source of
supervision. We demonstrate that the simple pre-training task of predicting
which caption goes with which image is an efficient and scalable way to learn
SOTA image representations from scratch on a dataset of 400 million (image,
text) pairs collected from the internet. After pre-training, natural language
is used to reference learned visual concepts (or describe new ones) enabling
zero-shot transfer of the model to downstream tasks. We study the performance
of this approach by benchmarking on over 30 different existing computer vision
datasets, spanning tasks such as OCR, action recognition in videos,
geo-localization, and many types of fine-grained object classification. The
model transfers non-trivially to most tasks and is often competitive with a
fully supervised baseline without the need for any dataset specific training.
For instance, we match the accuracy of the original ResNet-50 on ImageNet
zero-shot without needing to use any of the 1.28 million training examples it
was trained on. We release our code and pre-trained model weights at
https://github.com/OpenAI/CLIP.
## CTRL: A Conditional Transformer Language Model for Controllable Generation
- **arXiv id:** 1909.05858v2
- **Title:** CTRL: A Conditional Transformer Language Model for Controllable Generation
- **Authors:** Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, et al.
- **Published Date:** 2019-09-11
- **URL:** http://arxiv.org/abs/1909.05858v2
- **LangChain:**
- **API Reference:** [langchain_community.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_community.llms.huggingface_endpoint.HuggingFaceEndpoint), [langchain_community.llms...HuggingFaceTextGenInference](https://api.python.langchain.com/en/latest/llms/langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html#langchain_community.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference), [langchain_huggingface.llms...HuggingFaceEndpoint](https://api.python.langchain.com/en/latest/llms/langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint.html#langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint)
**Abstract:** Large-scale language models show promising text generation capabilities, but
users cannot easily control particular aspects of the generated text. We
release CTRL, a 1.63 billion-parameter conditional transformer language model,
trained to condition on control codes that govern style, content, and
task-specific behavior. Control codes were derived from structure that
naturally co-occurs with raw text, preserving the advantages of unsupervised
learning while providing more explicit control over text generation. These
codes also allow CTRL to predict which parts of the training data are most
likely given a sequence. This provides a potential method for analyzing large
amounts of data via model-based source attribution. We have released multiple
full-sized, pretrained versions of CTRL at https://github.com/salesforce/ctrl.
## Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
- **arXiv id:** 1908.10084v1
- **Title:** Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
- **Authors:** Nils Reimers, Iryna Gurevych
- **Published Date:** 2019-08-27
- **URL:** http://arxiv.org/abs/1908.10084v1
- **LangChain:**
- **Documentation:** [docs/integrations/text_embedding/sentence_transformers](https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers)
**Abstract:** BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new
state-of-the-art performance on sentence-pair regression tasks like semantic
textual similarity (STS). However, it requires that both sentences are fed into
the network, which causes a massive computational overhead: Finding the most
similar pair in a collection of 10,000 sentences requires about 50 million
inference computations (~65 hours) with BERT. The construction of BERT makes it
unsuitable for semantic similarity search as well as for unsupervised tasks
like clustering.
In this publication, we present Sentence-BERT (SBERT), a modification of the
pretrained BERT network that use siamese and triplet network structures to
derive semantically meaningful sentence embeddings that can be compared using
cosine-similarity. This reduces the effort for finding the most similar pair
from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while
maintaining the accuracy from BERT.
We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning
tasks, where it outperforms other state-of-the-art sentence embeddings methods.

View File

@@ -241,7 +241,6 @@ Dependents stats for `langchain-ai/langchain`
|[alejandro-ao/langchain-ask-pdf](https://github.com/alejandro-ao/langchain-ask-pdf) | 514 |
|[sajjadium/ctf-archives](https://github.com/sajjadium/ctf-archives) | 507 |
|[continuum-llms/chatgpt-memory](https://github.com/continuum-llms/chatgpt-memory) | 502 |
|[llmOS/opencopilot](https://github.com/llmOS/opencopilot) | 495 |
|[steamship-core/steamship-langchain](https://github.com/steamship-core/steamship-langchain) | 494 |
|[mpaepper/content-chatbot](https://github.com/mpaepper/content-chatbot) | 493 |
|[langchain-ai/langchain-aiplugin](https://github.com/langchain-ai/langchain-aiplugin) | 492 |
@@ -455,7 +454,6 @@ Dependents stats for `langchain-ai/langchain`
|[Teahouse-Studios/akari-bot](https://github.com/Teahouse-Studios/akari-bot) | 149 |
|[realminchoi/babyagi-ui](https://github.com/realminchoi/babyagi-ui) | 148 |
|[ssheng/BentoChain](https://github.com/ssheng/BentoChain) | 148 |
|[lmstudio-ai/examples](https://github.com/lmstudio-ai/examples) | 147 |
|[solana-labs/chatgpt-plugin](https://github.com/solana-labs/chatgpt-plugin) | 147 |
|[aurelio-labs/arxiv-bot](https://github.com/aurelio-labs/arxiv-bot) | 147 |
|[Jaseci-Labs/jaseci](https://github.com/Jaseci-Labs/jaseci) | 146 |

View File

@@ -1,14 +1,10 @@
# Tutorials
## Books and Handbooks
- [Generative AI with LangChain](https://www.amazon.com/Generative-AI-LangChain-language-ChatGPT/dp/1835083463/ref=sr_1_1?crid=1GMOMH0G7GLR&keywords=generative+ai+with+langchain&qid=1703247181&sprefix=%2Caps%2C298&sr=8-1) by [Ben Auffarth](https://www.amazon.com/stores/Ben-Auffarth/author/B08JQKSZ7D?ref=ap_rdr&store_ref=ap_rdr&isDramIntegrated=true&shoppingPortalEnabled=true), ©️ 2023 Packt Publishing
- [LangChain AI Handbook](https://www.pinecone.io/learn/langchain/) By **James Briggs** and **Francisco Ingham**
- [LangChain Cheatsheet](https://pub.towardsai.net/langchain-cheatsheet-all-secrets-on-a-single-page-8be26b721cde) by **Ivan Reznikov**
# 3rd Party Tutorials
## Tutorials
### [LangChain v 0.1 by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae0gBSJ9T0w7cu7iJZbH3T31)
### [Build with Langchain - Advanced by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae06tclDATrMYY0idsTdLg9v)
### [LangGraph by LangChain.ai](https://www.youtube.com/playlist?list=PLfaIDFEXuae16n2TWUkKq5PgJ0w6Pkwtg)
### [by Greg Kamradt](https://www.youtube.com/playlist?list=PLqZXAkvF1bPNQER9mLmDbntNfSpzdDIU5)
### [by Sam Witteveen](https://www.youtube.com/playlist?list=PL8motc6AQftk1Bs42EW45kwYbyJ4jOdiZ)
### [by James Briggs](https://www.youtube.com/playlist?list=PLIUOU7oqGTLieV9uTIFMm6_4PXg-hlN6F)
@@ -16,25 +12,26 @@
### [by Mayo Oshin](https://www.youtube.com/@chatwithdata/search?query=langchain)
### [by 1 little Coder](https://www.youtube.com/playlist?list=PLpdmBGJ6ELUK-v0MK-t4wZmVEbxM5xk6L)
## Courses
### Featured courses on Deeplearning.AI
- [LangChain for LLM Application Development](https://www.deeplearning.ai/short-courses/langchain-for-llm-application-development/)
- [LangChain Chat with Your Data](https://www.deeplearning.ai/short-courses/langchain-chat-with-your-data/)
- [Functions, Tools and Agents with LangChain](https://www.deeplearning.ai/short-courses/functions-tools-agents-langchain/)
- [Build LLM Apps with LangChain.js](https://www.deeplearning.ai/short-courses/build-llm-apps-with-langchain-js/)
### Online courses
- [Udemy](https://www.udemy.com/courses/search/?q=langchain)
- [DataCamp](https://www.datacamp.com/courses/developing-llm-applications-with-langchain)
- [Pluralsight](https://www.pluralsight.com/search?q=langchain)
- [Coursera](https://www.coursera.org/search?query=langchain)
- [Maven](https://maven.com/courses?query=langchain)
- [Udacity](https://www.udacity.com/catalog/all/any-price/any-school/any-skill/any-difficulty/any-duration/any-type/relevance/page-1?searchValue=langchain)
- [LinkedIn Learning](https://www.linkedin.com/search/results/learning/?keywords=langchain)
- [edX](https://www.edx.org/search?q=langchain)
- [freeCodeCamp](https://www.youtube.com/@freecodecamp/search?query=langchain)
## Short Tutorials
@@ -43,7 +40,11 @@
- [by Rabbitmetrics](https://youtu.be/aywZrzNaKjs)
- [by Ivan Reznikov](https://medium.com/@ivanreznikov/langchain-101-course-updated-668f7b41d6cb)
## [Documentation: Use cases](/docs/use_cases)
---------------------

View File

@@ -1,137 +1,63 @@
# YouTube videos
⛓ icon marks a new addition [Updated 2024-05-16]
### [Official LangChain YouTube channel](https://www.youtube.com/@LangChain)
### Introduction to LangChain with Harrison Chase, creator of LangChain
- [Building the Future with LLMs, `LangChain`, & `Pinecone`](https://youtu.be/nMniwlGyX-c) by [Pinecone](https://www.youtube.com/@pinecone-io)
- [LangChain and Weaviate with Harrison Chase and Bob van Luijt - Weaviate Podcast #36](https://youtu.be/lhby7Ql7hbk) by [Weaviate • Vector Database](https://www.youtube.com/@Weaviate)
- [LangChain Demo + Q&A with Harrison Chase](https://youtu.be/zaYTXQFR0_s?t=788) by [Full Stack Deep Learning](https://www.youtube.com/@FullStackDeepLearning)
- [LangChain Agents: Build Personal Assistants For Your Data (Q&A with Harrison Chase and Mayo Oshin)](https://youtu.be/gVkF8cwfBLI) by [Chat with data](https://www.youtube.com/@chatwithdata)
### [Tutorials on YouTube](/docs/additional_resources/tutorials/#tutorials)
## Videos (sorted by views)
- [Using `ChatGPT` with YOUR OWN Data. This is magical. (LangChain OpenAI API)](https://youtu.be/9AXP7tCI9PI) by [TechLead](https://www.youtube.com/@TechLead)
- [First look - `ChatGPT` + `WolframAlpha` (`GPT-3.5` and Wolfram|Alpha via LangChain by James Weaver)](https://youtu.be/wYGbY811oMo) by [Dr Alan D. Thompson](https://www.youtube.com/@DrAlanDThompson)
- [LangChain explained - The hottest new Python framework](https://youtu.be/RoR4XJw8wIc) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- [Chatbot with INFINITE MEMORY using `OpenAI` & `Pinecone` - `GPT-3`, `Embeddings`, `ADA`, `Vector DB`, `Semantic`](https://youtu.be/2xNzB7xq8nk) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [LangChain for LLMs is... basically just an Ansible playbook](https://youtu.be/X51N9C-OhlE) by [David Shapiro ~ AI](https://www.youtube.com/@DavidShapiroAutomator)
- [Build your own LLM Apps with LangChain & `GPT-Index`](https://youtu.be/-75p09zFUJY) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [`BabyAGI` - New System of Autonomous AI Agents with LangChain](https://youtu.be/lg3kJvf1kXo) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [Run `BabyAGI` with Langchain Agents (with Python Code)](https://youtu.be/WosPGHPObx8) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- [How to Use Langchain With `Zapier` | Write and Send Email with GPT-3 | OpenAI API Tutorial](https://youtu.be/p9v2-xEa9A0) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [Use Your Locally Stored Files To Get Response From GPT - `OpenAI` | Langchain | Python](https://youtu.be/NC1Ni9KS-rk) by [Shweta Lodha](https://www.youtube.com/@shweta-lodha)
- [`Langchain JS` | How to Use GPT-3, GPT-4 to Reference your own Data | `OpenAI Embeddings` Intro](https://youtu.be/veV2I-NEjaM) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [The easiest way to work with large language models | Learn LangChain in 10min](https://youtu.be/kmbS6FDQh7c) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [4 Autonomous AI Agents: “Westworld” simulation `BabyAGI`, `AutoGPT`, `Camel`, `LangChain`](https://youtu.be/yWbnH6inT_U) by [Sophia Yang](https://www.youtube.com/@SophiaYangDS)
- [AI CAN SEARCH THE INTERNET? Langchain Agents + OpenAI ChatGPT](https://youtu.be/J-GL0htqda8) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Query Your Data with GPT-4 | Embeddings, Vector Databases | Langchain JS Knowledgebase](https://youtu.be/jRnUPUTkZmU) by [StarMorph AI](https://www.youtube.com/@starmorph)
- [`Weaviate` + LangChain for LLM apps presented by Erika Cardenas](https://youtu.be/7AGj4Td5Lgw) by [`Weaviate` • Vector Database](https://www.youtube.com/@Weaviate)
- [Langchain Overview - How to Use Langchain & `ChatGPT`](https://youtu.be/oYVYIq0lOtI) by [Python In Office](https://www.youtube.com/@pythoninoffice6568)
- [LangChain Tutorials](https://www.youtube.com/watch?v=FuqdVNB_8c0&list=PL9V0lbeJ69brU-ojMpU1Y7Ic58Tap0Cw6) by [Edrick](https://www.youtube.com/@edrickdch):
- [LangChain, Chroma DB, OpenAI Beginner Guide | ChatGPT with your PDF](https://youtu.be/FuqdVNB_8c0)
- [LangChain 101: The Complete Beginner's Guide](https://youtu.be/P3MAbZ2eMUI)
- [Custom langchain Agent & Tools with memory. Turn any `Python function` into langchain tool with Gpt 3](https://youtu.be/NIG8lXk0ULg) by [echohive](https://www.youtube.com/@echohive)
- [Building AI LLM Apps with LangChain (and more?) - LIVE STREAM](https://www.youtube.com/live/M-2Cj_2fzWI?feature=share) by [Nicholas Renotte](https://www.youtube.com/@NicholasRenotte)
- [`ChatGPT` with any `YouTube` video using langchain and `chromadb`](https://youtu.be/TQZfB2bzVwU) by [echohive](https://www.youtube.com/@echohive)
- [How to Talk to a `PDF` using LangChain and `ChatGPT`](https://youtu.be/v2i1YDtrIwk) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- [Langchain Document Loaders Part 1: Unstructured Files](https://youtu.be/O5C0wfsen98) by [Merk](https://www.youtube.com/@merksworld)
- [LangChain - Prompt Templates (what all the best prompt engineers use)](https://youtu.be/1aRu8b0XNOQ) by [Nick Daigler](https://www.youtube.com/@nick_daigs)
- [LangChain. Crear aplicaciones Python impulsadas por GPT](https://youtu.be/DkW_rDndts8) by [Jesús Conde](https://www.youtube.com/@0utKast)
- [Easiest Way to Use GPT In Your Products | LangChain Basics Tutorial](https://youtu.be/fLy0VenZyGc) by [Rachel Woods](https://www.youtube.com/@therachelwoods)
- [`BabyAGI` + `GPT-4` Langchain Agent with Internet Access](https://youtu.be/wx1z_hs5P6E) by [tylerwhatsgood](https://www.youtube.com/@tylerwhatsgood)
- [Learning LLM Agents. How does it actually work? LangChain, AutoGPT & OpenAI](https://youtu.be/mb_YAABSplk) by [Arnoldas Kemeklis](https://www.youtube.com/@processusAI)
- [Get Started with LangChain in `Node.js`](https://youtu.be/Wxx1KUWJFv4) by [Developers Digest](https://www.youtube.com/@DevelopersDigest)
- [LangChain + `OpenAI` tutorial: Building a Q&A system w/ own text data](https://youtu.be/DYOU_Z0hAwo) by [Samuel Chan](https://www.youtube.com/@SamuelChan)
- [Langchain + `Zapier` Agent](https://youtu.be/yribLAb-pxA) by [Merk](https://www.youtube.com/@merksworld)
- [Connecting the Internet with `ChatGPT` (LLMs) using Langchain And Answers Your Questions](https://youtu.be/9Y0TBC63yZg) by [Kamalraj M M](https://www.youtube.com/@insightbuilder)
- [Build More Powerful LLM Applications for Businesss with LangChain (Beginners Guide)](https://youtu.be/sp3-WLKEcBg) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- [LangFlow LLM Agent Demo for 🦜🔗LangChain](https://youtu.be/zJxDHaWt-6o) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
- [Chatbot Factory: Streamline Python Chatbot Creation with LLMs and Langchain](https://youtu.be/eYer3uzrcuM) by [Finxter](https://www.youtube.com/@CobusGreylingZA)
- [LangChain Tutorial - ChatGPT mit eigenen Daten](https://youtu.be/0XDLyY90E2c) by [Coding Crashkurse](https://www.youtube.com/@codingcrashkurse6429)
- [Chat with a `CSV` | LangChain Agents Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [GoDataProf](https://www.youtube.com/@godataprof)
- [Introdução ao Langchain - #Cortes - Live DataHackers](https://youtu.be/fw8y5VRei5Y) by [Prof. João Gabriel Lima](https://www.youtube.com/@profjoaogabriellima)
- [LangChain: Level up `ChatGPT` !? | LangChain Tutorial Part 1](https://youtu.be/vxUGx8aZpDE) by [Code Affinity](https://www.youtube.com/@codeaffinitydev)
- [KI schreibt krasses Youtube Skript 😲😳 | LangChain Tutorial Deutsch](https://youtu.be/QpTiXyK1jus) by [SimpleKI](https://www.youtube.com/@simpleki)
- [Chat with Audio: Langchain, `Chroma DB`, OpenAI, and `Assembly AI`](https://youtu.be/Kjy7cx1r75g) by [AI Anytime](https://www.youtube.com/@AIAnytime)
- [QA over documents with Auto vector index selection with Langchain router chains](https://youtu.be/9G05qybShv8) by [echohive](https://www.youtube.com/@echohive)
- [Build your own custom LLM application with `Bubble.io` & Langchain (No Code & Beginner friendly)](https://youtu.be/O7NhQGu1m6c) by [No Code Blackbox](https://www.youtube.com/@nocodeblackbox)
- [Simple App to Question Your Docs: Leveraging `Streamlit`, `Hugging Face Spaces`, LangChain, and `Claude`!](https://youtu.be/X4YbNECRr7o) by [Chris Alexiuk](https://www.youtube.com/@chrisalexiuk)
- [LANGCHAIN AI- `ConstitutionalChainAI` + Databutton AI ASSISTANT Web App](https://youtu.be/5zIU6_rdJCU) by [Avra](https://www.youtube.com/@Avra_b)
- [LANGCHAIN AI AUTONOMOUS AGENT WEB APP - 👶 `BABY AGI` 🤖 with EMAIL AUTOMATION using `DATABUTTON`](https://youtu.be/cvAwOGfeHgw) by [Avra](https://www.youtube.com/@Avra_b)
- [The Future of Data Analysis: Using A.I. Models in Data Analysis (LangChain)](https://youtu.be/v_LIcVyg5dk) by [Absent Data](https://www.youtube.com/@absentdata)
- [Memory in LangChain | Deep dive (python)](https://youtu.be/70lqvTFh_Yg) by [Eden Marco](https://www.youtube.com/@EdenMarco)
- [9 LangChain UseCases | Beginner's Guide | 2023](https://youtu.be/zS8_qosHNMw) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- [Use Large Language Models in Jupyter Notebook | LangChain | Agents & Indexes](https://youtu.be/JSe11L1a_QQ) by [Abhinaw Tiwari](https://www.youtube.com/@AbhinawTiwariAT)
- [How to Talk to Your Langchain Agent | `11 Labs` + `Whisper`](https://youtu.be/N4k459Zw2PU) by [VRSEN](https://www.youtube.com/@vrsen)
- [LangChain Deep Dive: 5 FUN AI App Ideas To Build Quickly and Easily](https://youtu.be/mPYEPzLkeks) by [James NoCode](https://www.youtube.com/@jamesnocode)
- [LangChain 101: Models](https://youtu.be/T6c_XsyaNSQ) by [Mckay Wrigley](https://www.youtube.com/@realmckaywrigley)
- [LangChain with JavaScript Tutorial #1 | Setup & Using LLMs](https://youtu.be/W3AoeMrg27o) by [Leon van Zyl](https://www.youtube.com/@leonvanzyl)
- [LangChain Overview & Tutorial for Beginners: Build Powerful AI Apps Quickly & Easily (ZERO CODE)](https://youtu.be/iI84yym473Q) by [James NoCode](https://www.youtube.com/@jamesnocode)
- [LangChain In Action: Real-World Use Case With Step-by-Step Tutorial](https://youtu.be/UO699Szp82M) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- [Summarizing and Querying Multiple Papers with LangChain](https://youtu.be/p_MQRWH5Y6k) by [Automata Learning Lab](https://www.youtube.com/@automatalearninglab)
- [Using Langchain (and `Replit`) through `Tana`, ask `Google`/`Wikipedia`/`Wolfram Alpha` to fill out a table](https://youtu.be/Webau9lEzoI) by [Stian Håklev](https://www.youtube.com/@StianHaklev)
- [Langchain PDF App (GUI) | Create a ChatGPT For Your `PDF` in Python](https://youtu.be/wUAUdEw5oxM) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- [Auto-GPT with LangChain 🔥 | Create Your Own Personal AI Assistant](https://youtu.be/imDfPmMKEjM) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- [Create Your OWN Slack AI Assistant with Python & LangChain](https://youtu.be/3jFXRNn2Bu8) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
- [How to Create LOCAL Chatbots with GPT4All and LangChain [Full Guide]](https://youtu.be/4p1Fojur8Zw) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
- [Build a `Multilingual PDF` Search App with LangChain, `Cohere` and `Bubble`](https://youtu.be/hOrtuumOrv8) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- [Building a LangChain Agent (code-free!) Using `Bubble` and `Flowise`](https://youtu.be/jDJIIVWTZDE) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- [Build a LangChain-based Semantic PDF Search App with No-Code Tools Bubble and Flowise](https://youtu.be/s33v5cIeqA4) by [Menlo Park Lab](https://www.youtube.com/@menloparklab)
- [LangChain Memory Tutorial | Building a ChatGPT Clone in Python](https://youtu.be/Cwq91cj2Pnc) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- [ChatGPT For Your DATA | Chat with Multiple Documents Using LangChain](https://youtu.be/TeDgIDqQmzs) by [Data Science Basics](https://www.youtube.com/@datasciencebasics)
- [`Llama Index`: Chat with Documentation using URL Loader](https://youtu.be/XJRoDEctAwA) by [Merk](https://www.youtube.com/@merksworld)
- [Using OpenAI, LangChain, and `Gradio` to Build Custom GenAI Applications](https://youtu.be/1MsmqMg3yUc) by [David Hundley](https://www.youtube.com/@dkhundley)
- [Build AI chatbot with custom knowledge base using OpenAI API and GPT Index](https://youtu.be/vDZAZuaXf48) by [Irina Nik](https://www.youtube.com/@irina_nik)
- [Build Your Own Auto-GPT Apps with LangChain (Python Tutorial)](https://youtu.be/NYSWn1ipbgg) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
- [Chat with Multiple `PDFs` | LangChain App Tutorial in Python (Free LLMs and Embeddings)](https://youtu.be/dXxQ0LR-3Hg) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- [Chat with a `CSV` | `LangChain Agents` Tutorial (Beginners)](https://youtu.be/tjeti5vXWOU) by [Alejandro AO - Software & Ai](https://www.youtube.com/@alejandro_ao)
- [Create Your Own ChatGPT with `PDF` Data in 5 Minutes (LangChain Tutorial)](https://youtu.be/au2WVVGUvc8) by [Liam Ottley](https://www.youtube.com/@LiamOttley)
- [Build a Custom Chatbot with OpenAI: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU) by [Fabrikod](https://www.youtube.com/@fabrikod)
- [`Flowise` is an open-source no-code UI visual tool to build 🦜🔗LangChain applications](https://youtu.be/CovAPtQPU0k) by [Cobus Greyling](https://www.youtube.com/@CobusGreylingZA)
- [LangChain & GPT 4 For Data Analysis: The `Pandas` Dataframe Agent](https://youtu.be/rFQ5Kmkd4jc) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw) by [Toolfinder AI](https://www.youtube.com/@toolfinderai)
- [How to build with Langchain 10x easier | ⛓️ LangFlow & `Flowise`](https://youtu.be/Ya1oGL7ZTvU) by [AI Jason](https://www.youtube.com/@AIJasonZ)
- [Getting Started With LangChain In 20 Minutes- Build Celebrity Search Application](https://youtu.be/_FpT1cwcSLg) by [Krish Naik](https://www.youtube.com/@krishnaik06)
- ⛓ [Vector Embeddings Tutorial Code Your Own AI Assistant with `GPT-4 API` + LangChain + NLP](https://youtu.be/yfHHvmaMkcA?si=5uJhxoh2tvdnOXok) by [FreeCodeCamp.org](https://www.youtube.com/@freecodecamp)
- ⛓ [Fully LOCAL `Llama 2` Q&A with LangChain](https://youtu.be/wgYctKFnQ74?si=UX1F3W-B3MqF4-K-) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- ⛓ [Fully LOCAL `Llama 2` Langchain on CPU](https://youtu.be/yhECvKMu8kM?si=IvjxwlA1c09VwHZ4) by [1littlecoder](https://www.youtube.com/@1littlecoder)
- ⛓ [Build LangChain Audio Apps with Python in 5 Minutes](https://youtu.be/7w7ysaDz2W4?si=BvdMiyHhormr2-vr) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- ⛓ [`Voiceflow` & `Flowise`: Want to Beat Competition? New Tutorial with Real AI Chatbot](https://youtu.be/EZKkmeFwag0?si=-4dETYDHEstiK_bb) by [AI SIMP](https://www.youtube.com/@aisimp)
- ⛓ [THIS Is How You Build Production-Ready AI Apps (`LangSmith` Tutorial)](https://youtu.be/tFXm5ijih98?si=lfiqpyaivxHFyI94) by [Dave Ebbelaar](https://www.youtube.com/@daveebbelaar)
- ⛓ [Build POWERFUL LLM Bots EASILY with Your Own Data - `Embedchain` - Langchain 2.0? (Tutorial)](https://youtu.be/jE24Y_GasE8?si=0yEDZt3BK5Q-LIuF) by [WorldofAI](https://www.youtube.com/@intheworldofai)
- ⛓ [`Code Llama` powered Gradio App for Coding: Runs on CPU](https://youtu.be/AJOhV6Ryy5o?si=ouuQT6IghYlc1NEJ) by [AI Anytime](https://www.youtube.com/@AIAnytime)
- ⛓ [LangChain Complete Course in One Video | Develop LangChain (AI) Based Solutions for Your Business](https://youtu.be/j9mQd-MyIg8?si=_wlNT3nP2LpDKztZ) by [UBprogrammer](https://www.youtube.com/@UBprogrammer)
- ⛓ [How to Run `LLaMA` Locally on CPU or GPU | Python & Langchain & CTransformers Guide](https://youtu.be/SvjWDX2NqiM?si=DxFml8XeGhiLTzLV) by [Code With Prince](https://www.youtube.com/@CodeWithPrince)
- ⛓ [PyData Heidelberg #11 - TimeSeries Forecasting & LLM Langchain](https://www.youtube.com/live/Glbwb5Hxu18?si=PIEY8Raq_C9PCHuW) by [PyData](https://www.youtube.com/@PyDataTV)
- ⛓ [Prompt Engineering in Web Development | Using LangChain and Templates with OpenAI](https://youtu.be/pK6WzlTOlYw?si=fkcDQsBG2h-DM8uQ) by [Akamai Developer](https://www.youtube.com/@AkamaiDeveloper)
- ⛓ [Retrieval-Augmented Generation (RAG) using LangChain and `Pinecone` - The RAG Special Episode](https://youtu.be/J_tCD_J6w3s?si=60Mnr5VD9UED9bGG) by [Generative AI and Data Science On AWS](https://www.youtube.com/@GenerativeAIDataScienceOnAWS)
- ⛓ [`LLAMA2 70b-chat` Multiple Documents Chatbot with Langchain & Streamlit |All OPEN SOURCE|Replicate API](https://youtu.be/vhghB81vViM?si=dszzJnArMeac7lyc) by [DataInsightEdge](https://www.youtube.com/@DataInsightEdge01)
- ⛓ [Chatting with 44K Fashion Products: LangChain Opportunities and Pitfalls](https://youtu.be/Zudgske0F_s?si=8HSshHoEhh0PemJA) by [Rabbitmetrics](https://www.youtube.com/@rabbitmetrics)
- ⛓ [Structured Data Extraction from `ChatGPT` with LangChain](https://youtu.be/q1lYg8JISpQ?si=0HctzOHYZvq62sve) by [MG](https://www.youtube.com/@MG_cafe)
- ⛓ [Chat with Multiple PDFs using `Llama 2`, `Pinecone` and LangChain (Free LLMs and Embeddings)](https://youtu.be/TcJ_tVSGS4g?si=FZYnMDJyoFfL3Z2i) by [Muhammad Moin](https://www.youtube.com/@muhammadmoinfaisal)
- ⛓ [Integrate Audio into `LangChain.js` apps in 5 Minutes](https://youtu.be/hNpUSaYZIzs?si=Gb9h7W9A8lzfvFKi) by [AssemblyAI](https://www.youtube.com/@AssemblyAI)
- ⛓ [`ChatGPT` for your data with Local LLM](https://youtu.be/bWrjpwhHEMU?si=uM6ZZ18z9og4M90u) by [Jacob Jedryszek](https://www.youtube.com/@jj09)
- ⛓ [Training `Chatgpt` with your personal data using langchain step by step in detail](https://youtu.be/j3xOMde2v9Y?si=179HsiMU-hEPuSs4) by [NextGen Machines](https://www.youtube.com/@MayankGupta-kb5yc)
- ⛓ [Use ANY language in `LangSmith` with REST](https://youtu.be/7BL0GEdMmgY?si=iXfOEdBLqXF6hqRM) by [Nerding I/O](https://www.youtube.com/@nerding_io)
- ⛓ [How to Leverage the Full Potential of LLMs for Your Business with Langchain - Leon Ruddat](https://youtu.be/vZmoEa7oWMg?si=ZhMmydq7RtkZd56Q) by [PyData](https://www.youtube.com/@PyDataTV)
- ⛓ [`ChatCSV` App: Chat with CSV files using LangChain and `Llama 2`](https://youtu.be/PvsMg6jFs8E?si=Qzg5u5gijxj933Ya) by [Muhammad Moin](https://www.youtube.com/@muhammadmoinfaisal)
- ⛓ [Build Chat PDF app in Python with LangChain, OpenAI, Streamlit | Full project | Learn Coding](https://www.youtube.com/watch?v=WYzFzZg4YZI) by [Jutsupoint](https://www.youtube.com/@JutsuPoint)
- ⛓ [Build Eminem Bot App with LangChain, Streamlit, OpenAI | Full Python Project | Tutorial | AI ChatBot](https://www.youtube.com/watch?v=a2shHB4MRZ4) by [Jutsupoint](https://www.youtube.com/@JutsuPoint)
### [Prompt Engineering and LangChain](https://www.youtube.com/watch?v=muXbPpG_ys4&list=PLEJK-H61Xlwzm5FYLDdKt_6yibO33zoMW) by [Venelin Valkov](https://www.youtube.com/@venelin_valkov)
- [Getting Started with LangChain: Load Custom Data, Run OpenAI Models, Embeddings and `ChatGPT`](https://www.youtube.com/watch?v=muXbPpG_ys4)
- [Loaders, Indexes & Vectorstores in LangChain: Question Answering on `PDF` files with `ChatGPT`](https://www.youtube.com/watch?v=FQnvfR8Dmr0)
- [LangChain Models: `ChatGPT`, `Flan Alpaca`, `OpenAI Embeddings`, Prompt Templates & Streaming](https://www.youtube.com/watch?v=zy6LiK5F5-s)
- [LangChain Chains: Use `ChatGPT` to Build Conversational Agents, Summaries and Q&A on Text With LLMs](https://www.youtube.com/watch?v=h1tJZQPcimM)
- [Analyze Custom CSV Data with `GPT-4` using Langchain](https://www.youtube.com/watch?v=Ew3sGdX8at4)
- [Build ChatGPT Chatbots with LangChain Memory: Understanding and Implementing Memory in Conversations](https://youtu.be/CyuUlf54wTs)
Only videos with 40K+ views:
- [Using `ChatGPT` with YOUR OWN Data. This is magical. (LangChain `OpenAI API`)](https://youtu.be/9AXP7tCI9PI)
- [Chat with Multiple `PDFs` | LangChain App Tutorial in Python (Free LLMs and Embeddings)](https://youtu.be/dXxQ0LR-3Hg?si=pjXKhsHRzn10vOqX)
- [`Hugging Face` + Langchain in 5 mins | Access 200k+ FREE AI models for your AI apps](https://youtu.be/_j7JEDWuqLE?si=psimQscN3qo2dOa9)
- [LangChain Crash Course For Beginners | LangChain Tutorial](https://youtu.be/nAmC7SoVLd8?si=qJdvyG5-rnjqfdj1)
- [Vector Embeddings Tutorial Code Your Own AI Assistant with GPT-4 API + LangChain + NLP](https://youtu.be/yfHHvmaMkcA?si=UBP3yw50cLm3a2nj)
- [Development with Large Language Models Tutorial `OpenAI`, Langchain, Agents, `Chroma`](https://youtu.be/xZDB1naRUlk?si=v8J1q6oFHRyTkf7Y)
- [Langchain: `PDF` Chat App (GUI) | ChatGPT for Your PDF FILES | Step-by-Step Tutorial](https://youtu.be/RIWbalZ7sTo?si=LbKsCcuyv0BtnrTY)
- [Vector Search `RAG` Tutorial Combine Your Data with LLMs with Advanced Search](https://youtu.be/JEBDfGqrAUA?si=pD7oxpfwWeJCxfBt)
- [LangChain Crash Course for Beginners](https://youtu.be/lG7Uxts9SXs?si=Yte4S5afN7KNCw0F)
- [Learn `RAG` From Scratch Python AI Tutorial from a LangChain Engineer](https://youtu.be/sVcwVQRHIc8?si=_LN4g0vOgSdtlB3S)
- [`Llama 2` in LangChain — FIRST Open Source Conversational Agent!](https://youtu.be/6iHVJyX2e50?si=rtq1maPrzWKHbwVV)
- [LangChain Tutorial for Beginners | Generative AI Series](https://youtu.be/cQUUkZnyoD0?si=KYz-bvcocdqGh9f_)
- [Chatbots with `RAG`: LangChain Full Walkthrough](https://youtu.be/LhnCsygAvzY?si=yS7T98VLfcWdkDek)
- [LangChain Explained In 15 Minutes - A MUST Learn For Python Programmers](https://youtu.be/mrjq3lFz23s?si=wkQGcSKUJjuiiEPf)
- [LLM Project | End to End LLM Project Using Langchain, `OpenAI` in Finance Domain](https://youtu.be/MoqgmWV1fm8?si=oVl-5kJVgd3a07Y_)
- [What is LangChain?](https://youtu.be/1bUy-1hGZpI?si=NZ0D51VM5y-DhjGe)
- [`RAG` + Langchain Python Project: Easy AI/Chat For Your Doc](https://youtu.be/tcqEUSNCn8I?si=RLcWPBVLIErRqdmU)
- [Getting Started With LangChain In 20 Minutes- Build Celebrity Search Application](https://youtu.be/_FpT1cwcSLg?si=X9qVazlXYucN_JBP)
- [LangChain GEN AI Tutorial 6 End-to-End Projects using OpenAI, Google `Gemini Pro`, `LLAMA2`](https://youtu.be/x0AnCE9SE4A?si=_92gJYm7kb-V2bi0)
- [Complete Langchain GEN AI Crash Course With 6 End To End LLM Projects With OPENAI, `LLAMA2`, `Gemini Pro`](https://youtu.be/aWKrL4z5H6w?si=NVLi7Yiq0ccE7xXE)
- [AI Leader Reveals The Future of AI AGENTS (LangChain CEO)](https://youtu.be/9ZhbA0FHZYc?si=1r4P6kRvKVvEhRgE)
- [Learn How To Query Pdf using Langchain Open AI in 5 min](https://youtu.be/5Ghv-F1wF_0?si=ZZRjrWfeiFOVrcvu)
- [Reliable, fully local RAG agents with `LLaMA3`](https://youtu.be/-ROS6gfYIts?si=75CXA8W_BbnkIxcV)
- [Learn `LangChain.js` - Build LLM apps with JavaScript and `OpenAI`](https://youtu.be/HSZ_uaif57o?si=Icj-RAhwMT-vHaYA)
- [LLM Project | End to End LLM Project Using LangChain, Google Palm In Ed-Tech Industry](https://youtu.be/AjQPRomyd-k?si=eC3NT6kn02Lhpz-_)
- [Chatbot Answering from Your Own Knowledge Base: Langchain, `ChatGPT`, `Pinecone`, and `Streamlit`: | Code](https://youtu.be/nAKhxQ3hcMA?si=9Zd_Nd_jiYhtml5w)
- [LangChain is AMAZING | Quick Python Tutorial](https://youtu.be/I4mFqyqFkxg?si=aJ66qh558OfNAczD)
- [`GirlfriendGPT` - AI girlfriend with LangChain](https://youtu.be/LiN3D1QZGQw?si=kZR-lnJwixeVrjmh)
- [Using NEW `MPT-7B` in `Hugging Face` and LangChain](https://youtu.be/DXpk9K7DgMo?si=99JDpV_ueimwJhMi)
- [LangChain - COMPLETE TUTORIAL - Basics to advanced concept!](https://youtu.be/a89vqgK-Qcs?si=0aVO2EOqsw7GE5e3)
- [LangChain Agents: Simply Explained!](https://youtu.be/Xi9Ui-9qcPw?si=DCuG7nGx8dxcfhkx)
- [Chat With Multiple `PDF` Documents With Langchain And Google `Gemini Pro`](https://youtu.be/uus5eLz6smA?si=YUwvHtaZsGeIl0WD)
- [LLM Project | End to end LLM project Using Langchain, `Google Palm` in Retail Industry](https://youtu.be/4wtrl4hnPT8?si=_eOKPpdLfWu5UXMQ)
- [Tutorial | Chat with any Website using Python and Langchain](https://youtu.be/bupx08ZgSFg?si=KRrjYZFnuLsstGwW)
- [Prompt Engineering And LLM's With LangChain In One Shot-Generative AI](https://youtu.be/t2bSApmPzU4?si=87vPQQtYEWTyu2Kx)
- [Build a Custom Chatbot with `OpenAI`: `GPT-Index` & LangChain | Step-by-Step Tutorial](https://youtu.be/FIDv6nc4CgU?si=gR1u3DUG9lvzBIKK)
- [Search Your `PDF` App using Langchain, `ChromaDB`, and Open Source LLM: No OpenAI API (Runs on CPU)](https://youtu.be/rIV1EseKwU4?si=UxZEoXSiPai8fXgl)
- [Building a `RAG` application from scratch using Python, LangChain, and the `OpenAI API`](https://youtu.be/BrsocJb-fAo?si=hvkh9iTGzJ-LnsX-)
- [Function Calling via `ChatGPT API` - First Look With LangChain](https://youtu.be/0-zlUy7VUjg?si=Vc6LFseckEc6qvuk)
- [Private GPT, free deployment! Langchain-Chachat helps you easily play with major mainstream AI models! | Zero Degree Commentary](https://youtu.be/3LLUyaHP-3I?si=AZumEeFXsvqaLl0f)
- [Create a ChatGPT clone using `Streamlit` and LangChain](https://youtu.be/IaTiyQ2oYUQ?si=WbgsYmqPDnMidSUK)
- [What's next for AI agents ft. LangChain's Harrison Chase](https://youtu.be/pBBe1pk8hf4?si=H4vdBF9nmkNZxiHt)
- [`LangFlow`: Build Chatbots without Writing Code - LangChain](https://youtu.be/KJ-ux3hre4s?si=TJuDu4bAlva1myNL)
- [Building a LangChain Custom Medical Agent with Memory](https://youtu.be/6UFtRwWnHws?si=wymYad26VgigRkHy)
- [`Ollama` meets LangChain](https://youtu.be/k_1pOF1mj8k?si=RlBiCrmaR3s7SnMK)
- [End To End LLM Langchain Project using `Pinecone` Vector Database](https://youtu.be/erUfLIi9OFM?si=aHpuHXdIEmAfS4eF)
- [`LLaMA2` with LangChain - Basics | LangChain TUTORIAL](https://youtu.be/cIRzwSXB4Rc?si=FUs0OLVJpzKhut0h)
- [Understanding `ReACT` with LangChain](https://youtu.be/Eug2clsLtFs?si=imgj534ggxlypS0d)
---------------------
⛓ icon marks a new addition [Updated 2024-05-16]

View File

@@ -1,27 +1,10 @@
# langchain-core
## 0.1.7 (Jan 5, 2024)
#### Deleted
No deletions.
## 0.1.x
#### Deprecated
- `BaseChatModel` methods `__call__`, `call_as_llm`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.invoke` instead.
- `BaseChatModel` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseChatModel.ainvoke` instead.
- `BaseLLM` methods `__call__`, `predict`, `predict_messages`. Will be removed in 0.2.0. Use `BaseLLM.invoke` instead.
- `BaseLLM` methods `apredict`, `apredict_messages`. Will be removed in 0.2.0. Use `BaseLLM.ainvoke` instead.
#### Fixed
- Restrict recursive URL scraping: [#15559](https://github.com/langchain-ai/langchain/pull/15559)
#### Added
No additions.
#### Beta
- Marked `langchain_core.load.load` and `langchain_core.load.loads` as beta.
- Marked `langchain_core.beta.runnables.context.ContextGet` and `langchain_core.beta.runnables.context.ContextSet` as beta.

View File

@@ -1,16 +1,73 @@
# langchain
## 0.2.0
### Deleted
As of release 0.2.0, `langchain` is required to be integration-agnostic. This means that code in `langchain` should not by default instantiate any specific chat models, LLMs, embedding models, vectorstores, etc.; instead, the user will be required to specify those explicitly.
The following functions and classes require an explicit LLM to be passed as an argument:
- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit`
- `langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit`
- `langchain.chains.openai_functions.get_openapi_chain`
- `langchain.chains.router.MultiRetrievalQAChain.from_retrievers`
- `langchain.indexes.VectorStoreIndexWrapper.query`
- `langchain.indexes.VectorStoreIndexWrapper.query_with_sources`
- `langchain.indexes.VectorStoreIndexWrapper.aquery_with_sources`
- `langchain.chains.flare.FlareChain`
The following classes now require passing an explicit Embedding model as an argument:
- `langchain.indexes.VectorstoreIndexCreator`
The following code has been removed:
- `langchain.natbot.NatBotChain.from_default` removed in favor of the `from_llm` class method.
### Deprecated
We have two main types of deprecations:
1. Code that was moved from `langchain` into another package (e.g., `langchain-community`)
If you try to import it from `langchain`, the import will keep on working, but will raise a deprecation warning. The warning will provide a replacement import statement.
```shell
python -c "from langchain.document_loaders.markdown import UnstructuredMarkdownLoader"
```
```text
LangChainDeprecationWarning: Importing UnstructuredMarkdownLoader from langchain.document_loaders is deprecated. Please replace deprecated imports:
>> from langchain.document_loaders import UnstructuredMarkdownLoader
with new imports of:
>> from langchain_community.document_loaders import UnstructuredMarkdownLoader
```
We will continue supporting the imports in `langchain` until release 0.4 as long as the relevant package where the code lives is installed (e.g., as long as `langchain_community` is installed).
However, we advise users not to rely on these imports and instead migrate to the new imports. To help with this process, we're releasing a migration script via the LangChain CLI. See further instructions in the migration guide.
2. Code that has better alternatives available and will eventually be removed, so there's only a single way to do things (e.g., the `predict_messages` method in ChatModels has been deprecated in favor of `invoke`).
Many of these were marked for removal in 0.2. We have bumped the removal to 0.3.
## 0.1.0 (Jan 5, 2024)
### Deleted
No deletions.
### Deprecated
Deprecated classes and methods will be removed in 0.2.0
| Deprecated | Alternative | Reason |
|---------------------------------|-----------------------------------|------------------------------------------------|
| ChatVectorDBChain | ConversationalRetrievalChain | More general to all retrievers |
| create_ernie_fn_chain | create_ernie_fn_runnable | Use LCEL under the hood |

654
docs/docs/concepts.mdx Normal file
View File

@@ -0,0 +1,654 @@
# Conceptual guide
import ThemedImage from '@theme/ThemedImage';
import useBaseUrl from '@docusaurus/useBaseUrl';
This section contains introductions to key parts of LangChain.
## Architecture
LangChain as a framework consists of a number of packages.
### `langchain-core`
This package contains base abstractions of different components and ways to compose them together.
The interfaces for core components like LLMs, vectorstores, retrievers and more are defined here.
No third party integrations are defined here.
The dependencies are kept purposefully very lightweight.
### Partner packages
While the long tail of integrations are in `langchain-community`, we split popular integrations into their own packages (e.g. `langchain-openai`, `langchain-anthropic`, etc).
This was done in order to improve support for these important integrations.
### `langchain`
The main `langchain` package contains chains, agents, and retrieval strategies that make up an application's cognitive architecture.
These are NOT third party integrations.
All chains, agents, and retrieval strategies here are NOT specific to any one integration, but rather generic across all integrations.
### `langchain-community`
This package contains third party integrations that are maintained by the LangChain community.
Key partner packages are separated out (see above).
This contains all integrations for various components (LLMs, vectorstores, retrievers).
All dependencies in this package are optional to keep the package as lightweight as possible.
### [`langgraph`](https://langchain-ai.github.io/langgraph)
`langgraph` is an extension of `langchain` aimed at
building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph exposes high level interfaces for creating common types of agents, as well as a low-level API for constructing more controlled flows.
### [`langserve`](/docs/langserve)
A package to deploy LangChain chains as REST APIs. Makes it easy to get a production ready API up and running.
### [LangSmith](https://docs.smith.langchain.com)
A developer platform that lets you debug, test, evaluate, and monitor LLM applications.
<ThemedImage
alt="Diagram outlining the hierarchical organization of the LangChain framework, displaying the interconnected parts across multiple layers."
sources={{
light: useBaseUrl('/svg/langchain_stack.svg'),
dark: useBaseUrl('/svg/langchain_stack_dark.svg'),
}}
title="LangChain Framework Overview"
/>
## LangChain Expression Language (LCEL)
LangChain Expression Language, or LCEL, is a declarative way to chain LangChain components.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:
**First-class streaming support**
When you build your chains with LCEL you get the best possible time-to-first-token (time elapsed until the first chunk of output comes out). For some chains this means, e.g., that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider outputs the raw tokens.
**Async support**
Any chain built with LCEL can be called both with the synchronous API (e.g., in your Jupyter notebook while prototyping) as well as with the asynchronous API (e.g., in a [LangServe](/docs/langserve/) server). This enables using the same code for prototypes and in production, with great performance, and the ability to handle many concurrent requests in the same server.
**Optimized parallel execution**
Whenever your LCEL chains have steps that can be executed in parallel (e.g., if you fetch documents from multiple retrievers) we automatically do it, both in the sync and the async interfaces, for the smallest possible latency.
**Retries and fallbacks**
Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale. We're currently working on adding streaming support for retries/fallbacks, so you can get the added reliability without any latency cost.
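A minimal sketch of what this can look like, using `RunnableLambda` as a stand-in for any chain step; the failing function and the fallback here are purely illustrative:
```python
from langchain_core.runnables import RunnableLambda

def flaky_step(text: str) -> str:
    # Illustrative step that always fails; imagine a rate-limited model call here.
    raise ValueError("primary step failed")

# Retry the primary step up to 2 times, then hand the same input to the fallback.
primary = RunnableLambda(flaky_step).with_retry(stop_after_attempt=2)
fallback = RunnableLambda(lambda text: f"fallback answer for {text!r}")

chain = primary.with_fallbacks([fallback])
print(chain.invoke("hi"))  # -> "fallback answer for 'hi'"
```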
**Access intermediate results**
For more complex chains it's often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end-users know something is happening, or even just to debug your chain. You can stream intermediate results, and it's available on every [LangServe](/docs/langserve) server.
**Input and output schemas**
Input and output schemas give every LCEL chain Pydantic and JSONSchema schemas inferred from the structure of your chain. This can be used for validation of inputs and outputs, and is an integral part of LangServe.
[**Seamless LangSmith tracing**](https://docs.smith.langchain.com)
As your chains get more and more complex, it becomes increasingly important to understand what exactly is happening at every step.
With LCEL, **all** steps are automatically logged to [LangSmith](https://docs.smith.langchain.com/) for maximum observability and debuggability.
[**Seamless LangServe deployment**](/docs/langserve)
Any chain created with LCEL can be easily deployed using [LangServe](/docs/langserve).
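To make the above concrete, here is a minimal LCEL sketch. It assumes the `langchain-openai` package is installed and an OpenAI API key is configured; any other chat model integration could be swapped in, and the model name is just an example.
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumed provider; any chat model works

# Declarative composition with the | operator: prompt -> chat model -> parser.
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI(model="gpt-3.5-turbo")
chain = prompt | model | StrOutputParser()

print(chain.invoke({"topic": "bears"}))         # single call, parsed string out

for chunk in chain.stream({"topic": "bears"}):  # the same chain streams chunks
    print(chunk, end="", flush=True)
```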
### Runnable interface
To make it as easy as possible to create custom chains, we've implemented a ["Runnable"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. Many LangChain components implement the `Runnable` protocol, including chat models, LLMs, output parsers, retrievers, prompt templates, and more. There are also several useful primitives for working with runnables, which you can read about below.
This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way.
The standard interface includes:
- [`stream`](#stream): stream back chunks of the response
- [`invoke`](#invoke): call the chain on an input
- [`batch`](#batch): call the chain on a list of inputs
These also have corresponding async methods that should be used with [asyncio](https://docs.python.org/3/library/asyncio.html) `await` syntax for concurrency:
- `astream`: stream back chunks of the response async
- `ainvoke`: call the chain on an input async
- `abatch`: call the chain on a list of inputs async
- `astream_log`: stream back intermediate steps as they happen, in addition to the final response
- `astream_events`: **beta** stream events as they happen in the chain (introduced in `langchain-core` 0.1.14)
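For example, a small sketch of the async methods using `RunnableLambda`, so it runs without any model provider:
```python
import asyncio

from langchain_core.runnables import RunnableLambda

echo = RunnableLambda(lambda text: f"echo: {text}")

async def main() -> None:
    print(await echo.ainvoke("hi"))            # async call on a single input
    print(await echo.abatch(["a", "b", "c"]))  # async call on a list of inputs
    async for chunk in echo.astream("hey"):    # async streaming of chunks
        print(chunk)

asyncio.run(main())
```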
The **input type** and **output type** vary by component:
| Component | Input Type | Output Type |
| --- | --- | --- |
| Prompt | Dictionary | PromptValue |
| ChatModel | Single string, list of chat messages or a PromptValue | ChatMessage |
| LLM | Single string, list of chat messages or a PromptValue | String |
| OutputParser | The output of an LLM or ChatModel | Depends on the parser |
| Retriever | Single string | List of Documents |
| Tool | Single string or dictionary, depending on the tool | Depends on the tool |
All runnables expose input and output **schemas** to inspect the inputs and outputs:
- `input_schema`: an input Pydantic model auto-generated from the structure of the Runnable
- `output_schema`: an output Pydantic model auto-generated from the structure of the Runnable
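A short sketch of the synchronous methods and the auto-generated schemas, again using `RunnableLambda` so it runs standalone:
```python
from langchain_core.runnables import RunnableLambda

# Any Python callable can be wrapped as a Runnable; chat models, prompts,
# retrievers, and parsers expose exactly the same interface.
double = RunnableLambda(lambda x: x * 2)

print(double.invoke(3))          # 6 -- call on a single input
print(double.batch([1, 2, 3]))   # [2, 4, 6] -- call on a list of inputs
for chunk in double.stream(5):
    print(chunk)                 # 10 -- streamed back as chunks

# Pydantic models inferred from the structure of the Runnable
print(double.input_schema.schema())
print(double.output_schema.schema())
```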
## Components
LangChain provides standard, extendable interfaces and external integrations for various components useful for building with LLMs.
Some components LangChain implements, some components we rely on third-party integrations for, and others are a mix.
### Chat models
Language models that use a sequence of messages as inputs and return chat messages as outputs (as opposed to using plain text).
These are traditionally newer models (older models are generally `LLMs`, see below).
Chat models support the assignment of distinct roles to conversation messages, helping to distinguish messages from the AI, users, and instructions such as system messages.
Although the underlying models are messages in, message out, the LangChain wrappers also allow these models to take a string as input. This means you can easily use chat models in place of LLMs.
When a string is passed in as input, it is converted to a HumanMessage and then passed to the underlying model.
LangChain does not provide any ChatModels; rather, we rely on third-party integrations.
We have some standardized parameters when constructing ChatModels:
- `model`: the name of the model
ChatModels also accept other parameters that are specific to that integration.
:::important
**Tool Calling** Some chat models have been fine-tuned for tool calling and provide a dedicated API for tool calling.
Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling.
Please see the [tool calling section](/docs/concepts/#functiontool-calling) for more information.
:::
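As an illustration of the string-or-messages input described above (assuming the `langchain-openai` package and credentials; the model name is only an example, and any chat model integration behaves the same way):
```python
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

chat = ChatOpenAI(model="gpt-3.5-turbo")

# Messages in, message out:
reply = chat.invoke(
    [SystemMessage(content="Answer in French."), HumanMessage(content="Hello!")]
)
print(reply.content)

# A plain string also works; it is wrapped in a HumanMessage under the hood.
print(chat.invoke("Hello!").content)
```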
### LLMs
Language models that take a string as input and return a string.
These are traditionally older models (newer models are generally `ChatModels`, see above).
Although the underlying models are string in, string out, the LangChain wrappers also allow these models to take messages as input.
This makes them interchangeable with ChatModels.
When messages are passed in as input, they will be formatted into a string under the hood before being passed to the underlying model.
LangChain does not provide any LLMs; rather, we rely on third-party integrations.
### Messages
Some language models take a list of messages as input and return a message.
There are a few different types of messages.
All messages have a `role`, `content`, and `response_metadata` property.
The `role` describes WHO is saying the message.
LangChain has different message classes for different roles.
The `content` property describes the content of the message.
This can be a few different things:
- A string (most models deal with this type of content)
- A List of dictionaries (this is used for multimodal input, where the dictionary contains information about that input type and that input location)
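For instance, a text-only message versus a multimodal one might look like the sketch below; the exact dictionary keys for non-text parts are provider-specific, and the `image_url` form shown here follows the OpenAI-style convention with a placeholder URL:
```python
from langchain_core.messages import HumanMessage

# Content as a plain string:
text_message = HumanMessage(content="Describe this image in one sentence.")

# Content as a list of parts (multimodal input):
multimodal_message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image in one sentence."},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ]
)
```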
#### HumanMessage
This represents a message from the user.
#### AIMessage
This represents a message from the model. In addition to the `content` property, these messages also have:
**`response_metadata`**
The `response_metadata` property contains additional metadata about the response. The data here is often specific to each model provider.
This is where information like log-probs and token usage may be stored.
**`tool_calls`**
These represent a decision from a language model to call a tool. They are included as part of an `AIMessage` output.
They can be accessed from there with the `.tool_calls` property.
This property returns a list of dictionaries. Each dictionary has the following keys:
- `name`: The name of the tool that should be called.
- `args`: The arguments to that tool.
- `id`: The id of that tool call.
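A hand-constructed example of this shape (in practice such messages are returned by a tool-calling chat model rather than built by hand; the tool name and id here are illustrative):
```python
from langchain_core.messages import AIMessage

message = AIMessage(
    content="",
    tool_calls=[{"name": "multiply", "args": {"a": 2, "b": 3}, "id": "call_1"}],
)

for tool_call in message.tool_calls:
    print(tool_call["name"], tool_call["args"], tool_call["id"])
```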
#### SystemMessage
This represents a system message, which tells the model how to behave. Not every model provider supports this.
#### FunctionMessage
This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
#### ToolMessage
This represents the result of a tool call. This is distinct from a FunctionMessage in order to match OpenAI's `function` and `tool` message types. In addition to `role` and `content`, this message has a `tool_call_id` parameter which conveys the id of the call to the tool that was called to produce this result.
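Continuing the illustrative tool call above, reporting the tool's result back to the model might look like this (the `tool_call_id` must match the id of the tool call in the preceding AIMessage):
```python
from langchain_core.messages import ToolMessage

# Result of running the hypothetical "multiply" tool with args {"a": 2, "b": 3}.
tool_result = ToolMessage(content="6", tool_call_id="call_1")
```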
### Prompt templates
Prompt templates help to translate user input and parameters into instructions for a language model.
This can be used to guide a model's response, helping it understand the context and generate relevant and coherent language-based output.
Prompt Templates take as input a dictionary, where each key represents a variable in the prompt template to fill in.
Prompt Templates output a PromptValue. This PromptValue can be passed to an LLM or a ChatModel, and can also be cast to a string or a list of messages.
The reason this PromptValue exists is to make it easy to switch between strings and messages.
There are a few different types of prompt templates:
#### String PromptTemplates
These prompt templates are used to format a single string, and generally are used for simpler inputs.
For example, a common way to construct and use a PromptTemplate is as follows:
```python
from langchain_core.prompts import PromptTemplate
prompt_template = PromptTemplate.from_template("Tell me a joke about {topic}")
prompt_template.invoke({"topic": "cats"})
```
#### ChatPromptTemplates
These prompt templates are used to format a list of messages. These "templates" consist of a list of templates themselves.
For example, a common way to construct and use a ChatPromptTemplate is as follows:
```python
from langchain_core.prompts import ChatPromptTemplate
prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("user", "Tell me a joke about {topic}")
])
prompt_template.invoke({"topic": "cats"})
```
In the above example, this ChatPromptTemplate will construct two messages when called.
The first is a system message that has no variables to format.
The second is a HumanMessage, and will be formatted with the `topic` variable the user passes in.
#### MessagesPlaceholder
This prompt template is responsible for adding a list of messages in a particular place.
In the above ChatPromptTemplate, we saw how we could format two messages, each one a string.
But what if we wanted the user to pass in a list of messages that we would slot into a particular spot?
This is how you use MessagesPlaceholder.
```python
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage
prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    MessagesPlaceholder("msgs")
])
prompt_template.invoke({"msgs": [HumanMessage(content="hi!")]})
```
This will produce a list of two messages, the first one being a system message, and the second one being the HumanMessage we passed in.
If we had passed in 5 messages, then it would have produced 6 messages in total (the system message plus the 5 passed in).
This is useful for letting a list of messages be slotted into a particular spot.
An alternative way to accomplish the same thing without using the `MessagesPlaceholder` class explicitly is:
```python
prompt_template = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("placeholder", "{msgs}")  # <-- This is the changed part
])
```
### Example selectors
One common prompting technique for achieving better performance is to include examples as part of the prompt.
This gives the language model concrete examples of how it should behave.
Sometimes these examples are hardcoded into the prompt, but for more advanced situations it may be nice to dynamically select them.
Example Selectors are classes responsible for selecting and then formatting examples into prompts.
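As a minimal sketch of the interface (the class name and selection logic below are invented for illustration), a custom example selector subclasses `BaseExampleSelector` and implements `select_examples` and `add_example`:
```python
from langchain_core.example_selectors.base import BaseExampleSelector


class KeywordExampleSelector(BaseExampleSelector):
    """Toy selector: return examples that share a word with the input."""

    def __init__(self, examples: list[dict]):
        self.examples = examples

    def add_example(self, example: dict) -> None:
        self.examples.append(example)

    def select_examples(self, input_variables: dict) -> list[dict]:
        words = set(input_variables.get("input", "").lower().split())
        return [ex for ex in self.examples if words & set(ex["input"].lower().split())]
```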
### Output parsers
:::note
The information here refers to parsers that take a text output from a model and try to parse it into a more structured representation.
More and more models are supporting function (or tool) calling, which handles this automatically.
It is recommended to use function/tool calling rather than output parsing.
See documentation for that [here](/docs/concepts/#function-tool-calling).
:::
Output parsers are responsible for taking the output of a model and transforming it into a more suitable format for downstream tasks.
Useful when you are using LLMs to generate structured data, or to normalize output from chat models and LLMs.
LangChain has many different types of output parsers. The table below lists the output parsers LangChain supports, along with the following information:
**Name**: The name of the output parser
**Supports Streaming**: Whether the output parser supports streaming.
**Has Format Instructions**: Whether the output parser has format instructions. This is generally available except when (a) the desired schema is not specified in the prompt but rather in other parameters (like OpenAI function calling), or (b) when the OutputParser wraps another OutputParser.
**Calls LLM**: Whether this output parser itself calls an LLM. This is usually only done by output parsers that attempt to correct misformatted output.
**Input Type**: Expected input type. Most output parsers work on both strings and messages, but some (like OpenAI Functions) need a message with specific kwargs.
**Output Type**: The output type of the object returned by the parser.
**Description**: Our commentary on this output parser and when to use it.
| Name | Supports Streaming | Has Format Instructions | Calls LLM | Input Type | Output Type | Description |
|-----------------|--------------------|-------------------------------|-----------|----------------------------------|----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [JSON](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.json.JsonOutputParser.html#langchain_core.output_parsers.json.JsonOutputParser) | ✅ | ✅ | | `str` \| `Message` | JSON object | Returns a JSON object as specified. You can specify a Pydantic model and it will return JSON for that model. Probably the most reliable output parser for getting structured data that does NOT use function calling. |
| [XML](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.xml.XMLOutputParser.html#langchain_core.output_parsers.xml.XMLOutputParser) | ✅ | ✅ | | `str` \| `Message` | `dict` | Returns a dictionary of tags. Use when XML output is needed. Use with models that are good at writing XML (like Anthropic's). |
| [CSV](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.list.CommaSeparatedListOutputParser.html#langchain_core.output_parsers.list.CommaSeparatedListOutputParser) | ✅ | ✅ | | `str` \| `Message` | `List[str]` | Returns a list of comma separated values. |
| [OutputFixing](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.fix.OutputFixingParser.html#langchain.output_parsers.fix.OutputFixingParser) | | | ✅ | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the error message and the bad output to an LLM and ask it to fix the output. |
| [RetryWithError](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.retry.RetryWithErrorOutputParser.html#langchain.output_parsers.retry.RetryWithErrorOutputParser) | | | ✅ | `str` \| `Message` | | Wraps another output parser. If that output parser errors, then this will pass the original inputs, the bad output, and the error message to an LLM and ask it to fix it. Compared to OutputFixingParser, this one also sends the original instructions. |
| [Pydantic](https://api.python.langchain.com/en/latest/output_parsers/langchain_core.output_parsers.pydantic.PydanticOutputParser.html#langchain_core.output_parsers.pydantic.PydanticOutputParser) | | ✅ | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. |
| [YAML](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.yaml.YamlOutputParser.html#langchain.output_parsers.yaml.YamlOutputParser) | | ✅ | | `str` \| `Message` | `pydantic.BaseModel` | Takes a user defined Pydantic model and returns data in that format. Uses YAML to encode it. |
| [PandasDataFrame](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.pandas_dataframe.PandasDataFrameOutputParser.html#langchain.output_parsers.pandas_dataframe.PandasDataFrameOutputParser) | | ✅ | | `str` \| `Message` | `dict` | Useful for doing operations with pandas DataFrames. |
| [Enum](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.enum.EnumOutputParser.html#langchain.output_parsers.enum.EnumOutputParser) | | ✅ | | `str` \| `Message` | `Enum` | Parses response into one of the provided enum values. |
| [Datetime](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.datetime.DatetimeOutputParser.html#langchain.output_parsers.datetime.DatetimeOutputParser) | | ✅ | | `str` \| `Message` | `datetime.datetime` | Parses response into a datetime string. |
| [Structured](https://api.python.langchain.com/en/latest/output_parsers/langchain.output_parsers.structured.StructuredOutputParser.html#langchain.output_parsers.structured.StructuredOutputParser) | | ✅ | | `str` \| `Message` | `Dict[str, str]` | An output parser that returns structured information. It is less powerful than other output parsers since it only allows for fields to be strings. This can be useful when you are working with smaller LLMs. |
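For example, a minimal sketch of the JSON output parser applied to a raw model output string (the string below is made up) might look like:
```python
from langchain_core.output_parsers import JsonOutputParser

parser = JsonOutputParser()

# Output parsers are Runnables, so they can be invoked directly or chained with `|`.
parser.invoke('{"setup": "Why did the cat sit on the laptop?", "punchline": "To keep an eye on the mouse."}')
# -> {'setup': 'Why did the cat sit on the laptop?', 'punchline': 'To keep an eye on the mouse.'}
```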
### Chat history
Most LLM applications have a conversational interface.
An essential component of a conversation is being able to refer to information introduced earlier in the conversation.
At bare minimum, a conversational system should be able to access some window of past messages directly.
The concept of `ChatHistory` refers to a class in LangChain which can be used to wrap an arbitrary chain.
This `ChatHistory` will keep track of inputs and outputs of the underlying chain, and append them as messages to a message database.
Future interactions will then load those messages and pass them into the chain as part of the input.
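A minimal sketch of what this looks like with the in-memory `ChatMessageHistory` implementation (assuming `langchain-community` is installed):
```python
from langchain_community.chat_message_histories import ChatMessageHistory

history = ChatMessageHistory()
history.add_user_message("hi! I'm Bob")
history.add_ai_message("Hello Bob! How can I help you today?")

history.messages  # a list of HumanMessage / AIMessage objects to pass back into the chain
```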
### Documents
A Document object in LangChain contains information about some data. It has two attributes:
- `page_content: str`: The content of this document. Currently this is only a string.
- `metadata: dict`: Arbitrary metadata associated with this document. Can track the document id, file name, etc.
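For example, constructing a Document might look like this (the content and metadata are illustrative):
```python
from langchain_core.documents import Document

doc = Document(
    page_content="LangChain is a framework for developing applications powered by LLMs.",
    metadata={"source": "example.txt"},
)
```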
### Document loaders
These classes load Document objects. LangChain has hundreds of integrations with various data sources to load data from: Slack, Notion, Google Drive, etc.
Each DocumentLoader has its own specific parameters, but they can all be invoked in the same way with the `.load` method.
An example use case is as follows:
```python
from langchain_community.document_loaders.csv_loader import CSVLoader
loader = CSVLoader(
    ...  # <-- Integration-specific parameters here
)
data = loader.load()
```
### Text splitters
Once you've loaded documents, you'll often want to transform them to better suit your application. The simplest example is you may want to split a long document into smaller chunks that can fit into your model's context window. LangChain has a number of built-in document transformers that make it easy to split, combine, filter, and otherwise manipulate documents.
When you want to deal with long pieces of text, it is necessary to split up that text into chunks. As simple as this sounds, there is a lot of potential complexity here. Ideally, you want to keep the semantically related pieces of text together. What "semantically related" means could depend on the type of text. This section covers several ways to do that.
At a high level, text splitters work as follows:
1. Split the text up into small, semantically meaningful chunks (often sentences).
2. Start combining these small chunks into a larger chunk until you reach a certain size (as measured by some function).
3. Once you reach that size, make that chunk its own piece of text and then start creating a new chunk of text with some overlap (to keep context between chunks).
That means there are two different axes along which you can customize your text splitter:
1. How the text is split
2. How the chunk size is measured
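As a sketch of how these two axes show up in practice (assuming the `langchain-text-splitters` package is installed), the recursive character splitter exposes both as parameters:
```python
from langchain_text_splitters import RecursiveCharacterTextSplitter

long_text = "LangChain makes it easy to build applications with LLMs. " * 20

# chunk_size bounds how large each chunk may be (measured in characters here);
# chunk_overlap keeps some shared context between neighboring chunks.
splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
chunks = splitter.split_text(long_text)  # a list of strings
```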
### Embedding models
The Embeddings class is a class designed for interfacing with text embedding models. There are lots of embedding model providers (OpenAI, Cohere, Hugging Face, etc) - this class is designed to provide a standard interface for all of them.
Embeddings create a vector representation of a piece of text. This is useful because it means we can think about text in the vector space, and do things like semantic search where we look for pieces of text that are most similar in the vector space.
The base Embeddings class in LangChain provides two methods: one for embedding documents and one for embedding a query. The former takes as input multiple texts, while the latter takes a single text. The reason for having these as two separate methods is that some embedding providers have different embedding methods for documents (to be searched over) vs queries (the search query itself).
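For example, a sketch assuming `langchain-openai` is installed and an API key is configured (any other embeddings integration exposes the same two methods):
```python
from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()

doc_vectors = embeddings.embed_documents(["Hello world", "Goodbye world"])  # one vector per text
query_vector = embeddings.embed_query("Hi there")  # a single vector
```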
### Vector stores
One of the most common ways to store and search over unstructured data is to embed it and store the resulting embedding vectors,
and then at query time to embed the unstructured query and retrieve the embedding vectors that are 'most similar' to the embedded query.
A vector store takes care of storing embedded data and performing vector search for you.
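For example, a rough sketch using the FAISS integration (assuming `faiss-cpu`, `langchain-community`, and an embeddings provider such as `langchain-openai` are installed):
```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

vectorstore = FAISS.from_texts(
    ["LangChain provides abstractions for working with LLMs.", "Cats make great pets."],
    embedding=OpenAIEmbeddings(),
)

docs = vectorstore.similarity_search("What does LangChain do?", k=1)
```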
Vector stores can be converted to the retriever interface by doing:
```python
vectorstore = MyVectorStore()
retriever = vectorstore.as_retriever()
```
### Retrievers
A retriever is an interface that returns documents given an unstructured query.
It is more general than a vector store.
A retriever does not need to be able to store documents, only to return (or retrieve) them.
Retrievers can be created from vectorstores, but are also broad enough to include [Wikipedia search](/docs/integrations/retrievers/wikipedia/) and [Amazon Kendra](/docs/integrations/retrievers/amazon_kendra_retriever/).
Retrievers accept a string query as input and return a list of `Document`s as output.
### Tools
Tools are interfaces that an agent, a chain, or a chat model / LLM can use to interact with the world.
A tool consists of the following components:
1. The name of the tool
2. A description of what the tool does
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user (only relevant for agents)
The name, description and JSON schema are provided as context
to the LLM, allowing the LLM to determine how to use the tool
appropriately.
Given a list of available tools and a prompt, an LLM can request
that one or more tools be invoked with appropriate arguments.
Generally, when designing tools to be used by a chat model or LLM, it is important to keep in mind the following:
- Chat models that have been fine-tuned for tool calling will be better at tool calling than non-fine-tuned models.
- Non-fine-tuned models may not be able to use tools at all, especially if the tools are complex or require multiple tool calls.
- Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas.
- Simpler tools are generally easier for models to use than more complex tools.
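For example, a minimal sketch of defining a tool with the `@tool` decorator (the `multiply` function is invented for illustration); the decorator infers the name, description, and JSON schema from the function signature and docstring:
```python
from langchain_core.tools import tool


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


multiply.name         # "multiply"
multiply.description  # derived from the docstring
multiply.args         # a JSON-schema-like dict describing the inputs
```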
### Toolkits
Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.
All Toolkits expose a `get_tools` method which returns a list of tools.
You can therefore do:
```python
# Initialize a toolkit
toolkit = ExampleToolkit(...)
# Get list of tools
tools = toolkit.get_tools()
```
### Agents
By themselves, language models can't take actions - they just output text.
A big use case for LangChain is creating **agents**.
Agents are systems that use an LLM as a reasoning engine to determine which actions to take and what the inputs to those actions should be.
The results of those actions can then be fed back into the agent, which determines whether more actions are needed or whether it is okay to finish.
[LangGraph](https://github.com/langchain-ai/langgraph) is an extension of LangChain specifically aimed at creating highly controllable and customizable agents.
Please check out that documentation for a more in depth overview of agent concepts.
There is a legacy agent concept in LangChain that we are moving towards deprecating: `AgentExecutor`.
AgentExecutor was essentially a runtime for agents.
It was a great place to get started; however, it was not flexible enough once you started to build more customized agents.
In order to solve that we built LangGraph to be this flexible, highly-controllable runtime.
If you are still using AgentExecutor, do not fear: we still have a guide on [how to use AgentExecutor](/docs/how_to/agent_executor).
It is recommended, however, that you start to transition to LangGraph.
To assist with this, we have put together a [transition guide on how to do so](/docs/how_to/migrate_agent).
### Multimodal
Some models are multimodal, accepting images, audio and even video as inputs. These are still less common, meaning model providers haven't standardized on the "best" way to define the API. Multimodal **outputs** are even less common. As such, we've kept our multimodal abstractions fairly lightweight and plan to further solidify the multimodal APIs and interaction patterns as the field matures.
In LangChain, most chat models that support multimodal inputs also accept those values in OpenAI's content blocks format. So far this is restricted to image inputs. For models like Gemini which support video and other bytes input, the APIs also support the native, model-specific representations.
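For example, an image input in OpenAI's content blocks format might look like the following sketch (the URL is a placeholder, and `chat_model` stands in for any chat model that supports image inputs):
```python
from langchain_core.messages import HumanMessage

message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe the weather in this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/sunny-beach.jpg"}},
    ]
)

# response = chat_model.invoke([message])
```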
### Callbacks
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
You can subscribe to these events by using the `callbacks` argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described below in more detail.
#### Callback Events
| Event | Event Trigger | Associated Method |
|------------------|---------------------------------------------|-----------------------|
| Chat model start | When a chat model starts | `on_chat_model_start` |
| LLM start        | When an LLM starts                          | `on_llm_start`        |
| LLM new token    | When an LLM OR chat model emits a new token | `on_llm_new_token`    |
| LLM ends         | When an LLM OR chat model ends              | `on_llm_end`          |
| LLM errors       | When an LLM OR chat model errors            | `on_llm_error`        |
| Chain start | When a chain starts running | `on_chain_start` |
| Chain end | When a chain ends | `on_chain_end` |
| Chain error | When a chain errors | `on_chain_error` |
| Tool start | When a tool starts running | `on_tool_start` |
| Tool end | When a tool ends | `on_tool_end` |
| Tool error | When a tool errors | `on_tool_error` |
| Agent action | When an agent takes an action | `on_agent_action` |
| Agent finish | When an agent ends | `on_agent_finish` |
| Retriever start | When a retriever starts | `on_retriever_start` |
| Retriever end | When a retriever ends | `on_retriever_end` |
| Retriever error | When a retriever errors | `on_retriever_error` |
| Text | When arbitrary text is run | `on_text` |
| Retry | When a retry event is run | `on_retry` |
#### Callback handlers
Callback handlers can either be `sync` or `async`:
* Sync callback handlers implement the [BaseCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.BaseCallbackHandler.html) interface.
* Async callback handlers implement the [AsyncCallbackHandler](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) interface.
During run-time, LangChain configures an appropriate callback manager (e.g., [CallbackManager](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.CallbackManager.html) or [AsyncCallbackManager](https://api.python.langchain.com/en/latest/callbacks/langchain_core.callbacks.manager.AsyncCallbackManager.html)) which will be responsible for calling the appropriate method on each "registered" callback handler when the event is triggered.
#### Passing callbacks
Callbacks are available on most objects throughout the API (Models, Tools, Agents, etc.) in two different places:
- **Request time callbacks**: Passed at the time of the request in addition to the input data.
Available on all standard `Runnable` objects. These callbacks are INHERITED by all children
of the object they are defined on. For example, `chain.invoke({"number": 25}, {"callbacks": [handler]})`.
- **Constructor callbacks**: `chain = TheNameOfSomeChain(callbacks=[handler])`. These callbacks
are passed as arguments to the constructor of the object. The callbacks are scoped
only to the object they are defined on, and are **not** inherited by any children of the object.
:::warning
Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children
of the object.
:::
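As a rough sketch of what this looks like in practice (the handler below is a toy, and `chain` stands in for any runnable you have built):
```python
from langchain_core.callbacks import BaseCallbackHandler


class PrintTokenHandler(BaseCallbackHandler):
    """Toy handler that prints every new token streamed by a model."""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(token, end="")


# Request-time callbacks are passed alongside the input and inherited by child runnables.
chain.invoke({"number": 25}, {"callbacks": [PrintTokenHandler()]})
```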
If you're creating a custom chain or runnable, you need to remember to propagate request time
callbacks to any child objects.
:::important Async in Python<=3.10
Any `RunnableLambda`, `RunnableGenerator`, or `Tool` that invokes other runnables
and is running async in Python<=3.10 will have to propagate callbacks to child
objects manually. This is because LangChain cannot automatically propagate
callbacks to child objects in this case.
This is a common reason why you may fail to see events being emitted from custom
runnables or tools.
:::
## Techniques
### Function/tool calling
:::info
We use the term tool calling interchangeably with function calling. Although
function calling is sometimes meant to refer to invocations of a single function,
we treat all models as though they can return multiple tool or function calls in
each message.
:::
Tool calling allows a model to respond to a given prompt by generating output that
matches a user-defined schema. While the name implies that the model is performing
some action, this is actually not the case! The model is coming up with the
arguments to a tool, and actually running the tool (or not) is up to the user -
for example, if you want to [extract output matching some schema](/docs/tutorials/extraction)
from unstructured text, you could give the model an "extraction" tool that takes
parameters matching the desired schema, then treat the generated output as your final
result.
A tool call includes a name, arguments dict, and an optional identifier. The
arguments dict is structured `{argument_name: argument_value}`.
Many LLM providers, including [Anthropic](https://www.anthropic.com/),
[Cohere](https://cohere.com/), [Google](https://cloud.google.com/vertex-ai),
[Mistral](https://mistral.ai/), [OpenAI](https://openai.com/), and others,
support variants of a tool calling feature. These features typically allow requests
to the LLM to include available tools and their schemas, and for responses to include
calls to these tools. For instance, given a search engine tool, an LLM might handle a
query by first issuing a call to the search engine. The system calling the LLM can
receive the tool call, execute it, and return the output to the LLM to inform its
response. LangChain includes a suite of [built-in tools](/docs/integrations/tools/)
and supports several methods for defining your own [custom tools](/docs/how_to/custom_tools).
LangChain provides a standardized interface for tool calling that is consistent across different models.
The standard interface consists of:
* `ChatModel.bind_tools()`: a method for specifying which tools are available for a model to call.
* `AIMessage.tool_calls`: an attribute on the `AIMessage` returned from the model for accessing the tool calls requested by the model.
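For example, a hedged sketch of this interface with a toy `multiply` tool (assuming `langchain-openai` is installed and an API key is configured):
```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI


@tool
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm_with_tools = ChatOpenAI().bind_tools([multiply])

ai_msg = llm_with_tools.invoke("What is 3 times 12?")
ai_msg.tool_calls
# -> [{'name': 'multiply', 'args': {'a': 3, 'b': 12}, 'id': '...'}]
```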
There are two main use cases for function/tool calling:
- [How to return structured data from an LLM](/docs/how_to/structured_output/)
- [How to use a model to call tools](/docs/how_to/tool_calling/)
### Retrieval
LangChain provides several advanced retrieval types. A full list is below, along with the following information:
**Name**: Name of the retrieval algorithm.
**Index Type**: Which index type (if any) this relies on.
**Uses an LLM**: Whether this retrieval method uses an LLM.
**When to Use**: Our commentary on when you should consider using this retrieval method.
**Description**: Description of what this retrieval algorithm is doing.
| Name | Index Type | Uses an LLM | When to Use | Description |
|---------------------------|------------------------------|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Vectorstore](/docs/how_to/vectorstore_retriever/) | Vectorstore | No | If you are just getting started and looking for something quick and easy. | This is the simplest method and the one that is easiest to get started with. It involves creating embeddings for each piece of text. |
| [ParentDocument](/docs/how_to/parent_document_retriever/) | Vectorstore + Document Store | No | If your pages have lots of smaller pieces of distinct information that are best indexed by themselves, but best retrieved all together. | This involves indexing multiple chunks for each document. Then you find the chunks that are most similar in embedding space, but you retrieve the whole parent document and return that (rather than individual chunks). |
| [Multi Vector](/docs/how_to/multi_vector/) | Vectorstore + Document Store | Sometimes during indexing | If you are able to extract information from documents that you think is more relevant to index than the text itself. | This involves creating multiple vectors for each document. Each vector could be created in a myriad of ways - examples include summaries of the text and hypothetical questions. |
| [Self Query](/docs/how_to/self_query/) | Vectorstore | Yes | If users are asking questions that are better answered by fetching documents based on metadata rather than similarity with the text. | This uses an LLM to transform user input into two things: (1) a string to look up semantically, (2) a metadata filter to go along with it. This is useful because oftentimes questions are about the METADATA of documents (not the content itself). |
| [Contextual Compression](/docs/how_to/contextual_compression/) | Any | Sometimes | If you are finding that your retrieved documents contain too much irrelevant information and are distracting the LLM. | This puts a post-processing step on top of another retriever and extracts only the most relevant information from retrieved documents. This can be done with embeddings or an LLM. |
| [Time-Weighted Vectorstore](/docs/how_to/time_weighted_vectorstore/) | Vectorstore | No | If you have timestamps associated with your documents, and you want to retrieve the most recent ones | This fetches documents based on a combination of semantic similarity (as in normal vector retrieval) and recency (looking at timestamps of indexed documents) |
| [Multi-Query Retriever](/docs/how_to/MultiQueryRetriever/) | Any | Yes | If users are asking questions that are complex and require multiple pieces of distinct information to respond | This uses an LLM to generate multiple queries from the original one. This is useful when the original query needs pieces of information about multiple topics to be properly answered. By generating multiple queries, we can then fetch documents for each of them. |
| [Ensemble](/docs/how_to/ensemble_retriever/) | Any | No | If you have multiple retrieval methods and want to try combining them. | This fetches documents from multiple retrievers and then combines them. |
### Text splitting
LangChain offers many different types of `text splitters`.
These all live in the `langchain-text-splitters` package.
Table columns:
- **Name**: Name of the text splitter
- **Classes**: Classes that implement this text splitter
- **Splits On**: How this text splitter splits text
- **Adds Metadata**: Whether or not this text splitter adds metadata about where each chunk came from.
- **Description**: Description of the splitter, including recommendation on when to use it.
| Name | Classes | Splits On | Adds Metadata | Description |
|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------|---------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Recursive | [RecursiveCharacterTextSplitter](/docs/how_to/recursive_text_splitter/), [RecursiveJsonSplitter](/docs/how_to/recursive_json_splitter/) | A list of user defined characters | | Recursively splits text. This splitting is trying to keep related pieces of text next to each other. This is the `recommended way` to start splitting text. |
| HTML | [HTMLHeaderTextSplitter](/docs/how_to/HTML_header_metadata_splitter/), [HTMLSectionSplitter](/docs/how_to/HTML_section_aware_splitter/) | HTML specific characters | ✅ | Splits text based on HTML-specific characters. Notably, this adds in relevant information about where that chunk came from (based on the HTML) |
| Markdown | [MarkdownHeaderTextSplitter](/docs/how_to/markdown_header_metadata_splitter/), | Markdown specific characters | ✅ | Splits text based on Markdown-specific characters. Notably, this adds in relevant information about where that chunk came from (based on the Markdown) |
| Code | [many languages](/docs/how_to/code_splitter/) | Code (Python, JS) specific characters | | Splits text based on characters specific to coding languages. 15 different languages are available to choose from. |
| Token | [many classes](/docs/how_to/split_by_token/) | Tokens | | Splits text on tokens. There exist a few different ways to measure tokens. |
| Character | [CharacterTextSplitter](/docs/how_to/character_text_splitter/) | A user defined character | | Splits text based on a user defined character. One of the simpler methods. |
| Semantic Chunker (Experimental) | [SemanticChunker](/docs/how_to/semantic-chunker/) | Sentences | | First splits on sentences. Then combines ones next to each other if they are semantically similar enough. Taken from [Greg Kamradt](https://github.com/FullStackRetrieval-com/RetrievalTutorials/blob/main/tutorials/LevelsOfTextSplitting/5_Levels_Of_Text_Splitting.ipynb) |
| Integration: AI21 Semantic | [AI21SemanticTextSplitter](/docs/integrations/document_transformers/ai21_semantic_text_splitter/) |  | ✅ | Identifies distinct topics that form coherent pieces of text and splits along those. |

View File

@@ -98,7 +98,7 @@ To run unit tests in Docker:
make docker_tests
```
There are also [integration tests and code-coverage](./testing) available.
There are also [integration tests and code-coverage](/docs/contributing/testing/) available.
### Only develop langchain_core or langchain_experimental

View File

@@ -1,7 +1,4 @@
---
sidebar_position: 3
---
# Contribute Documentation
# Technical logistics
LangChain documentation consists of two components:
@@ -74,6 +71,8 @@ make docs_clean
make api_docs_clean
```
Next, you can build the documentation as outlined below:
```bash
@@ -81,6 +80,18 @@ make docs_build
make api_docs_build
```
:::tip
The `make api_docs_build` command takes a long time. If you're making cosmetic changes to the API docs and want to see how they look, use:
```bash
make api_docs_quick_preview
```
which will just build a small subset of the API reference.
:::
Finally, run the link checker to ensure all links are valid:
```bash

View File

@@ -0,0 +1,35 @@
---
sidebar_class_name: hidden
---
# Contributor how-to guides
This section contains guides for contributors to the project. If you're looking to contribute to the project, this is a good place to start.
This content is heavily inspired by scikit-learn's [contributing guide](https://scikit-learn.org/dev/developers/contributing.html).
## Getting started
- How to: create a minimal reproducible example
- How to: find a package's source code
- How to: set up the codebase
- How to: lint and format documentation and code
- [How to: run tests](./testing.mdx): Learn how to run tests in all of our packages
- How to: open a pull request
## Contributing documentation
- How to: build the documentation locally
- How to: edit markdown (.md, .mdx) documentation
- How to: use code tabs in markdown
- How to: edit Jupyter notebook (.ipynb) documentation
- How to: add images to documentation
## Contributing code
- How to: find a bug to fix
- How to:
## Old stuff
- [Documentation](./documentation.mdx): Learn how to contribute to the documentation.
- [Code](./code.mdx): Learn how to develop in the LangChain codebase.
- [Integrations](./integrations.mdx): Learn how to contribute to third-party integrations to the LangChain ecosystem.

View File

@@ -3,7 +3,7 @@ sidebar_position: 5
---
# Contribute Integrations
To begin, make sure you have all the dependencies outlined in guide on [Contributing Code](./code).
To begin, make sure you have all the dependencies outlined in guide on [Contributing Code](/docs/contributing/code/).
There are a few different places you can contribute integrations for LangChain:
@@ -133,7 +133,7 @@ By default, this will include stubs for a Chat Model, an LLM, and/or a Vector St
Some basic tests are presented in the `tests/` directory. You should add more tests to cover your package's functionality.
For information on running and implementing tests, see the [Testing guide](./testing).
For information on running and implementing tests, see the [Testing guide](/docs/contributing/testing/).
### Write documentation
@@ -190,12 +190,9 @@ Maintainer steps (Contributors should **not** do these):
## Partner package in external repo
If you are creating a partner package in an external repo, you should follow the same steps as above,
but you will need to set up your own CI/CD and package management.
Partner packages in external repos must be coordinated between the LangChain team and
the partner organization to ensure that they are maintained and updated.
Name your package as `langchain-{partner}-{integration}`.
Still, you have to create the `libs/partners/{partner}-{integration}` folder in the `LangChain` monorepo
and add a `README.md` file with a link to the external repo.
See this [example](https://github.com/langchain-ai/langchain/tree/master/libs/partners/google-genai).
This allows keeping track of all the partner packages in the `LangChain` documentation.
If you're interested in creating a partner package in an external repo, please start
with one in the LangChain repo, and then reach out to the LangChain team to discuss
how to move it to an external repo.

View File

@@ -0,0 +1,19 @@
---
sidebar_label: Minimal Reproducible Examples
sidebar_position: 0
---
# How to craft a minimal reproducible example for LangChain
:::info
This guide is a LangChain-specific adaptation of sklearn's [minimal reproducer guide](https://scikit-learn.org/dev/developers/minimal_reproducer.html).
:::
Whether submitting a bug report, designing a suite of tests, or simply posting a question in the discussions, being able to craft minimal, reproducible examples (or minimal, workable examples) is the key to communicating effectively and efficiently with the community.
There are very good guidelines on the internet such as this [StackOverflow document](https://stackoverflow.com/help/minimal-reproducible-example) or [this blogpost by Matthew Rocklin](https://matthewrocklin.com/minimal-bug-reports) on crafting Minimal Complete Verifiable Examples (referred below as MCVE). Our goal is not to be repetitive with those references but rather to provide a step-by-step guide on how to narrow down a bug until you have reached the shortest possible code to reproduce it.
The first step before submitting a bug report to LangChain is to read the Issue template. It is already quite informative about the information you will be asked to provide.
## Good practices

View File

@@ -1,54 +1,10 @@
---
sidebar_position: 0
---
# Welcome Contributors
# Contributing guide
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.
LangChain is an open-source project written by an amazing community of developers and maintained by the team at [LangChain AI](https://www.langchain.com).
## 🗺️ Guidelines
There are many ways to contribute to LangChain, and please read this guide to learn how you can most effectively contribute to the project!
### 👩‍💻 Ways to contribute
For existing contributors, we have a [Contributor Cheat Sheet](./reference/cheat_sheet.mdx) as quick reference.
There are many ways to contribute to LangChain. Here are some common ways people contribute:
## How to contribute
- [**Documentation**](./documentation.mdx): Help improve our docs, including this one!
- [**Code**](./code.mdx): Help us write code, fix bugs, or improve our infrastructure.
- [**Integrations**](integrations.mdx): Help us integrate with your favorite vendors and tools.
- [**Discussions**](https://github.com/langchain-ai/langchain/discussions): Help answer usage questions and discuss issues with users.
### 🚩 GitHub Issues
Our [issues](https://github.com/langchain-ai/langchain/issues) page is kept up to date with bugs, improvements, and feature requests.
There is a taxonomy of labels to help with sorting and discovery of issues of interest. Please use these to help organize issues.
If you start working on an issue, please assign it to yourself.
If you are adding an issue, please try to keep it focused on a single, modular bug/improvement/feature.
If two issues are related, or blocking, please link them rather than combining them.
We will try to keep these issues as up-to-date as possible, though
with the rapid rate of development in this field some may get out of date.
If you notice this happening, please let us know.
### 💭 GitHub Discussions
We have a [discussions](https://github.com/langchain-ai/langchain/discussions) page where users can ask usage questions, discuss design decisions, and propose new features.
If you are able to help answer questions, please do so! This will allow the maintainers to spend more time focused on development and bug fixing.
### 🙋 Getting Help
Our goal is to have the simplest developer setup possible. Should you experience any difficulty getting setup, please
contact a maintainer! Not only do we want to help get you unblocked, but we also want to make sure that the process is
smooth for future contributors.
In a similar vein, we do enforce certain linting, formatting, and documentation standards in the codebase.
If you are finding these difficult (or even just annoying) to work with, feel free to contact a maintainer for help -
we do not want these to get in the way of getting good code into the codebase.
# 🌟 Recognition
If your contribution has made its way into a release, we will want to give you credit on Twitter (only if you want though)!
If you have a Twitter account you would like us to mention, please let us know in the PR or through another means.

View File

@@ -0,0 +1,16 @@
---
sidebar_label: "Cheat sheet"
sidebar_position: 1
---
# Contributing cheat sheet
## Pull request lifecycle
1. Open a **draft** pull request
2. Merge in latest master ("Update branch")
3. Confirm that the CI checks pass
4. Mark "Ready to review" to convert from draft to full PR
5. Wait for a review.
6. Address review comments within 72 hours. In the event of no activity for 72 hours, the PR will be closed (but you can reopen).
##

View File

@@ -0,0 +1,138 @@
---
sidebar_label: "Style guide"
---
# LangChain Documentation Style Guide
## Introduction
As LangChain continues to grow, the surface area of documentation required to cover it continues to grow too.
This page provides guidelines for anyone writing documentation for LangChain, as well as some of our philosophies around
organization and structure.
## Philosophy
LangChain's documentation aspires to follow the [Diataxis framework](https://diataxis.fr).
Under this framework, all documentation falls under one of four categories:
- **Tutorials**: Lessons that take the reader by the hand through a series of conceptual steps to complete a project.
- An example of this is our [LCEL streaming guide](/docs/how_to/streaming).
- Our guides on [custom components](/docs/how_to/custom_chat_model) is another one.
- **How-to guides**: Guides that take the reader through the steps required to solve a real-world problem.
- The clearest examples of this are our [Use case](/docs/how_to#use-cases) quickstart pages.
- **Reference**: Technical descriptions of the machinery and how to operate it.
- Our [Runnable interface](/docs/concepts#interface) page is an example of this.
- The [API reference pages](https://api.python.langchain.com/) are another.
- **Explanation**: Explanations that clarify and illuminate a particular topic.
- The [LCEL primitives pages](/docs/how_to/sequence) are an example of this.
Each category serves a distinct purpose and requires a specific approach to writing and structuring the content.
## Taxonomy
Keeping the above in mind, we have sorted LangChain's docs into categories. It is helpful to think in these terms
when contributing new documentation:
### Getting started
The [getting started section](/docs/introduction) includes a high-level introduction to LangChain, a quickstart that
tours LangChain's various features, and logistical instructions around installation and project setup.
It contains elements of **How-to guides** and **Explanations**.
### Use cases
[Use cases](/docs/how_to#use-cases) are guides that are meant to show how to use LangChain to accomplish a specific task (RAG, information extraction, etc.).
The quickstarts should be good entrypoints for first-time LangChain developers who prefer to learn by getting something practical prototyped,
then taking the pieces apart retrospectively. These should mirror what LangChain is good at.
The quickstart pages here should fit the **How-to guide** category, with the other pages intended to be **Explanations** of more
in-depth concepts and strategies that accompany the main happy paths.
:::note
The below sections are listed roughly in order of increasing level of abstraction.
:::
### Expression Language
[LangChain Expression Language (LCEL)](/docs/concepts#langchain-expression-language) is the fundamental way that most LangChain components fit together, and this section is designed to teach
developers how to use it to build with LangChain's primitives effectively.
This section should contains **Tutorials** that teach how to stream and use LCEL primitives for more abstract tasks, **Explanations** of specific behaviors,
and some **References** for how to use different methods in the Runnable interface.
### Components
The [components section](/docs/concepts) covers concepts one level of abstraction higher than LCEL.
Abstract base classes like `BaseChatModel` and `BaseRetriever` should be covered here, as well as core implementations of these base classes,
such as `ChatPromptTemplate` and `RecursiveCharacterTextSplitter`. Customization guides belong here too.
This section should contain mostly conceptual **Tutorials**, **References**, and **Explanations** of the components they cover.
:::note
As a general rule of thumb, everything covered in the `Expression Language` and `Components` sections (with the exception of the `Composition` section of components) should
cover only components that exist in `langchain_core`.
:::
### Integrations
The [integrations](/docs/integrations/platforms/) are specific implementations of components. These often involve third-party APIs and services.
If this is the case, as a general rule, these are maintained by the third-party partner.
This section should contain mostly **Explanations** and **References**, though the actual content here is more flexible than other sections and more at the
discretion of the third-party provider.
:::note
Concepts covered in `Integrations` should generally exist in `langchain_community` or specific partner packages.
:::
### Guides and Ecosystem
The [Guides](/docs/tutorials) and [Ecosystem](https://docs.smith.langchain.com/) sections should contain guides that address higher-level problems than the sections above.
This includes, but is not limited to, considerations around productionization and development workflows.
These should contain mostly **How-to guides**, **Explanations**, and **Tutorials**.
### API references
LangChain's API references. Should act as **References** (as the name implies) with some **Explanation**-focused content as well.
## Sample developer journey
We have set up our docs to assist a new developer to LangChain. Let's walk through the intended path:
- The developer lands on https://python.langchain.com, and reads through the introduction and the diagram.
- If they are just curious, they may be drawn to the [Quickstart](/docs/tutorials/llm_chain) to get a high-level tour of what LangChain contains.
- If they have a specific task in mind that they want to accomplish, they will be drawn to the Use-Case section. The use-case should provide a good, concrete hook that shows the value LangChain can provide them and be a good entrypoint to the framework.
- They can then move to learn more about the fundamentals of LangChain through the Expression Language sections.
- Next, they can learn about LangChain's various components and integrations.
- Finally, they can get additional knowledge through the Guides.
This is only an ideal of course - sections will inevitably reference lower or higher-level concepts that are documented in other sections.
## Guidelines
Here are some other guidelines you should think about when writing and organizing documentation.
### Linking to other sections
Because sections of the docs do not exist in a vacuum, it is important to link to other sections as often as possible
to allow a developer to learn more about an unfamiliar topic inline.
This includes linking to the API references as well as conceptual sections!
### Conciseness
In general, take a less-is-more approach. If a section with a good explanation of a concept already exists, you should link to it rather than
re-explain it, unless the concept you are documenting presents some new wrinkle.
Be concise, including in code samples.
### General style
- Use active voice and present tense whenever possible.
- Use examples and code snippets to illustrate concepts and usage.
- Use appropriate header levels (`#`, `##`, `###`, etc.) to organize the content hierarchically.
- Use bullet points and numbered lists to break down information into easily digestible chunks.
- Use tables (especially for **Reference** sections) and diagrams often to present information visually.
- Include the table of contents for longer documentation pages to help readers navigate the content, but hide it for shorter pages.

View File

@@ -0,0 +1,11 @@
---
sidebar_class_name: hidden
---
# Contributor reference
This section contains reference material for contributors to the project.
- [Cheat sheet](./cheat_sheet.mdx): A quick reference for contributors.
- [Repo structure](./repo_structure.mdx): Learn about the structure of the LangChain repository.
- [Documentation style guide](./documentation_style_guide.mdx): Learn how to write documentation for LangChain.
- [FAQ](./faq.mdx): Frequently asked questions about contributing to LangChain.

View File

@@ -6,7 +6,7 @@ sidebar_position: 0.5
If you plan on contributing to LangChain code or documentation, it can be useful
to understand the high level structure of the repository.
LangChain is organized as a [monorep](https://en.wikipedia.org/wiki/Monorepo) that contains multiple packages.
LangChain is organized as a [monorepo](https://en.wikipedia.org/wiki/Monorepo) that contains multiple packages.
Here's the structure visualized as a tree:
@@ -41,7 +41,7 @@ There are other files in the root directory level, but their presence should be
The `/docs` directory contains the content for the documentation that is shown
at https://python.langchain.com/ and the associated API Reference https://api.python.langchain.com/en/latest/langchain_api_reference.html.
See the [documentation](./documentation) guidelines to learn how to contribute to the documentation.
See the [documentation](/docs/contributing/documentation/style_guide) guidelines to learn how to contribute to the documentation.
## Code
@@ -49,6 +49,6 @@ The `/libs` directory contains the code for the LangChain packages.
To learn more about how to contribute code see the following guidelines:
- [Code](./code.mdx) Learn how to develop in the LangChain codebase.
- [Integrations](./integrations.mdx) to learn how to contribute to third-party integrations to langchain-community or to start a new partner package.
- [Testing](./testing.mdx) guidelines to learn how to write tests for the packages.
- [Code](../how_to/code.mdx) Learn how to develop in the LangChain codebase.
- [Integrations](../how_to/integrations.mdx) to learn how to contribute to third-party integrations to langchain-community or to start a new partner package.
- [Testing](../how_to/testing.mdx) guidelines to learn how to write tests for the packages.

View File

@@ -1,205 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e89f490d",
"metadata": {},
"source": [
"# Agents\n",
"\n",
"You can pass a Runnable into an agent. Make sure you have `langchainhub` installed: `pip install langchainhub`"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "af4381de",
"metadata": {},
"outputs": [],
"source": [
"from langchain import hub\n",
"from langchain.agents import AgentExecutor, tool\n",
"from langchain.agents.output_parsers import XMLAgentOutputParser\n",
"from langchain_community.chat_models import ChatAnthropic"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "24cc8134",
"metadata": {},
"outputs": [],
"source": [
"model = ChatAnthropic(model=\"claude-2\")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "67c0b0e4",
"metadata": {},
"outputs": [],
"source": [
"@tool\n",
"def search(query: str) -> str:\n",
" \"\"\"Search things about current events.\"\"\"\n",
" return \"32 degrees\""
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7203b101",
"metadata": {},
"outputs": [],
"source": [
"tool_list = [search]"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b68e756d",
"metadata": {},
"outputs": [],
"source": [
"# Get the prompt to use - you can modify this!\n",
"prompt = hub.pull(\"hwchase17/xml-agent-convo\")"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "61ab3e9a",
"metadata": {},
"outputs": [],
"source": [
"# Logic for going from intermediate steps to a string to pass into model\n",
"# This is pretty tied to the prompt\n",
"def convert_intermediate_steps(intermediate_steps):\n",
" log = \"\"\n",
" for action, observation in intermediate_steps:\n",
" log += (\n",
" f\"<tool>{action.tool}</tool><tool_input>{action.tool_input}\"\n",
" f\"</tool_input><observation>{observation}</observation>\"\n",
" )\n",
" return log\n",
"\n",
"\n",
"# Logic for converting tools to string to go in prompt\n",
"def convert_tools(tools):\n",
" return \"\\n\".join([f\"{tool.name}: {tool.description}\" for tool in tools])"
]
},
{
"cell_type": "markdown",
"id": "260f5988",
"metadata": {},
"source": [
"Building an agent from a runnable usually involves a few things:\n",
"\n",
"1. Data processing for the intermediate steps. These need to be represented in a way that the language model can recognize them. This should be pretty tightly coupled to the instructions in the prompt\n",
"\n",
"2. The prompt itself\n",
"\n",
"3. The model, complete with stop tokens if needed\n",
"\n",
"4. The output parser - should be in sync with how the prompt specifies things to be formatted."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e92f1d6f",
"metadata": {},
"outputs": [],
"source": [
"agent = (\n",
" {\n",
" \"input\": lambda x: x[\"input\"],\n",
" \"agent_scratchpad\": lambda x: convert_intermediate_steps(\n",
" x[\"intermediate_steps\"]\n",
" ),\n",
" }\n",
" | prompt.partial(tools=convert_tools(tool_list))\n",
" | model.bind(stop=[\"</tool_input>\", \"</final_answer>\"])\n",
" | XMLAgentOutputParser()\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "6ce6ec7a",
"metadata": {},
"outputs": [],
"source": [
"agent_executor = AgentExecutor(agent=agent, tools=tool_list, verbose=True)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "fb5cb2e3",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"\n",
"\u001b[1m> Entering new AgentExecutor chain...\u001b[0m\n",
"\u001b[32;1m\u001b[1;3m <tool>search</tool><tool_input>weather in New York\u001b[0m\u001b[36;1m\u001b[1;3m32 degrees\u001b[0m\u001b[32;1m\u001b[1;3m <tool>search</tool>\n",
"<tool_input>weather in New York\u001b[0m\u001b[36;1m\u001b[1;3m32 degrees\u001b[0m\u001b[32;1m\u001b[1;3m <final_answer>The weather in New York is 32 degrees\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
},
{
"data": {
"text/plain": [
"{'input': 'whats the weather in New york?',\n",
" 'output': 'The weather in New York is 32 degrees'}"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"agent_executor.invoke({\"input\": \"whats the weather in New york?\"})"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "bce86dd8",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
