Compare commits

...

307 Commits

Author SHA1 Message Date
Harrison Chase
7414e9d196 bump version to 206 (#6465) 2023-06-19 23:05:09 -07:00
Hubert
22601b0b63 fix neo4j schema query (#6381)
Fix issue #6380 


#### Who can review?

Tag maintainers/contributors who might be interested:
@hwchase17


---------

Co-authored-by: HubertKl <HubertKl>
2023-06-19 22:48:35 -07:00
Gavin
b0d80c4b3e Update serpapi.py Support baidu list type answer_box (#6386)
Support baidu list type answer_box

According to [this document](https://serpapi.com/baidu-answer-box), the
answer_box attribute returned by the Baidu interface is a list containing a
single object, which causes an error when the current code is executed.

So when answer_box is a list, we reset res["answer_box"] so that the
code can execute successfully.
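
A minimal sketch of that normalization, using a hypothetical `res` dict standing in for the SerpAPI response (not the library's exact code):

```python
# Hypothetical response: Baidu returns answer_box as a one-element list.
res = {"answer_box": [{"answer": "42"}]}

answer_box = res.get("answer_box")
if isinstance(answer_box, list) and answer_box:
    # Unwrap the single object so the rest of the code can treat it as a dict.
    res["answer_box"] = answer_box[0]

print(res["answer_box"]["answer"])  # -> 42
```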
2023-06-19 22:48:18 -07:00
Bryce Drennan
384fa43fc3 fix: llm caching for replicate (#6396)
Caching wasn't accounting for which model was used, so a result from the
first executed model would be returned for the same prompt on a different
model.

This was because `Replicate._identifying_params` did not include the
`model` parameter.
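
For illustration only, here is a simplified picture of why the cache collided: the cache key is derived from the prompt plus the LLM's identifying parameters, so omitting `model` makes two different models look identical (the function below is a made-up stand-in, not the actual LangChain cache code):

```python
def cache_key(prompt: str, identifying_params: dict) -> tuple:
    # Simplified stand-in for the "prompt + llm params" key used by the LLM cache.
    return (prompt, tuple(sorted(identifying_params.items())))

# Before the fix: `model` missing from the params -> identical keys, collision.
print(cache_key("hi", {}) == cache_key("hi", {}))  # True

# After the fix: including `model` keeps cached results separate per model.
print(cache_key("hi", {"model": "replicate/model-a"}) ==
      cache_key("hi", {"model": "replicate/model-b"}))  # False
```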

FYI
- @cbh123
- @hwchase17
- @agola11
2023-06-19 22:47:59 -07:00
Zeeland
8a604b93ab feat: use latest duckduckgo_search API to call (#6409)
# Provide the latest duckduckgo_search API

This commit touches two files related to DuckDuckGo query operations and
upgrades the DuckDuckGo module to version 3.8.3. In the
duckduckgo_search.py file, a DDGS() class instance replaces the previous
ddg() function, and the time parameter in the get_snippets() and results()
methods is renamed to "timelimit" to accommodate recent changes. In the
pyproject.toml file, the duckduckgo-search module is upgraded to version 3.8.3.

[duckduckgo_search readme
attention](https://github.com/deedy5/duckduckgo_search): Versions before
v2.9.4 no longer work as of May 12, 2023
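
A rough usage sketch of the new-style call, assuming duckduckgo_search >= 3.8.3 (argument names taken from the description above):

```python
from itertools import islice
from duckduckgo_search import DDGS

with DDGS() as ddgs:
    # `timelimit` replaces the old `time` argument (e.g. "d" for the past day).
    results = list(islice(ddgs.text("langchain", timelimit="d"), 5))

for r in results:
    print(r.get("title"), r.get("href"))
```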

## Who can review?

@vowelparrot

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-19 22:47:39 -07:00
Harrison Chase
9eec7c3206 Harrison/unstructured page number (#6464)
Co-authored-by: Reza Sanaie <reza@sanaie.ca>
2023-06-19 22:31:43 -07:00
Alonso Silva Allende
b82ddf9cfb Improve error message (#6275)
When trying to use OpenAI models like 'text-davinci-002' or
'text-davinci-003', the agent doesn't work and the message is 'Only
supported with OpenAI models.' The error message should be 'Only
supported with ChatOpenAI models.'

My Twitter handle is @alonsosilva

#### Who can review?

Tag maintainers/contributors who might be interested:
@hwchase17


Co-authored-by: SILVA Alonso <alonso.silva@nokia-bell-labs.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-19 22:21:01 -07:00
zengbo
7e5f5ebf86 Fix the issue where ANTHROPIC_API_URL set in environment is not takin… (#6400)
I apologize for the error: the 'ANTHROPIC_API_URL' environment variable
doesn't take effect if the 'anthropic_api_url' parameter has a default
value.
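
A hedged sketch of the intended behaviour (class and parameter names assumed from the related #6221 change; requires the `anthropic` package and valid credentials):

```python
import os
from langchain.llms import Anthropic

# With this fix, the environment variable should take effect when
# anthropic_api_url is not passed explicitly.
os.environ["ANTHROPIC_API_URL"] = "https://my-anthropic-proxy.example.com"
llm = Anthropic(anthropic_api_key="sk-ant-...")  # picks up the custom API URL
```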

#### Who can review?
  Models
  - @hwchase17
  - @agola11
2023-06-19 22:20:36 -07:00
Grayson Adkins
9f5f747dc3 Fix broken links in autonomous agents docs (#6398)
Fixes broken links here:  
https://python.langchain.com/docs/use_cases/autonomous_agents.html

#### Who can review?

Tag maintainers/contributors who might be interested:

  Agents / Tools / Toolkits
  - @hwchase17
2023-06-19 22:20:00 -07:00
volodymyr-memsql
d2e9b621ab Update SingleStoreDB vectorstore (#6423)
1. Introduced new distance strategies support: **DOT_PRODUCT** and
**EUCLIDEAN_DISTANCE** for enhanced flexibility.
2. Implemented a feature to filter results based on metadata fields.
3. Incorporated connection attributes specifying "langchain python sdk"
usage for enhanced traceability and debugging.
4. Expanded the suite of integration tests for improved code
reliability.
5. Updated the existing notebook with the usage example

@dev2049

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-19 22:08:58 -07:00
Avinash Raj
6efd5fa2b9 Fix for #6431 - chatprompt template with partial variables giving validation error (#6456)
W.r.t. recent changes, ChatPromptTemplate stopped accepting partial
variables. This PR should fix that issue.


Fixes #6431




#### Who can review?



  @hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-19 22:08:15 -07:00
Harrison Chase
02c0a1e77e Harrison/functions in retrieval (#6463) 2023-06-19 22:07:58 -07:00
Swapnil Sharma
dc4ffa8d9b Incorrect argument count handling (#5543)
Throwing ToolException when incorrect arguments are passed to tools so
that the agent can course-correct.

# Incorrect argument count handling

I was facing an error where the agent passed incorrect arguments to
tools. As per the ongoing discussions, I started throwing
ToolException to allow the model to course-correct.
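
A minimal sketch of the pattern, assuming `ToolException` and the `handle_tool_error` flag are available on LangChain tools at this point; the tool itself is illustrative:

```python
from langchain.tools import Tool
from langchain.tools.base import ToolException

def lookup(query: str) -> str:
    if not query:
        # Raising ToolException lets the agent see the message and retry,
        # instead of the whole run crashing on bad arguments.
        raise ToolException("No arguments received; pass a single search string.")
    return f"results for {query}"

lookup_tool = Tool(
    name="lookup",
    func=lookup,
    description="Look something up. Expects a single search string.",
    handle_tool_error=True,  # surface ToolException messages back to the agent
)
```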


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-19 22:06:20 -07:00
kYLe
3a58c4c3a0 Fixed a link typo /-/route -> /-/routes. and change endpoint format (#6186)

Fixes a link typo from `/-/route` to `/-/routes`, and changes the endpoint
format from `f"{self.anyscale_service_url}/{self.anyscale_service_route}"` to
`f"{self.anyscale_service_url}{self.anyscale_service_route}"`.
Also adds documentation about the format of the endpoint.



---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-19 22:05:54 -07:00
Leonid Ganeline
03b16ed2b1 docs retrievers fixes (#6299)
Fixed several inconsistencies:
- file names and notebook titles should be similar; otherwise the ToC on the
[retrievers
page](https://python.langchain.com/en/latest/modules/indexes/retrievers.html)
and the left ToC tab differ. For example, `Self-querying with Chroma` is
currently not sorted alphabetically because its file is named
`chroma_self_query.ipynb`
- `Stringing compressors and document transformers...` demoted from `#`
to `##`; otherwise it appears in the ToC.
- several formatting problems

#### Who can review?

@hwchase17 
@dev2049

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-19 22:04:35 -07:00
M. Tolga Cangöz
bccee85c8f Update introduction.mdx (#6425)
Fix typo
2023-06-19 22:04:09 -07:00
Nir Gazit
95b77a5215 Fix Custom LLM Agent example (#6429)

The `CustomOutputParser` needs to throw `OutputParserException` when it
fails to parse the response from the agent, so that the executor can
[catch it and
retry](be9371ca8f/langchain/agents/agent.py (L767))
when `handle_parsing_errors=True`.
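
A minimal sketch of the fix pattern, simplified from the docs example (only the failure branch matters here):

```python
from langchain.schema import OutputParserException

def parse_agent_output(llm_output: str) -> str:
    # Raising OutputParserException (instead of e.g. ValueError) lets
    # AgentExecutor catch it and retry when handle_parsing_errors=True.
    if "Final Answer:" not in llm_output and "Action:" not in llm_output:
        raise OutputParserException(f"Could not parse LLM output: `{llm_output}`")
    return llm_output
```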


#### Who can review?

Tag maintainers/contributors who might be interested: @hwchase17

2023-06-19 22:03:58 -07:00
ykerus
b697bbb5b5 Remove backticks without clear purpose from docs (#6442)
#### Description

- Removed two backticks surrounding the phrase "chat messages as"
- This phrase stood out among other formatted words/phrases such as
`prompt`, `role`, `PromptTemplate`, etc., which all seem to have a clear
function.
- `chat messages as`, formatted as such, confused me while reading,
leading me to believe the backticks were misplaced.

#### Who can review?

@hwchase17
2023-06-19 22:03:38 -07:00
Dhruvil Shah
9494623869 Update web_base.ipynb (#6430)
Minor newline-character fix in the markdown.

Also, this option is not yet in the latest version of LangChain
(0.0.190) from Conda. Maybe in the next update.

@eyurtsev
@hwchase17
2023-06-19 21:43:35 -07:00
Wenchen Li
76ae9da9db Add _similarity_search_with_relevance_scores in Pinecone (#6446)
Just so it is consistent with other `VectorStore` classes.

This is a follow-up to #6056, which also discussed the potential of
adding `similarity_search_by_vector_returning_embeddings`; we will
continue that discussion here.

potentially related: #6286 
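
A hedged usage sketch of the now-consistent API (store construction omitted; works with any `VectorStore`, including Pinecone once this lands):

```python
from langchain.vectorstores.base import VectorStore

def top_docs_with_scores(store: VectorStore, query: str, k: int = 4):
    # The public method delegates to _similarity_search_with_relevance_scores,
    # which this PR adds for Pinecone.
    return store.similarity_search_with_relevance_scores(query, k=k)
```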


#### Who can review?

Tag maintainers/contributors who might be interested: @rlancemartin 

2023-06-19 21:36:40 -07:00
Ismail Pelaseyed
d4e8e0f5ab Add example for question answering over documents with OpenAI Function Agent (#6448)
This PR adds an example of doing question answering over documents using
OpenAI Function Agents.

#### Who can review?

@hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-19 21:35:45 -07:00
Andrey Avtomonov
68a675cc68 Remove extra word in the introduction documentation (#6450)
Removed an extra word in the introduction documentation, a simple typo
2023-06-19 21:31:17 -07:00
Ankush Gola
a9246333fd fix anthropic chat model mutating input list (#6457)

Fixes: ChatAnthropic was mutating the input message list during
formatting, which isn't ideal because it could change the behavior for
other chat models when the same input list is reused.
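
An illustrative sketch of the fix pattern, not the actual ChatAnthropic code: format a copy of the caller's message list instead of appending to it in place.

```python
from typing import List

def format_messages(messages: List[str]) -> str:
    msgs = list(messages)      # copy, so the caller's list is left untouched
    msgs.append("Assistant:")  # prompt the model for its next turn
    return "\n\n".join(msgs)

history = ["Human: hi"]
prompt = format_messages(history)
print(history)  # still ["Human: hi"] -- no mutation of the shared input
```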


2023-06-19 21:30:52 -07:00
Zander Chase
bc0af67aaf Add Trajectory Eval RunEvaluator (#6449) 2023-06-19 21:11:50 -07:00
Hakan Tekgul
6a157cf8bb Update arize_callback.py (#6433)
Arize released a new Generative LLM model type; this adjusts the callback
function to the new logging.

Added Arize imports, please delete if not necessary.

Specifically, this change makes sure that the prompt and response pairs
from LangChain agents are logged into Arize as a Generative LLM model,
instead of our previous categorical model. In order to do this, the
callback function collects the necessary data and passes it into
Arize using the Python Pandas SDK.

The Arize library, specifically pandas.logger, is an additional dependency.

Notebook For Test:
https://docs.arize.com/arize/resources/integrations/langchain

Who can review?
Tag maintainers/contributors who might be interested:

@hwchase17 - project lead

Tracing / Callbacks

@agola11
2023-06-19 18:33:49 -07:00
Zander Chase
00f276d23f Run eval in eval mode (#6447)
For the `run_on_dataset` sessions
2023-06-19 18:31:38 -07:00
Harrison Chase
1300a4bc8c expose docs chains (#6453) 2023-06-19 17:18:54 -07:00
Harrison Chase
286452c7f0 remove mongo 2023-06-19 10:04:14 -07:00
David Duong
be9371ca8f Include placeholder value for all secrets, not just kwargs (#6421)
Mirror PR for https://github.com/hwchase17/langchainjs/pull/1696

Secrets passed via environment variables should be present in the
serialised chain
2023-06-19 15:41:45 +01:00
Harrison Chase
df40cd233f bump version to 205 (#6410) 2023-06-18 23:21:26 -07:00
Harrison Chase
e9c2b280db Harrison/refactor functions (#6408) 2023-06-18 23:13:42 -07:00
Harrison Chase
6a4a950a3c changes to llm chain (#6328)
- return raw and full output (but keep run shortcut method functional)
- change output parser to take in generations (good for working with
messages)
- add output parser to base class, always run (default to same as
current)

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2023-06-18 22:49:47 -07:00
Davis Chase
d3c2eab0b3 Docs nit (#6350) 2023-06-18 20:58:12 -07:00
Davis Chase
af96de6552 fix prod docs build (#6402) 2023-06-18 20:56:12 -07:00
Fei Wang
50556f3b35 support memory for functions (#6165)
Add memory support for `OpenAIFunctionsAgent` like
`StructuredChatAgent`.


#### Who can review?
 @hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-18 19:00:40 -07:00
Dhruvil Shah
b2b9ded12f Update web_base.py _fetch() method For SiteMapLoader (#6256)
A must-have for SitemapLoader to avoid SSL verification errors. Setting
'verify' to False via `sitemap_loader.requests_kwargs = {"verify": False}`
does not bypass SSL verification on some websites.

There are websites (e.g. https://researchadmin.asu.edu/sitemap.xml) where
setting "verify" to False as shown above does not work.

We need this merge to tell the session to use a connector with a specific
SSL argument:

```python
# For SiteMap SSL verification
if not self.request_kwargs['verify']:
    connector = aiohttp.TCPConnector(ssl=False)
else:
    connector = None
```

Fixes #5483 


#### Who can review?

Tag maintainers/contributors who might be interested:

@hwchase17 
@eyurtsev

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-18 18:34:18 -07:00
Harrison Chase
10bff4ecc4 Harrison/chroma fix (#6390)
Co-authored-by: Junu Moon(Fran) <francomoon7@gmail.com>
2023-06-18 18:33:26 -07:00
Harrison Chase
5c1fa3e70e Harrison/typesense fix (#6391)
Co-authored-by: Gaurav Chauhan <2796gaurav@gmail.com>
Co-authored-by: gaurav <gaurav.chauhan1@rksv.in>
2023-06-18 18:33:15 -07:00
Harrison Chase
5ccebce777 rm pandas from arize (#6392) 2023-06-18 18:33:04 -07:00
matias-biatoz
3b7c4c51d5 Added gpt-3.5-turbo 0613 16k and 16k-0613 pricing (#6287)
@agola11 

Issue
#6193 

I added the new pricing for the new models.

Also, gpt-3.5-turbo pricing is now split into "input" and "output" rates;
that split is not currently supported.
2023-06-18 18:32:20 -07:00
Ly Nguyen
1e0af59f69 - Fix pass system_message argument in new feature openai_functions_agent (#6297)
The system_message argument can't be passed; the prompt always shows the
default message "System: You are a helpful AI assistant."
```
system_message = SystemMessage(
    content="You are an AI that provides information to Human regarding documentation."
)
agent = initialize_agent(
    tools,
    llm=openai_llm_chat,
    agent=AgentType.OPENAI_FUNCTIONS,
    system_message=system_message,
    agent_kwargs={
        "system_message": system_message,
    },
    verbose=False,
)
```

#### Who can review?

Tag maintainers/contributors who might be interested:

2023-06-18 17:54:00 -07:00
georgian
e64bafed3a Fixes typo in Vectara.similarity_search (#6277)
Fixes a simple typo.

@hwchase17
@dev2049

Co-authored-by: Georgian Sarghi <georgian.sarghi@gmail.com>
2023-06-18 17:48:54 -07:00
Ted
112695e4da Iterate through filtered file types instead of all listed files (#6258)
# Iterate through filtered file types instead of all listed files

Fixes https://github.com/hwchase17/langchain/issues/6257

https://github.com/hwchase17/langchain/pull/4926 originally added the
functionality to filter by file type, storing the filtered files in
`_files`

https://github.com/hwchase17/langchain/pull/5220 removed the
functionality when adding code to filter trashed files by using the
`files` variables instead of the `_files` variable.

This PR simply adds the functionality back by using `_files` again.

#### Who can review?

@hwchase17 - project lead
@eyurtsev
2023-06-18 17:47:58 -07:00
Dhruvil Shah
ba90e3c990 Update web_base.ipynb for guiding purposes (#6248)
To bypass SSL verification errors during fetching, you can include the
`verify=False` parameter. This markdown proves useful, especially for
beginners in the field of web scraping.
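
A hedged sketch of the workaround, using the `requests_kwargs` hook quoted elsewhere in this log (the URL is illustrative):

```python
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com")
loader.requests_kwargs = {"verify": False}  # skip SSL certificate verification
docs = loader.load()
```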


Fixes #6079 

#### Who can review?

Tag maintainers/contributors who might be interested:
@hwchase17 
@eyurtsev

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-18 17:47:10 -07:00
Dhruvil Shah
92f05a67a4 Add markdown to specify important arguments (#6246)
To bypass SSL verification errors during web scraping, you can include
the ssl_verify=False parameter along with the headers parameter. This
combination of arguments proves useful, especially for beginners in the
field of web scraping.


Fixes #1829 


#### Who can review?

Tag maintainers/contributors who might be interested:
@hwchase17 @eyurtsev 

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-18 17:47:00 -07:00
ikebo
ca7a44d024 add max_context_size property in BaseOpenAI (#6239)
Hi, I made a small improvement to BaseOpenAI.

I added a max_context_size attribute to BaseOpenAI so that we can get
the max context size directly, instead of only getting the maximum token
size for a prompt through the max_tokens_for_prompt method.
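
A hedged usage sketch of the new attribute (the attribute name is taken from the description above; the API key is a placeholder):

```python
from langchain.llms import OpenAI

llm = OpenAI(model_name="text-davinci-003", openai_api_key="sk-...")
print(llm.max_context_size)               # max context window for the model
print(llm.max_tokens_for_prompt("Hello")) # the pre-existing method this complements
```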

Who can review?
@hwchase17 @agola11

I followed the [Common
Tasks](c7db9febb0/.github/CONTRIBUTING.md), and all the tests pass.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-18 17:46:35 -07:00
Jan Pawellek
3e3ed8c5c9 Fix LLM types so that they can be loaded from config dicts (#6235)
LLM configurations can be loaded from a Python dict (or JSON file
deserialized as dict) using the
[load_llm_from_config](8e1a7a8646/langchain/llms/loading.py (L12))
function.

However, the type string in the `type_to_cls_dict` lookup dict differs
from the type string defined in some LLM classes. This means that the
LLM object can be saved, but not loaded again, because the type strings
differ.
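
A hedged sketch of the round trip this fix unblocks (model and file path are illustrative):

```python
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm

llm = OpenAI(model_name="text-davinci-003", temperature=0, openai_api_key="sk-...")
llm.save("llm.json")         # writes the config, including the _llm_type string
llm2 = load_llm("llm.json")  # only works if the type string matches type_to_cls_dict
```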
2023-06-18 17:46:22 -07:00
Shu
46782ad79b Fixed an unhandled error that was raised when DynamoDB did not have any chat history. (#6141)

The current version of chat history with DynamoDB doesn't correctly handle
the case when a table has no chat history. This change fixes that error
handling.
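
A hedged sketch, assuming an existing DynamoDB table named "SessionTable"; with this fix, a session with no stored history should yield an empty list instead of raising:

```python
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(
    table_name="SessionTable", session_id="brand-new-session"
)
print(history.messages)  # expected: [] when no chat history exists yet
```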


Fixes https://github.com/hwchase17/langchain/issues/6088

#### Who can review?

Tag maintainers/contributors who might be interested:

@hwchase17

2023-06-18 17:39:19 -07:00
Cameron Vetter
2286204354 Correct AzureSearch Vector Store not applying search_kwargs when searching (#6132)
Fixes #6131 

Simply passes kwargs forward from similarity_search to helper functions
so that search_kwargs are applied to search as originally intended. See
bug for repro steps.

#### Who can review?
  @hwchase17
  @dev2049 

Twitter: poshporcupine
2023-06-18 17:39:06 -07:00
Pierre Dulac
395a2a3724 Fix typo in the CAI critique prompt (#6123)
Very small typo in the Constitutional AI critique default prompt. The
negation "If there is *no* material critique of ..." is used twice, but
should be used only in the first instance.

Cheers,
Pierre
2023-06-18 17:38:56 -07:00
Hao Chen
38057f0d2e Fix latest clickhouse vector schema change (#6385)

Fixes https://github.com/hwchase17/langchain/issues/6208


#### Who can review?

Tag maintainers/contributors who might be interested:

 
 VectorStores / Retrievers / Memory
  - @dev2049
2023-06-18 17:34:53 -07:00
Davit Buniatyan
1ab9dc8293 [hotfix] Deep Lake fails on newer version due to hardcode (#6383)
Hot fixes for Deep Lake [would highly appreciate expedited review]

* the deeplake version was hardcoded, so after deeplake upgraded, the
integration failed with a confusing error
* an additional integration test was fixed due to the embedding function
* additionally fixed the code understanding docs links after the docs
upgrade
* removed the public parameter from the notebook to make sure the code
understanding notebook works

#### Who can review?
  @hwchase17  @dev2049

---------

Co-authored-by: Davit Buniatyan <d@activeloop.ai>
2023-06-18 17:33:49 -07:00
hp0404
6aa7b04f79 Fix integration tests for Faiss vector store (#6281)
Fixes #5807 (issue)

#### Who can review?

Tag maintainers/contributors who might be interested: @dev2049

2023-06-18 17:25:49 -07:00
Chakib Benziane
ddd518a161 searx_search: updated tools and doc (#6276)
- Allows using the  same wrapper to create multiple tools
```python
wrapper = SearxSearchWrapper(searx_host="**")
github_tool = SearxSearchResults(name="Github",
                            wrapper=wrapper,
                            kwargs = {
                                "engines": ["github"],
                                })

arxiv_tool = SearxSearchResults(name="Arxiv",
                            wrapper=wrapper,
                            kwargs = {
                                "engines": ["arxiv"]
                                })
```

- Updated link to searx documentation

  Agents / Tools / Toolkits
  - @hwchase17
2023-06-18 17:23:12 -07:00
ju-bezdek
e2f36ee608 OpenAI functions don't work with async streaming... #6225 (#6226)
Related to this https://github.com/hwchase17/langchain/issues/6225

Just copied the implementation from the `generate` function to `agenerate`
and tested it.

Didn't run any official tests, though.


Fixes #6225


#### Who can review?

Tag maintainers/contributors who might be interested:
  @hwchase17, @agola11

2023-06-18 17:05:16 -07:00
Jan Pawellek
ea6a5b03e0 Fix output final text for HuggingFaceTextGenInference when streaming (#6211)
The LLM integration
[HuggingFaceTextGenInference](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_text_gen_inference.py)
already has streaming support.

However, when streaming is enabled, it always returns an empty string as
the final output text when the LLM is finished. This is because `text`
is instantiated with an empty string and never updated.

This PR fixes the collection of the final output text by concatenating
new tokens.
2023-06-18 17:01:15 -07:00
Tomaz Bratanic
b3bccabc66 Add option to save/load graph cypher QA (#6219)
Similar as https://github.com/hwchase17/langchain/pull/5818

Added the functionality to save/load Graph Cypher QA Chain due to a user
reporting the following error

> raise NotImplementedError("Saving not supported for this chain type.")
> NotImplementedError: Saving not supported for this chain type.
2023-06-18 17:00:27 -07:00
Harrison Chase
495128ba95 Harrison/functions docs improvements (#6389)
Co-authored-by: Sumanth Donthula <46747610+sumanthdonthula@users.noreply.github.com>
2023-06-18 16:57:33 -07:00
Leonid Ganeline
c7ca350cd3 Fix class promotion (#6187)
In LangChain, all module classes are enumerated in the `__init__.py`
file of the corresponding module. But some classes were missed and were
not included in the module `__init__.py`.

This PR:
- added the missed classes to the module `__init__.py` files
- sorted the `__init__.py:__all__` variable value (a list of the class
names)
- `langchain.tools.sql_database.tool.QueryCheckerTool` was renamed into
the `QuerySQLCheckerTool` because it conflicted with
`langchain.tools.spark_sql.tool.QueryCheckerTool`
- changes to `pyproject.toml`:
  - added `pgvector` to `pyproject.toml:extended_testing`
- added `pandas` to
`pyproject.toml:[tool.poetry.group.test.dependencies]`
- commented out `streamlit` in `callbacks/__init__.py`, because
`streamlit` now requires Python >=3.7, !=3.9.7
- fixed duplicate names in `tools`
- fixed the corresponding unit tests

#### Who can review?
@hwchase17
@dev2049
2023-06-18 16:55:18 -07:00
Harrison Chase
c0c2fd0782 Harrison/zep mem (#6388)
Co-authored-by: Daniel Chalef <131175+danielchalef@users.noreply.github.com>
2023-06-18 16:53:35 -07:00
Harrison Chase
b7159c15cc Harrison/metaphor search fix (#6387)
Co-authored-by: jeffzwang <jeffreyzhiyuanwang@gmail.com>
2023-06-18 16:53:24 -07:00
Harrison Chase
9bf5b0defa Harrison/myscale self query (#6376)
Co-authored-by: Fangrui Liu <fangruil@moqi.ai>
Co-authored-by: 刘 方瑞 <fangrui.liu@outlook.com>
Co-authored-by: Fangrui.Liu <fangrui.liu@ubc.ca>
2023-06-18 16:53:10 -07:00
Harrison Chase
bd8d418a95 Merge branch 'master' of github.com:hwchase17/langchain 2023-06-18 16:45:49 -07:00
Harrison Chase
3a75d59c3d searx - docs 2023-06-18 16:45:42 -07:00
MIDORIBIN
5be465bd86 Fixed PermissionError on windows (#6170)
Fixed a PermissionError that occurred when downloading PDF files via http
in BasePDFLoader on Windows.

When downloading PDF files via http in BasePDFLoader, NamedTemporaryFile
is used. This function cannot reopen the file on **Windows** ([Python
docs](https://docs.python.org/3.9/library/tempfile.html#tempfile.NamedTemporaryFile)).

So, we created a **temporary directory** with TemporaryDirectory and
placed the downloaded file there.
The temporary directory is deleted in the destructor.

Fixes #2698

#### Who can review?

Tag maintainers/contributors who might be interested:

  - @eyurtsev
  - @hwchase17
2023-06-18 16:39:57 -07:00
xleven
4fc7939848 fix link of callbacks on modules page (#6323)
The
[Callbacks](https://python.langchain.com/docs/modules/callbacks/getting_started/)
link on the [Modules](https://python.langchain.com/docs/modules/) page went
to a "Page Not Found".
2023-06-18 15:08:12 -07:00
Vijay
2b3b4e0f60 Add the ability to run the map_reduce chains process results step as async (#6181)
This will add the ability to add an AsyncCallbackManager (handler) for
the reducer chain, which would be able to stream the tokens via the
`async def on_llm_new_token` callback method



Fixes issue [5532](https://github.com/hwchase17/langchain/issues/5532)


 @hwchase17  @agola11 
The following code snippet explains how this change would be used to
enable `reduce_llm` with streaming support in a `map_reduce` chain.

I have tested this change and it works for the streaming use-case of
reducer responses. I am happy to share more information if this
solution makes sense.

```

AsyncHandler
..........................
class StreamingLLMCallbackHandler(AsyncCallbackHandler):
    """Callback handler for streaming LLM responses."""

    def __init__(self, websocket):
        self.websocket = websocket
    
    # This callback method is to be executed in async
    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        resp = ChatResponse(sender="bot", message=token, type="stream")
        await self.websocket.send_json(resp.dict())


Chain
..........
stream_handler = StreamingLLMCallbackHandler(websocket)
stream_manager = AsyncCallbackManager([stream_handler])

streaming_llm = ChatOpenAI(
        streaming=True,
        callback_manager=stream_manager,
        verbose=False,
        temperature=0,
    )
    main_llm = OpenAI(
        temperature=0,
        verbose=False,
    )

    doc_chain = load_qa_chain(
        llm=main_llm,
        reduce_llm=streaming_llm,
        chain_type="map_reduce", 
        callback_manager=manager
    )
    qa_chain = ConversationalRetrievalChain(
        retriever=vectorstore.as_retriever(),
        combine_docs_chain=doc_chain,
        question_generator=question_generator,
        callback_manager=manager,
    )
    
    # Here `acall` will trigger `acombine_docs` on `map_reduce` which should then call `_aprocess_result` which in turn will call `self.combine_document_chain.arun` hence async callback will be awaited
    result = await qa_chain.acall(
         {"question": question, "chat_history": chat_history}
      )
```
2023-06-18 13:19:56 -07:00
Alvaro Bartolome
e0dea577ee Extend ArgillaCallbackHandler support (#6153)
Hi again @agola11! 🤗

## What's in this PR?

After playing around with different chains we noticed that some chains
were using different `output_key`s and we were just handling some, so
we've extended the support to any output, either if it's a Python list
or a string.

Kudos to @dvsrepo for spotting this!

---------

Co-authored-by: Daniel Vila Suero <daniel@argilla.io>
2023-06-18 11:18:33 -07:00
Harrison Chase
a8cb9ee013 Harrison/gdrive enhancements (#6375)
Co-authored-by: Matt Robinson <mrobinson@unstructuredai.io>
2023-06-18 11:07:23 -07:00
rafael
ebfffaa38f Guardrails output parser: Pass LLM api for reasking (#6089)

Fixes https://github.com/ShreyaR/guardrails/issues/155 

Enables guardrails reasking by specifying an LLM api in the output
parser.
2023-06-18 10:50:20 -07:00
Davis Chase
ec850e607f bump 203 (#6372) 2023-06-18 09:20:47 -07:00
Lance Martin
370becdfc2 Add self query retriever example with MD header splitting (#6359)
Flesh out the notebook example for `MarkdownHeaderTextSplitter`
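
A short usage sketch of the splitter the notebook demonstrates (headers and text below are illustrative):

```python
from langchain.text_splitter import MarkdownHeaderTextSplitter

markdown = "# Title\n\nIntro text\n\n## Section A\n\nDetails about A"
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "Header 1"), ("##", "Header 2")]
)
for chunk in splitter.split_text(markdown):
    # Each chunk keeps the header values it falls under as metadata.
    print(chunk)
```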
2023-06-17 21:40:20 -07:00
Lance Martin
2c97fbabbd Update MD header text splitter notebook (#6339)
Highlight use case for maintaining header groups when splitting.
2023-06-17 13:19:27 -07:00
Harrison Chase
a2bbe3dda4 Harrison/mmr support for opensearch (#6349)
Co-authored-by: Mehmet Öner Yalçın <oneryalcin@gmail.com>
2023-06-17 12:22:37 -07:00
Davis Chase
2eea5d4cb4 Add ignore vercel preview script (#6320)
Skip building the docs preview for any branch that doesn't start with
`__docs__`. Will eventually update this to look at code diff directories,
but patching for now.
2023-06-17 11:17:08 -07:00
Harrison Chase
7a48d9ee82 Merge branch 'master' of github.com:hwchase17/langchain 2023-06-17 11:16:19 -07:00
Kenny
e30fdffd1e Add new openai 0613 model costs (#6110)
Added costs for gpt-4-32k-0613, gpt-4-0613, gpt-3.5-turbo-16k,
gpt-3.5-turbo-0613, and gpt-3.5-turbo-16k-0613 to openai_info callback
based on this [OpenAI
post](https://openai.com/blog/function-calling-and-other-api-updates)

@agola11
2023-06-17 11:11:47 -07:00
Dhruvil Shah
2eec687474 update web_base.py to have verify option (#6107)
We propose an enhancement to the web-based loader initialize method by
introducing a "verify" option. This enhancement addresses the issue of
SSL verification errors encountered on certain web pages. By providing
users with the option to set the verify parameter to False, we offer
greater flexibility and control.

### Fixes #6079 

#### Who can review?
@eyurtsev @hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-17 11:10:48 -07:00
Harrison Chase
680d6bbbf8 fix titles in documentation 2023-06-17 11:09:11 -07:00
Nuno Campos
e194dc5306 Make lckwargs private (#6344)
2023-06-17 19:08:25 +01:00
Harrison Chase
8cfb52ddbb fix spelling 2023-06-17 11:06:54 -07:00
zengbo
5d5298087f Custom Anthropic API URL (#6221)
[Feature] Users can customize the Anthropic API URL

#### Who can review?

Tag maintainers/contributors who might be interested:

  Models
  - @hwchase17
  - @agola11
2023-06-17 11:01:29 -07:00
Harrison Chase
61e4a1adf9 Harrison/faiss score (#6341)
Co-authored-by: Frank Stein <16441059+simonfromla@users.noreply.github.com>
Co-authored-by: Sims Juju <sims@Ju.lan>
2023-06-17 11:00:47 -07:00
Harrison Chase
42a28ac1ba Harrison/error zero tools (#6340)
Co-authored-by: Juhee Kim <46583939+juppytt@users.noreply.github.com>
2023-06-17 11:00:35 -07:00
Slawomir Gonet
eef62bf4e9 qdrant: search by vector (#6043)

Added `search_by_vector` support to the Qdrant vector store.
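
A hedged sketch of the new capability (store and embedding setup omitted; the method name follows the standard VectorStore API):

```python
from typing import List
from langchain.vectorstores import Qdrant

def qdrant_search_by_embedding(store: Qdrant, embedding: List[float], k: int = 4):
    # Query with a raw embedding vector instead of a text query.
    return store.similarity_search_by_vector(embedding, k=k)
```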



### Who can review
VectorStores / Retrievers / Memory
- @dev2049
2023-06-17 09:44:28 -07:00
Mark
b7ba7e8a7b Allow GoogleDrive to authenticate via application default credentials on Cloud Run/GCE etc without service key (#6035)
@eyurtsev

The existing GoogleDrive implementation always needs a service account
to be available at the credentials location. When running on GCP
services such as Cloud Run, a service account already exists in the
metadata of the service, so no physical key is necessary. This change
adds a check to see if it is running in such an environment, and uses
that authentication instead.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-17 09:44:17 -07:00
lonestriker
6f36f0f930 Add oobabooga/text-generation-webui support as a llm (#5997)
Add oobabooga/text-generation-webui support as an LLM. Currently,
supports using text-generation-webui's non-streaming API interface.
Allows users who already have text-gen running to use the same models
with langchain.

Simple usage, similar to existing supported LLMs:

```
from langchain.llms import TextGen
llm = TextGen(model_url = "http://localhost:5000")
```
#### Who can review?

 @hwchase17 - project lead

---------

Co-authored-by: Hien Ngo <Hien.Ngo@adia.ae>
2023-06-17 09:42:15 -07:00
Richy Wang
444ca3f669 Improve AnalyticDB Vector Store implementation without affecting user (#6086)
Hi there:

As I implemented the AnalyticDB VectorStore, it used two tables to store
the documents. It seems that using just one table is a better way, so this
commit tries to improve the AnalyticDB VectorStore implementation without
affecting user behavior:

**1. Streamline the `post_init` behavior by creating a single table with
vector indexing.
2. Update the `add_texts` API for document insertion.
3. Optimize `similarity_search_with_score_by_vector` to retrieve results
directly from the table.
4. Implement `_similarity_search_with_relevance_scores`.
5. Add an `embedding_dimension` parameter to support embedding functions
with different dimensions.**

Users can continue using the API as before.
The test cases added previously are enough to cover this commit.
2023-06-17 09:36:31 -07:00
Ja-sonYun
cdd1d78bf2 make modelname_to_contextsize as a staticmethod (#6040)

Fixes #6039


#### Who can review?

Tag maintainers/contributors who might be interested:
@hwchase17 @agola11
2023-06-17 09:13:08 -07:00
Saba Sturua
427551eabf DocArray as a Retriever (#6031)
## DocArray as a Retriever

[DocArray](https://github.com/docarray/docarray) is an open-source tool
for managing your multi-modal data. It offers flexibility to store and
search through your data using various document index backends. This PR
introduces `DocArrayRetriever` - which works with any available backend
and serves as a retriever for Langchain apps.

Also, I added 2 notebooks:
- DocArray Backends - intro to all 5 currently supported backends: how to
initialize, index, and use them as a retriever
- DocArray Usage - showcasing what additional search parameters you can
pass to create versatile retrievers

Example:
```python
from docarray.index import InMemoryExactNNIndex
from docarray import BaseDoc, DocList
from docarray.typing import NdArray
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers import DocArrayRetriever


# define document schema
class MyDoc(BaseDoc):
    description: str
    description_embedding: NdArray[1536]


embeddings = OpenAIEmbeddings()
# create documents
descriptions = ["description 1", "description 2"]
desc_embeddings = embeddings.embed_documents(texts=descriptions)
docs = DocList[MyDoc](
    [
        MyDoc(description=desc, description_embedding=embedding)
        for desc, embedding in zip(descriptions, desc_embeddings)
    ]
)

# initialize document index with data
db = InMemoryExactNNIndex[MyDoc](docs)

# create a retriever
retriever = DocArrayRetriever(
    index=db,
    embeddings=embeddings,
    search_field="description_embedding",
    content_field="description",
)

# find the relevant document
doc = retriever.get_relevant_documents("action movies")
print(doc)
```

#### Who can review?

@dev2049

---------

Signed-off-by: jupyterjazz <saba.sturua@jina.ai>
2023-06-17 09:09:33 -07:00
Masafumi Mori
7bb437146d fix links to prompt templates and example selectors (#6332)

Fixes: links to prompt templates and example selectors on the
[Prompts](https://python.langchain.com/docs/modules/model_io/prompts/)
page are invalid.

#### Before submitting
Just a small note: I tried to run `make docs_clean` and other related
commands before writing this PR, as described
[here](https://github.com/hwchase17/langchain/blob/master/.github/CONTRIBUTING.md#build-documentation-locally),
but it gives me an error:
```bash
langchain % make docs_clean
Traceback (most recent call last):
  File "/Users/masafumi/Downloads/langchain/.venv/bin/make", line 5, in <module>
    from scripts.proto import main
ModuleNotFoundError: No module named 'scripts'
make: *** [docs_clean] Error 1
# Poetry (version 1.5.1)
# Python 3.9.13
```
I couldn't figure out how to fix this, so I didn't run those commands,
but the links should work.

#### Who can review?

Tag maintainers/contributors who might be interested:
@hwchase17

Similar issue #6323

Co-authored-by: masafumimori <m.masafumimori@outlook.com>
2023-06-17 09:07:14 -07:00
Francisco Ingham
83eea230f3 changed height in the nb example (#6327)
changed height in the example to a more reasonable number (from 9 feet
to 6 feet)
2023-06-17 00:05:48 -07:00
James O'Dwyer
0475d015fe Handle Managed Motorhead Data Key (#6169)
# Handle Managed Motorhead Data Key
Managed Motorhead will return a payload with a `data` key. We need to
handle this to properly access messages from the server.
2023-06-16 20:36:18 -07:00
Luke Stanley
364f8e7b5d Better Entity Memory code documentation (#6318)
Just adds some comments and docstring improvements.

There was some behaviour that was quite unclear to me at first like:
- "when do things get updated?"
- "why are there only entity names and no summaries?"
- "why do the entity names disappear?" 

Now it can be much more obvious to many.

I am lukestanley on Twitter.
2023-06-16 18:08:44 -07:00
Harrison Chase
af18413d97 Harrison/deeplake new features (#6263)
Co-authored-by: adilkhan <adilkhan.sarsen@nu.edu.kz>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-16 17:53:55 -07:00
Davis Chase
6640293087 fix eval guide links (#6319) 2023-06-16 17:53:46 -07:00
ljeagle
ad324a39ae Improve the performance of add_texts interface and upgrade the AwaDB from 0.3.2 to 0.3.3 (#6316)
1. Changed the implementation of the add_texts interface for the AwaDB
vector store to improve performance
2. Upgraded AwaDB from 0.3.2 to 0.3.3

---------

Co-authored-by: vincent <awadb.vincent@gmail.com>
2023-06-16 16:50:01 -07:00
Davis Chase
24b2af5218 nit (#6305) 2023-06-16 16:21:27 -07:00
Pierre Alexandre SCHEMBRI
9ca11c06b7 Fixes #6282 (#6283)
Fixes #6282 

One-liner to fix default HTTP headers not being passed by `LLMRequestsChain`
2023-06-16 16:21:01 -07:00
Davis Chase
23cdebddc4 Del linkcheck readme (#6317) 2023-06-16 16:18:45 -07:00
Brigit Murtaugh
ccd916babe Update dev container (#6189)
Fixes https://github.com/hwchase17/langchain/issues/6172

As described in https://github.com/hwchase17/langchain/issues/6172, I'd
love to help update the dev container in this project.

**Summary of changes:**
- Dev container now builds (the current container in this repo won't
build for me)
- Dockerfile updates
- Update image to our [currently-maintained Python
image](https://github.com/devcontainers/images/tree/main/src/python/.devcontainer)
(`mcr.microsoft.com/devcontainers/python`) rather than the deprecated
image from vscode-dev-containers
- Move Dockerfile to root of repo - in order for `COPY` to work
properly, it needs the files (in this case, `pyproject.toml` and
`poetry.toml`) in the same directory
- devcontainer.json updates
- Removed `customizations` and `remoteUser` since they should be covered
by the updated image in the Dockerfile
     - Update comments
- Update docker-compose.yaml to properly point to updated Dockerfile
- Add a .gitattributes to avoid line ending conversions, which can
result in hundreds of pending changes
([info](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files))
- Add a README in the .devcontainer folder and info on the dev container
in the contributing.md

**Outstanding questions:**
- Is it expected for `poetry install` to take some time? It takes about
30 minutes for this dev container to finish building in a Codespace, but
a user should only have to experience this once. Through some online
investigation, this doesn't seem unusual
- Versions of poetry newer than 1.3.2 failed every time - based on some
of the guidance in contributing.md and other online resources, it seemed
changing poetry versions might be a good solution. 1.3.2 is from Jan
2023

---------

Co-authored-by: bamurtaugh <brmurtau@microsoft.com>
Co-authored-by: Samruddhi Khandale <samruddhikhandale@github.com>
2023-06-16 15:42:14 -07:00
Davis Chase
03b5891cf7 more redirect (#6314) 2023-06-16 14:43:59 -07:00
Davis Chase
eaee492dbc basic redirect (#6309) 2023-06-16 13:39:58 -07:00
Davis Chase
d2243757a3 update readme (#6304) 2023-06-16 12:27:16 -07:00
Davis Chase
2f47e5c766 update api link (#6303) 2023-06-16 12:18:17 -07:00
Davis Chase
d558bcfad8 rm ignore_vercel (#6302) 2023-06-16 12:06:58 -07:00
Davis Chase
87e502c6bc Doc refactor (#6300)
Co-authored-by: jacoblee93 <jacoblee93@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-16 11:52:56 -07:00
Harrison Chase
94c82a189d bump to 202 (#6262) 2023-06-16 06:52:36 -07:00
hp0404
b01cf0dd54 ArxivAPIWrapper - doc_content_chars_max (#6063)
This PR refactors the ArxivAPIWrapper class, making the
`doc_content_chars_max` parameter optional. Additionally, tests have
been added to ensure the functionality of the `doc_content_chars_max`
parameter.

Fixes #6027 (issue)
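A minimal usage sketch, assuming the `langchain.utilities` import path and that `None` disables truncation (both per my reading of the wrapper, not guaranteed by this PR):

```python
from langchain.utilities import ArxivAPIWrapper

# With doc_content_chars_max=None the wrapper should return untruncated
# content; a positive integer caps the characters per document.
arxiv = ArxivAPIWrapper(doc_content_chars_max=None)
print(arxiv.run("chain of thought prompting"))
```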
2023-06-15 22:16:42 -07:00
Daniel King
a9b97aa6f4 Update output format of MosaicML endpoint to be more flexible (#6060)
There will likely be another change or two coming over the next couple of
weeks as we stabilize the API, but putting this one in now, which just
makes the integration a bit more flexible with the response output
format.

```
(langchain) danielking@MML-1B940F4333E2 langchain % pytest tests/integration_tests/llms/test_mosaicml.py tests/integration_tests/embeddings/test_mosaicml.py 
=================================================================================== test session starts ===================================================================================
platform darwin -- Python 3.10.11, pytest-7.3.1, pluggy-1.0.0
rootdir: /Users/danielking/github/langchain
configfile: pyproject.toml
plugins: asyncio-0.20.3, mock-3.10.0, dotenv-0.5.2, cov-4.0.0, anyio-3.6.2
asyncio: mode=strict
collected 12 items                                                                                                                                                                        

tests/integration_tests/llms/test_mosaicml.py ......                                                                                                                                [ 50%]
tests/integration_tests/embeddings/test_mosaicml.py ......                                                                                                                          [100%]

=================================================================================== slowest 5 durations ===================================================================================
4.76s call     tests/integration_tests/llms/test_mosaicml.py::test_retry_logic
4.74s call     tests/integration_tests/llms/test_mosaicml.py::test_mosaicml_llm_call
4.13s call     tests/integration_tests/llms/test_mosaicml.py::test_instruct_prompt
0.91s call     tests/integration_tests/llms/test_mosaicml.py::test_short_retry_does_not_loop
0.66s call     tests/integration_tests/llms/test_mosaicml.py::test_mosaicml_extra_kwargs
=================================================================================== 12 passed in 19.70s ===================================================================================
```

#### Who can review?

  @hwchase17
  @dev2049
2023-06-15 22:15:39 -07:00
JaysonAlbert
50d9c7d5a4 Fix: change the chatgpt plugin retriever metadata format (#5920)
The current implementation puts the doc itself as the metadata, but the
document returned by the ChatGPT plugin retriever already has a `metadata`
field; it's better to use that instead.

The original code will throw the following exception when using
`RetrievalQAWithSourcesChain`, because it cannot find the field
`metadata`:

```python
Exception has occurred: ValueError       (note: full exception trace is shown but execution is paused at: _run_module_as_main)
Document prompt requires documents to have metadata variables: ['source']. Received document with missing metadata: ['source'].
  File "/home/wangjie/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 27, in format_document
    raise ValueError(
  File "/home/wangjie/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 65, in <listcomp>
    doc_strings = [format_document(doc, self.document_prompt) for doc in docs]
  File "/home/wangjie/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 65, in _get_inputs
    doc_strings = [format_document(doc, self.document_prompt) for doc in docs]
  File "/home/wangjie/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py", line 85, in combine_docs
    inputs = self._get_inputs(docs, **kwargs)
  File "/home/wangjie/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py", line 84, in _call
    output, extra_return_dict = self.combine_docs(
  File "/home/wangjie/anaconda3/envs/chatglm/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
    raise e
```

Additionally, the `metadata` field in the ChatGPT plugin retriever
has these fields by default:
```json
{
    "source":  "file",   //email, file or chat
    "source_id": "filename.docx", // the filename
    "url": "", 
    ...
}
```
so, we should set `source_id` to `source` in the langchain metadata.

```python
metadata = d.pop("metadata", d)
if metadata.get("source_id"):
    metadata["source"] = metadata.pop("source_id")
```

#### Who can review?
@dev2049


---------

Co-authored-by: wangjie <wangjie@htffund.com>
2023-06-15 22:04:45 -07:00
Harrison Chase
e67b26eee9 Harrison/openai functions (#6261)
Co-authored-by: Francisco Ingham <24279597+fpingham@users.noreply.github.com>
2023-06-15 21:54:39 -07:00
Harrison Chase
6aafb46807 Harrison/openai functions (#6223)
Co-authored-by: Francisco Ingham <24279597+fpingham@users.noreply.github.com>
2023-06-15 21:43:33 -07:00
Zander Chase
bc9b8c8239 Improve Error Message for failed callback (#6247)
Include the handler class name in the warning
2023-06-15 19:18:37 -07:00
Alon Roth
0013256e81 Support chat history persistence in AutoGPT (#5716)
**Short Description**
Added a new argument to the AutoGPT class which allows persisting the chat
history to a file.

**Changes**
1. Removed `self.full_message_history: List[BaseMessage] = []`
2. Replaced it with `chat_history_memory`, which can take any subclass
of `BaseChatMessageHistory` (a minimal sketch follows below)
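A minimal sketch of the new argument, assuming a file-backed history (the import path and file name are illustrative); the resulting `history` is what gets passed as `chat_history_memory` when constructing the AutoGPT agent:

```python
from langchain.memory import FileChatMessageHistory

# Any BaseChatMessageHistory subclass works; this one persists to disk,
# so the AutoGPT chat history survives across runs.
history = FileChatMessageHistory("autogpt_chat_history.json")
history.add_user_message("Find three recent papers on retrieval augmentation.")
print(history.messages)
```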

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-15 17:49:03 -07:00
Martin Antos
1913320cbe Feature/add acreom loader (#5780)
Adding a new loader for [acreom](https://acreom.com) vaults. It's based on
the Obsidian loader, with some additional text processing for acreom-specific
markdown elements.

 @eyurtsev please take a look!

---------

Co-authored-by: rlm <pexpresss31@gmail.com>
2023-06-15 11:53:00 -07:00
Zander Chase
ae76e473e1 Add Tags for LLMs (#6229)
- [x] Add tracing tags to LLMs + Chat Models (both inheritable and
local)
- [x] Add tags for the run_on_dataset helper function(s)
2023-06-15 11:24:11 -07:00
Harrison Chase
8e1a7a8646 bump version to 201 (#6233) 2023-06-15 08:28:47 -07:00
Harrison Chase
e82687ddf4 Harrison/use functions agent (#6185)
Co-authored-by: Francisco Ingham <24279597+fpingham@users.noreply.github.com>
2023-06-15 08:18:50 -07:00
Ryo Kanazawa
7d2b946d0b Fix typo pandocs to pandoc (#6203)
Fixes https://github.com/hwchase17/langchain/issues/6204

### Context

A typo issue with `pandoc`.

#### Who can review?
@hwchase17
2023-06-15 08:18:27 -07:00
Kyle Roth
c7db9febb0 count tokens for new OpenAI model versions (#6195)
Trying to call `ChatOpenAI.get_num_tokens_from_messages` returns the
following error for the newly announced models `gpt-3.5-turbo-0613` and
`gpt-4-0613`:

```
NotImplementedError: get_num_tokens_from_messages() is not presently implemented for model gpt-3.5-turbo-0613.See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.
```

This adds support for counting tokens for those models, by counting
tokens the same way they're counted for the previous versions of
`gpt-3.5-turbo` and `gpt-4`.
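A quick sketch of the call that previously raised (assumes `OPENAI_API_KEY` is set; the count itself is computed locally with tiktoken):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# The -0613 snapshots are now counted with the same scheme as the
# base gpt-3.5-turbo / gpt-4 models.
chat = ChatOpenAI(model_name="gpt-3.5-turbo-0613")
print(chat.get_num_tokens_from_messages([HumanMessage(content="Hello there!")]))
```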

#### reviewers

  - @hwchase17
  - @agola11
2023-06-15 06:16:03 -07:00
xu0o0
7ad13cdbdb feat: add content_format param to ConfluenceLoader.load() (#5922)
The Confluence API supports different formats of page content. The storage
format is the raw XML representation used for storage. The view format is the
HTML representation for viewing, with macros rendered as they would appear
to users.

Add the `content_format` parameter to `ConfluenceLoader.load()` to
specify the content format; it is set to `ContentFormat.STORAGE` by default.
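A minimal usage sketch (URL, credentials, space key, and the `ContentFormat` import path are placeholders/assumptions):

```python
from langchain.document_loaders import ConfluenceLoader
from langchain.document_loaders.confluence import ContentFormat

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",
    username="me@example.com",
    api_key="<api-key>",
)
# ContentFormat.VIEW returns rendered HTML; the default stays ContentFormat.STORAGE.
docs = loader.load(space_key="DOCS", content_format=ContentFormat.VIEW, limit=10)
```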

#### Who can review?

Tag maintainers/contributors who might be interested: @eyurtsev

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-14 16:56:28 -07:00
0xJordan
c5a46e7435 feat: Add support for the Solidity language (#6054)

## Add Solidity programming language support for code splitter.

Twitter: @0xjord4n_
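A minimal sketch of splitting Solidity source, assuming the new enum member is `Language.SOL`:

```python
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

solidity_code = """
pragma solidity ^0.8.0;
contract HelloWorld {
    function greet() public pure returns (string memory) {
        return "Hello";
    }
}
"""
splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.SOL, chunk_size=128, chunk_overlap=0
)
print(splitter.create_documents([solidity_code]))
```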

#### Who can review?

Tag maintainers/contributors who might be interested:
@hwchase17

2023-06-14 14:25:02 -07:00
Nuno Campos
17c4ec4812 Add docs for tags (#6155)
2023-06-14 14:01:58 -07:00
thiswillbeyourgithub
4a649e3b14 typo: 'following following' to 'following' (#6163)
Co-authored-by: thiswillbeyourgithub <github@32mail.33mail.com>
2023-06-14 10:58:47 -07:00
Maciej Bryński
8a44c879c6 Update readthedocs_documentation.ipynb (#6148)
Minor fix in documentation:
change the URL in the wget call to the proper one.
2023-06-14 07:21:48 -07:00
Zander Chase
e0e3ef1c57 Update Name (#6136) 2023-06-13 22:25:36 -07:00
Zander Chase
4555ad5d1f Add Run Collector Callback (#6133)
Add a callback handler that can collect nested run objects. Useful for
evaluation.
2023-06-13 22:17:37 -07:00
Harrison Chase
6ac120f299 bump ver to 200 (#6130) 2023-06-13 19:33:51 -07:00
Harrison Chase
e41f0b341c add functions agent (#6113) 2023-06-13 18:51:01 -07:00
Zander Chase
b3b155d488 Return session name in runner response (#6112)
Makes it easier to then run evals w/o thinking about specifying a
session
2023-06-13 16:59:43 -07:00
Harrison Chase
e74733ab9e support streaming for functions (#6115) 2023-06-13 15:26:26 -07:00
Nuno Campos
11ab0be11a Add support for tags (#5898)
2023-06-13 12:30:59 -07:00
Harrison Chase
1281fdf0f2 Harrison/notebook functions (#6103) 2023-06-13 10:52:54 -07:00
Harrison Chase
34ebb29726 bump version to 199 (#6102) 2023-06-13 10:50:33 -07:00
Wenchen Li
f9edf76e7c Implement max_marginal_relevance_search in VectorStore of Pinecone (#6056)
This adds an implementation of MMR search in Pinecone (a usage sketch follows the list). I also have two
semi-related observations about this vector store class:
- Maybe we should also have a
`similarity_search_by_vector_returning_embeddings` like in Supabase, but
it's not in the base `VectorStore` class, so I didn't implement it
- Talking about the base class, there's
`similarity_search_with_relevance_scores`, but in Pinecone it is called
`similarity_search_with_score`; maybe we should consider renaming it to
align with the other `VectorStore` base and subclasses (or add that as an
alias for backward compatibility)
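A usage sketch of the new MMR search (API key, environment, and index name are placeholders; the index must already exist):

```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="YOUR_PINECONE_API_KEY", environment="us-west1-gcp")
docsearch = Pinecone.from_existing_index("my-index", OpenAIEmbeddings())
# fetch_k candidates are retrieved first, then k diverse results are picked via MMR.
docs = docsearch.max_marginal_relevance_search("alignment research", k=4, fetch_k=20)
```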

#### Who can review?

Tag maintainers/contributors who might be interested:
 - VectorStores / Retrievers / Memory - @dev2049
2023-06-13 10:46:45 -07:00
Harrison Chase
970b2f9d38 convert tools to openai (#6100) 2023-06-13 10:40:49 -07:00
Harrison Chase
292accde2b support functions (#6099) 2023-06-13 10:32:58 -07:00
Lance Martin
ee3d0513ad Add tests and update notebook for MarkdownHeaderTextSplitter (#6069)
Add test and update notebook for `MarkdownHeaderTextSplitter`.
2023-06-13 09:07:52 -07:00
Keshav Kumar
8fdf88b8e3 Fix for ModuleNotFoundError while running langchain-server. Issue #5833 (#6077)
This PR fixes the error
`ModuleNotFoundError: No module named 'langchain.cli'`
Fixes https://github.com/hwchase17/langchain/issues/5833 (issue)
2023-06-13 08:37:07 -07:00
Zander Chase
0c52275bdb Use Run object from SDK (#6067)
Update the Run object in the tracer to extend that in the SDK to include
the parameters necessary for tracking/tracing
2023-06-13 07:14:11 -07:00
Harrison Chase
cde1e8739a turn off repr (#6078) 2023-06-12 22:45:24 -07:00
Nuno Campos
a9b3b2e327 Enable serialization for anthropic (#6049) 2023-06-12 22:39:10 -07:00
Harrison Chase
6ac5d80286 propogate kwargs fully (#6076) 2023-06-12 22:37:55 -07:00
Harrison Chase
ec1a2adf9c improve tools (#6062) 2023-06-12 22:19:03 -07:00
Julius Lipp
5b6bbf4ab2 Add embaas document extraction api endpoints (#6048)
# Introduces embaas document extraction api endpoints

In this PR, we add support for embaas document extraction endpoints to
Text Embedding Models (with LLMs coming in different PRs). We currently
offer the MTEB leaderboard top performers, will continue to add top
embedding models, and will soon add support for customers to deploy their own
models. Additional documentation and information can be found
[here](https://embaas.io).

While developing this integration, I closely followed the patterns
established by other langchain integrations. Nonetheless, if there are
any aspects that require adjustments or if there's a better way to
present a new integration, let me know! :)

Additionally, I fixed some docs in the embeddings integration.

Related PR: #5976 

#### Who can review?
  DataLoaders
  - @eyurtsev
2023-06-12 19:13:52 -07:00
Zander Chase
2f0088039d Log tracer errors (#6066)
Example (it would log several times if not for the helper fn; previously it
would emit no logs due to multithreading)

![image](https://github.com/hwchase17/langchain/assets/130414180/070d25ae-1f06-4487-9617-0a6f66f3f01e)
2023-06-12 17:13:49 -07:00
Lance Martin
b023f0c0f2 Text splitter for Markdown files by header (#5860)
This creates a new kind of text splitter for markdown files.

The user can supply a set of headers that they want to split the file
on.

We define a new text splitter class, `MarkdownHeaderTextSplitter`, that
does a few things:

(1) For each line, it determines the associated set of user-specified
headers
(2) It groups lines with common headers into splits

See notebook for example usage and test cases.
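A minimal sketch of the intended usage (the exact shape of the returned chunks is per the notebook; the sample text and header names here are illustrative):

```python
from langchain.text_splitter import MarkdownHeaderTextSplitter

markdown_text = "# Intro\n\nSome intro text.\n\n## Usage\n\nHow to use it."
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]

splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
# Lines under the same headers end up in the same chunk, with the headers
# attached as metadata.
for chunk in splitter.split_text(markdown_text):
    print(chunk)
```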
2023-06-12 15:46:42 -07:00
Jens Madsen
2c91f0d750 chore: spedd up integration test by using smaller model (#6044)
Adds a new parameter `relative_chunk_overlap` for the
`SentenceTransformersTokenTextSplitter` constructor. The parameter sets
the chunk overlap using a relative factor, e.g. for a model where the
token limit is 100, a `relative_chunk_overlap=0.5` implies that
`chunk_overlap=50`

Tag maintainers/contributors who might be interested:

 @hwchase17, @dev2049
2023-06-12 13:27:10 -07:00
Harrison Chase
5922742d56 comment out 2023-06-12 10:57:31 -07:00
Harrison Chase
681ba6d520 embaas title 2023-06-12 08:00:14 -07:00
Ben Flast
7a5e36f3f5 Mongo db doc fix (#6042)
I missed a few errors in my initial fix @hwchase1.  Thanks!
2023-06-12 07:29:27 -07:00
Harrison Chase
289e9aeb9d bump ver to 198 (#6026) 2023-06-11 21:32:45 -07:00
Harrison Chase
d1561b74eb Harrison/cognitive search (#6011)
Co-authored-by: Fabrizio Ruocco <ruoccofabrizio@gmail.com>
2023-06-11 21:15:42 -07:00
wenmeng zhou
bb7ac9edb5 add dashscope text embedding (#5929)
#### What this does
Adding an embedding API for
[DashScope](https://help.aliyun.com/product/610100.html), which is the
DAMO Academy's multilingual text unified vector model based on the LLM
base. It caters to multiple mainstream languages worldwide and offers
high-quality vector services, helping developers quickly transform text
data into high-quality vector data. Currently supported languages
include Chinese, English, Spanish, French, Portuguese, Indonesian, and
more.
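A minimal usage sketch (the exported class name, default model, and `DASHSCOPE_API_KEY` environment variable are assumptions to verify against the notebook):

```python
from langchain.embeddings import DashScopeEmbeddings

# Requires a DashScope API key, e.g. via the DASHSCOPE_API_KEY env var.
embeddings = DashScopeEmbeddings(model="text-embedding-v1")
vector = embeddings.embed_query("你好，世界")
print(len(vector))
```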

#### Who can review?

  Models
  - @hwchase17
  - @agola11

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-11 21:14:20 -07:00
Ben Flast
010d0bfeea Update MongoDB Atlas support docs (#6022)
Updating MongoDB Atlas support docs @hwchase17 let me know if you have
any questions
2023-06-11 20:57:15 -07:00
Harrison Chase
e05997c25e Harrison/hologres (#6012)
Co-authored-by: Changgeng Zhao <changgeng@nyu.edu>
Co-authored-by: Changgeng Zhao <zhaochanggeng.zcg@alibaba-inc.com>
2023-06-11 20:56:51 -07:00
ljeagle
c5bce4a465 add from_documents interface in awadb vector store (#6023)
Added a new `from_documents` interface in the AwaDB vector store.
  @dev2049

---------

Co-authored-by: vincent <awadb.vincent@gmail.com>
2023-06-11 19:35:03 -07:00
Zander Chase
2c9619bc1d Remove from PR template (#6018) 2023-06-11 19:34:26 -07:00
ju-bezdek
18f5c985d9 Langchain decorators (#6017)
Added a description of LangChain Decorators to the integrations section.



#### Who can review?

Tag maintainers/contributors who might be interested:

@hwchase17 

2023-06-11 19:32:24 -07:00
Zander Chase
a197acfcd3 Update check (#6020)
We were assigning the name as None in on_chat_model_start and then not
updating it, resulting in a validation error.
2023-06-11 17:59:09 -07:00
Nuno Campos
18af149e91 nc/load (#5733)
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-11 15:51:28 -07:00
Zander Chase
614cff89bc I before E (#6015) 2023-06-11 15:45:12 -07:00
Harrison Chase
a7227ee01b Harrison/embaas (#6010)
Co-authored-by: Julius Lipp <43986145+juliuslipp@users.noreply.github.com>
2023-06-11 13:35:14 -07:00
xu0o0
232faba796 fix: TypeError when loading confluence pages by cql (#5878)
The Confluence loader uses the wrong API (`Confluence.cql()` provided by
`atlassian-python-api`) to load pages by CQL.
`Confluence.cql()` is a wrapper of the `/rest/api/search` API which
searches for entities in Confluence.

To search for pages in Confluence, the loader can use the
`/rest/api/content/search` API.

#### Who can review?

Tag maintainers/contributors who might be interested: @eyurtsev
#### References
##### Cloud API

https://developer.atlassian.com/cloud/confluence/rest/v1/api-group-content/#api-wiki-rest-api-content-search-get

https://developer.atlassian.com/cloud/confluence/rest/v1/api-group-search/#api-wiki-rest-api-search-get

##### Server API

https://docs.atlassian.com/ConfluenceServer/rest/8.3.1/#api/content-search
https://docs.atlassian.com/ConfluenceServer/rest/8.3.1/#api/search
2023-06-11 13:23:22 -07:00
Akhil Vempali
d7d629911b feat: Added filtering option to FAISS vectorstore (#5966)
Inspired by the filtering capability available in ChromaDB, added the
same functionality to the FAISS vector store as well (see the sketch after the
checklist). Since FAISS does not have an inbuilt method of filtering, I used
the approach suggested in this
[thread](https://github.com/facebookresearch/faiss/issues/1079)
Langchain Issue inspiration:
https://github.com/hwchase17/langchain/issues/4572

- [x] Added filtering capability to semantic similarly and MMR
- [x] Added test cases for filtering in
`tests/integration_tests/vectorstores/test_faiss.py`
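A usage sketch of the new `filter` argument (requires `faiss` and an `OPENAI_API_KEY`; the sample documents are illustrative):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import FAISS

docs = [
    Document(page_content="Foundation by Isaac Asimov", metadata={"genre": "sci-fi"}),
    Document(page_content="Dune by Frank Herbert", metadata={"genre": "sci-fi"}),
    Document(page_content="Pride and Prejudice", metadata={"genre": "classic"}),
]
db = FAISS.from_documents(docs, OpenAIEmbeddings())
# The filter dict is matched against each candidate's metadata before ranking.
results = db.similarity_search("space empires", k=2, filter={"genre": "sci-fi"})
```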

#### Who can review?

Tag maintainers/contributors who might be interested:

  VectorStores / Retrievers / Memory
  - @dev2049
  - @hwchase17
2023-06-11 13:20:03 -07:00
Jiaping(JP) Zhang
6e90406e0f [APIChain] enhance the robustness or url (#6008)

When I used the APIChain, it sometimes failed during the intermediate step
of generating the API URL and calling the `request` function. After
some digging, I found that the URL sometimes includes a space at the
beginning, like `%20https://...api.com`, which causes the internal
`self.requests_wrapper.get` call to fail.

Adding a little string preprocessing with `.strip()` to remove the space
improves the robustness of the APIChain, making sure it can send
the request and retrieve the API result more reliably.


#### Who can review?
@vowelparrot
Tag maintainers/contributors who might be interested:

2023-06-11 13:13:57 -07:00
Ikko Eltociear Ashimine
c868a3eef3 Update databricks.md (#6006)
HuggingFace -> Hugging Face


2023-06-11 13:13:33 -07:00
Harrison Chase
20e9ce8a62 bump version to 197 (#6007) 2023-06-11 10:14:57 -07:00
Harrison Chase
704d56e241 support kwargs (#5990) 2023-06-11 10:09:22 -07:00
Mark Pors
b934677a81 Obey handler.raise_error in _ahandle_event_for_handler (#6001)
Obey `handler.raise_error` in `_ahandle_event_for_handler`

Exceptions for async callbacks were only logged as warnings, even when
`raise_error = True`.

#### Who can review?

  @hwchase17

   @agola11
2023-06-11 09:49:26 -07:00
Harrison Chase
2d038b57b2 Harrison/arxiv fix (#5993)
Co-authored-by: Juanjo do Olmo <87780148+SimplyJuanjo@users.noreply.github.com>
2023-06-11 09:48:09 -07:00
Vincent
0b740c9baa add ocr_languages param for ConfluenceLoader.load() (#5823)
@eyurtsev

When the content of the document contains attachments, and the content
of the attachments is not in English, the extracted text is garbled.

This is mainly because the lang parameter is not passed to pytesseract, and
only English is supported by default.

So I added the ocr_languages parameter to ConfluenceLoader.load() to
support multiple languages.
2023-06-10 16:51:04 -07:00
Thomas B
ac3e6e3944 Fix IndexError in RecursiveCharacterTextSplitter (#5902)
Fixes an (unreported) error that may occur in some cases in the
RecursiveCharacterTextSplitter.

An empty `new_separators` array ([]) would end up in the else path of
the condition below and used in a function where it is expected to be
non empty.

```python
if new_separators is None:
    ...
else:
    # _split_text() expects this array to be non-empty!
    other_info = self._split_text(s, new_separators)
```
resulting in an `IndexError`

```python
def _split_text(self, text: str, separators: List[str]) -> List[str]:
        """Split incoming text and return chunks."""
        final_chunks = []
        # Get appropriate separator to use
>       separator = separators[-1]
E       IndexError: list index out of range

langchain/text_splitter.py:425: IndexError
```

#### Who can review?
@hwchase17 @eyurtsev

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-10 16:48:53 -07:00
Satheesh Valluru
d2270a2261 Fix: Grammer fix in documentation (#5925)
Fix for grammatical errors in the documentation of `vectorstore`.  
@vowelparrot
2023-06-10 16:43:36 -07:00
Jens Madsen
1250cd4630 fix: use model token limit not tokenizer ditto (#5939)
This fixes a token limit bug in the
SentenceTransformersTokenTextSplitter. Previously, the token limit was taken
from the tokenizer used by the model. However, for some models the token
limit of the tokenizer (from `AutoTokenizer.from_pretrained`) does not
equal the token limit of the model itself, so this was a false assumption.
Therefore, the token limit of the text splitter is now taken from the
sentence-transformers model's token limit.

Twitter: @plasmajens

#### Before submitting

#### Who can review?

@hwchase17 and/or @dev2049

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-10 16:36:03 -07:00
Ofer Mendelevitch
f8cf09a230 Update to Vectara integration (#5950)
This PR updates the Vectara integration (@hwchase17 ):
* Adds reuse of requests.session to improve efficiency and speed.
* Utilizes Vectara's low-level API (instead of the standard API) to better
match the user's specific chunking with LangChain.
* add_texts now puts all the texts into a single Vectara document, so
indexing is much faster.
* Updated variable names from alpha to lambda_val (to be consistent
with the Vectara docs) and added n_context_sentence so it's available to use
if needed.
* Updates to documentation and tests

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-10 16:27:01 -07:00
qued
e4224a396b feat: Add UnstructuredXMLLoader for .xml files (#5955)
# Unstructured XML Loader
Adds an `UnstructuredXMLLoader` class for .xml files. Works with
unstructured>=0.6.7. A plain text representation of the text with the
XML tags will be available under the `page_content` attribute in the
doc.

### Testing
```python
from langchain.document_loaders import UnstructuredXMLLoader

loader = UnstructuredXMLLoader(
    "example_data/factbook.xml",
)
docs = loader.load()
```


## Who can review?

@hwchase17 
@eyurtsev
2023-06-10 16:24:42 -07:00
Lance Martin
21bd16bb59 Create Airtable loader (#5958)
Create document loader for Airtable
2023-06-10 15:43:18 -07:00
Harrison Chase
9218684759 Add a new vector store - AwaDB (#5971) (#5992)
Added the AwaDB vector store, which is a wrapper over AwaDB that can be
used as vector storage and has an efficient similarity search. Added
integration tests for the vector store and a Jupyter notebook with the example.

Deleted an unneeded empty file and resolved the
conflict (https://github.com/hwchase17/langchain/pull/5886).

Please check, thanks!

@dev2049
@hwchase17


---------

Co-authored-by: ljeagle <vincent_jieli@yeah.net>
Co-authored-by: vincent <awadb.vincent@gmail.com>
2023-06-10 15:42:32 -07:00
Tomaz Bratanic
d5819a7ca7 Add additional parameters to Graph Cypher Chain (#5979)
Inspired by the SQL chain, the following three
parameters are added to the Graph Cypher Chain (a usage sketch follows the list):

- top_k: Limits the number of results from the database to be used as
context
- return_direct: Returns database results without transforming them to
natural language
- return_intermediate_steps: Returns intermediate steps
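A usage sketch with the new parameters (connection details are placeholders and assume a running Neo4j instance plus an `OPENAI_API_KEY`):

```python
from langchain.chains import GraphCypherQAChain
from langchain.chat_models import ChatOpenAI
from langchain.graphs import Neo4jGraph

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0),
    graph=graph,
    top_k=5,                         # cap database results used as context
    return_direct=False,             # True would return raw database results
    return_intermediate_steps=True,  # expose the generated Cypher and context
)
result = chain("Which actors played in the movie Casino?")
```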
2023-06-10 14:39:55 -07:00
Daniel Grittner
0ca37e613c Fix handling of missing action & input for async MRKL agent (#5985)
Hi,

This is a fix for https://github.com/hwchase17/langchain/pull/5014. That
PR forgot to add the ability to self-solve the ValueError(f"Could not
parse LLM output: {llm_output}") error for `_atake_next_step`.
2023-06-10 14:38:20 -07:00
Harrison Chase
ca1afa7213 add test for structured tools (#5989) 2023-06-10 14:37:26 -07:00
constDave
5f356b9993 Fixed typo missing "use" (#5991)
Fixed a simple typo on
https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/vectorstore.html
where the word "use" was missing.

2023-06-10 14:31:58 -07:00
Kaarthik Andavar
d6f5d0c6b1 Fix: SnowflakeLoader returning empty documents (#5967)
**Fix SnowflakeLoader's Behavior of Returning Empty Documents**

**Description:**

This PR addresses the issue where the SnowflakeLoader was consistently
returning empty documents. After investigation, it was found that the
query method within the SnowflakeLoader was not properly fetching and
processing the data.

**Changes:**

1. Modified the query method in SnowflakeLoader to handle data fetch and
processing more accurately.
2. Enhanced error handling within the SnowflakeLoader to catch and log
potential issues that may arise during data loading.

**Impact:**

This fix will ensure the SnowflakeLoader reliably returns the expected
documents instead of empty ones, improving the efficiency and
reliability of data processing tasks in the LangChain project.

Before Fix:

```
[
    Document(page_content='', metadata={}),
    Document(page_content='', metadata={}),
    Document(page_content='', metadata={}),
    Document(page_content='', metadata={}),
    Document(page_content='', metadata={}),
    Document(page_content='', metadata={}),
    Document(page_content='', metadata={}),
    Document(page_content='', metadata={}),
    Document(page_content='', metadata={}),
    Document(page_content='', metadata={})
]
```

After Fix:

```
[Document(page_content='CUSTOMER_ID: 1\nFIRST_NAME: John\nLAST_NAME:
Doe\nEMAIL: john.doe@example.com\nPHONE: 555-123-4567\nADDRESS: 123 Elm
St, San Francisco, CA 94102', metadata={}),
Document(page_content='CUSTOMER_ID: 2\nFIRST_NAME: Jane\nLAST_NAME:
Doe\nEMAIL: jane.doe@example.com\nPHONE: 555-987-6543\nADDRESS: 456 Oak
St, San Francisco, CA 94103', metadata={}),
Document(page_content='CUSTOMER_ID: 3\nFIRST_NAME: Michael\nLAST_NAME:
Smith\nEMAIL: michael.smith@example.com\nPHONE: 555-234-5678\nADDRESS:
789 Pine St, San Francisco, CA 94104', metadata={}),
Document(page_content='CUSTOMER_ID: 4\nFIRST_NAME: Emily\nLAST_NAME:
Johnson\nEMAIL: emily.johnson@example.com\nPHONE: 555-345-6789\nADDRESS:
321 Maple St, San Francisco, CA 94105', metadata={}),
Document(page_content='CUSTOMER_ID: 5\nFIRST_NAME: David\nLAST_NAME:
Williams\nEMAIL: david.williams@example.com\nPHONE:
555-456-7890\nADDRESS: 654 Birch St, San Francisco, CA 94106',
metadata={}), Document(page_content='CUSTOMER_ID: 6\nFIRST_NAME:
Emma\nLAST_NAME: Jones\nEMAIL: emma.jones@example.com\nPHONE:
555-567-8901\nADDRESS: 987 Cedar St, San Francisco, CA 94107',
metadata={}), Document(page_content='CUSTOMER_ID: 7\nFIRST_NAME:
Oliver\nLAST_NAME: Brown\nEMAIL: oliver.brown@example.com\nPHONE:
555-678-9012\nADDRESS: 147 Cherry St, San Francisco, CA 94108',
metadata={}), Document(page_content='CUSTOMER_ID: 8\nFIRST_NAME:
Sophia\nLAST_NAME: Davis\nEMAIL: sophia.davis@example.com\nPHONE:
555-789-0123\nADDRESS: 369 Walnut St, San Francisco, CA 94109',
metadata={}), Document(page_content='CUSTOMER_ID: 9\nFIRST_NAME:
James\nLAST_NAME: Taylor\nEMAIL: james.taylor@example.com\nPHONE:
555-890-1234\nADDRESS: 258 Hawthorn St, San Francisco, CA 94110',
metadata={}), Document(page_content='CUSTOMER_ID: 10\nFIRST_NAME:
Isabella\nLAST_NAME: Wilson\nEMAIL: isabella.wilson@example.com\nPHONE:
555-901-2345\nADDRESS: 963 Aspen St, San Francisco, CA 94111',
metadata={})]
```

**Tests:**

All unit and integration tests have been run and passed successfully.
Additional tests were added to validate the new behavior of the
SnowflakeLoader.

**Checklist:**

- [x] Code changes are covered by tests
- [x] Code passes `make format` and `make lint`
- [x] This PR does not introduce any breaking changes

Please review and let me know if any changes are required.
2023-06-10 13:03:50 -07:00
Harrison Chase
62ec10a7f5 bump version to 196 (#5988) 2023-06-10 09:06:35 -07:00
German Martin
736a1819aa LOTR: Lord of the Retrievers. A retriever that merge several retrievers together applying document_formatters to them. (#5798)
"One Retriever to merge them all, One Retriever to expose them, One
Retriever to bring them all and in and process them with Document
formatters."

Hi @dev2049! Here bothering people again!

I'm using this simple idea to deal with merging the output of several
retrievers into one.
I'm aware of DocumentCompressorPipeline and
ContextualCompressionRetriever, but I don't think they allow us to do
something like this. I was also having trouble getting the pipeline
working. Please correct me if I'm wrong.

This allows doing some sort of "retrieval" preprocessing and then using
the curated results anywhere you could use a retriever.
My use case is to build different indexes with different embeddings and sources
for richer results, and then filter them with one or many
document formatters.

I saw some people looking for something like this, here:
https://github.com/hwchase17/langchain/issues/3991
and something similar here:
https://github.com/hwchase17/langchain/issues/5555

This is just a proposal; I know I'm missing tests, etc. If you think
this idea is worth it, I can work on tests and anything you want to
change (a minimal sketch of the usage follows below).
Let me know!
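A minimal sketch of the proposed usage, assuming the class is exposed as `MergerRetriever` (requires `faiss` and an `OPENAI_API_KEY`; the texts are illustrative):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.merger_retriever import MergerRetriever
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
db_a = FAISS.from_texts(["LangChain ships many retrievers."], embeddings)
db_b = FAISS.from_texts(["MergerRetriever combines their results."], embeddings)

# Results from each underlying retriever are merged into a single list,
# which can then be post-processed by document compressors / filters.
lotr = MergerRetriever(retrievers=[db_a.as_retriever(), db_b.as_retriever()])
docs = lotr.get_relevant_documents("how are retriever results combined?")
```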

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-10 08:41:02 -07:00
Lance Martin
f3e7ac0a2c Add load() to snowflake loader (#5956)
Quick fix for recently added [snowflake data
loader](https://github.com/hwchase17/langchain/pull/5825/files).
2023-06-09 11:27:29 -07:00
Harrison Chase
3678cba0be bump ver to 195 (#5949) 2023-06-09 09:17:08 -07:00
Harrison Chase
7af186fddf fixes to docs (#5919) 2023-06-09 09:15:53 -07:00
Kacper Łukawski
7cc200766e Expose full params in Qdrant (#5947)
# Expose full params in Qdrant

There were many questions regarding supporting some additional
parameters in the Qdrant integration. Qdrant supports many vector search
optimizations that were impossible to use directly from the integration before.
That includes:

1. The possibility to manipulate collection params while using
`Qdrant.from_texts`. The PR allows setting things such as quantization,
HNSW config, optimizers config, etc. That makes it consistent with the raw
`QdrantClient`.
2. Extended options while searching. This includes HNSW options, exact
search, score threshold filtering, and read consistency in distributed
mode.

After merging that PR, #4858 might also be closed.

## Who can review?

VectorStores / Retrievers / Memory

@dev2049 @hwchase17
2023-06-09 08:56:32 -07:00
Rubén Martínez
db7ef635c0 Add support for the endpoint URL in DynamoDBChatMesasgeHistory (#5836)
This PR adds the possibility of specifying the endpoint URL to AWS in
the DynamoDBChatMessageHistory, so that it is possible to target not
only the AWS cloud services, but also a local installation.

Specifying the endpoint URL, which is normally not done when addressing
the cloud services, is very helpful when targeting a local instance
(like [Localstack](https://localstack.cloud/)) when running local tests.
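A minimal sketch of targeting a local instance (table name, session id, and port are placeholders; requires `boto3` and an existing table):

```python
from langchain.memory import DynamoDBChatMessageHistory

history = DynamoDBChatMessageHistory(
    table_name="SessionTable",
    session_id="test-session",
    # The new argument: point at Localstack / DynamoDB Local instead of AWS.
    endpoint_url="http://localhost:4566",
)
history.add_user_message("hello from a local integration test")
print(history.messages)
```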

Fixes #5835

#### Who can review?

Tag maintainers/contributors who might be interested: @dev2049


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-08 23:21:11 -07:00
Lior
0eb1bc1a02 Fix the issue where the parameters passed to VertexAI ignored #5889 (#5891)
Fixes #5889 and fixes the name of the argument in init_vertexai
@hwchase17
@agola11

Co-authored-by: Lior Durahly <lior.durahly@superwise.ai>
2023-06-08 23:15:22 -07:00
Fei Wang
63fcf41bea Fix openai proxy error (#5914)
Fixes proxy error.
Since openai does not parse proxy parameters and uses openai.proxy
directly, the proxy method needs to be modified.


7610c5adfa/openai/api_requestor.py (LL90)

#### Who can review?
  @hwchase17 - project lead

  Models
  - @hwchase17
  - @agola11

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-08 23:15:06 -07:00
felpigeon
2791a753bf Add start index to metadata in TextSplitter (#5912)

#### Add start index to metadata in TextSplitter

- Modified method `create_documents` to track start position of each
chunk
- The `start_index` is included in the metadata if the `add_start_index`
parameter in the class constructor is set to `True`

This enables referencing back to the original document, particularly
useful when a specific chunk is retrieved.
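A minimal sketch of the new flag:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

text = "LangChain is a framework for developing applications powered by language models."
splitter = RecursiveCharacterTextSplitter(
    chunk_size=40,
    chunk_overlap=0,
    add_start_index=True,  # puts each chunk's offset into metadata["start_index"]
)
for doc in splitter.create_documents([text]):
    print(doc.metadata["start_index"], repr(doc.page_content))
```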


#### Who can review?

Tag maintainers/contributors who might be interested:
@eyurtsev @agola11
2023-06-08 23:09:32 -07:00
Philip Kiely - Baseten
a09a0e3511 Baseten integration (#5862)
This PR adds a Baseten integration. I've done my best to follow the
contributor's guidelines and add docs, an example notebook, and an
integration test modeled after similar integrations' test.

Please let me know if there is anything I can do to improve the PR. When
it is merged, please tag https://twitter.com/basetenco and
https://twitter.com/philip_kiely as contributors (the note on the PR
template said to include Twitter accounts)
2023-06-08 23:05:57 -07:00
Tamara Lazarevic
0ce8745928 Fix typo (#5894) 2023-06-08 23:05:22 -07:00
Andrew Grangaard
d8ae925425 arxiv: Correct name of search client attribute to 'arxiv_search' from incorrect 'arxiv_client' (#5917)
+ this private attribute is referenced as `arxiv_search` in internal
usage and is set when verifying the environment

twitter: @spazm 


#### Who can review?

Any of @hwchase17, @leo-gan, or @bongsang might be interested in
reviewing.

+ Mismatch between `arxiv_client` attribute vs `arxiv_search` in
validation and usage is present in the initial commit by @hwchase17.
+ @leo-gan has made most of the edits.
+ @bongsang implemented pdf download.
2023-06-08 22:49:11 -07:00
sergiolrinditex
fe8bbc2da7 Create snowflake Loader (#5825)

---------

Co-authored-by: rlm <pexpresss31@gmail.com>
2023-06-08 22:03:00 -07:00
Zander Chase
77c286cf02 Use LCP Client in Tracer (#5908)
Move the LCP calls to the client.
2023-06-08 21:15:14 -07:00
Frank Hübner
3ec6400d70 Feature/add AWS Kendra Index Retriever (#5856)
Adding a new retriever for AWS Kendra.

@dev2049 please take a look!
2023-06-08 15:44:09 -07:00
Piyush Jain
a6ebffb695 Fixes model arguments for amazon models (#5896)
Fixes #5713 
#### Who can review?

Tag maintainers/contributors who might be interested:
@hwchase17
@agola11
@aarora79
@rsgrewal-aws
2023-06-08 14:16:01 -07:00
小铭
767fa91eae Fix the shortcut conflict for document page search (#5874)
Fix the documentation page opening both search and Mendable when pressing
Ctrl+K.
I have changed the shortcut for Mendable to Ctrl+J.




#### Who can review?
  @hwchase17
Tag maintainers/contributors who might be interested:

2023-06-08 14:15:19 -07:00
Zander Chase
5f74db4500 Update run eval imports in init (#5858) 2023-06-08 10:44:36 -07:00
warjiang
511c12dd39 fix: update qa_chain doc for "chai_type" (#5877)
The `load_qa_with_sources_chain` method already supports four chain types,
including `map_rerank`. This updates the documentation to prevent any
misunderstandings 😀.

![image](https://github.com/hwchase17/langchain/assets/6478745/325260b2-6121-4900-aef9-001febff811a)

Fixes # (issue)
No, just updates the documentation.


#### Who can review?
@hwchase17 
Tag maintainers/contributors who might be interested:

2023-06-08 07:32:51 -07:00
Harrison Chase
893d20f735 bump version to 194 (#5866) 2023-06-07 22:47:48 -07:00
Harrison Chase
35cfd25db3 Harrison/nebula graph (#5865)
Co-authored-by: Wey Gu <weyl.gu@gmail.com>
Co-authored-by: chenweisomebody <chenweisomebody@gmail.com>
2023-06-07 21:56:43 -07:00
Harrison Chase
658f8bdee7 Harrison/fauna loader (#5864)
Co-authored-by: Shadid12 <Shadid12@users.noreply.github.com>
2023-06-07 21:32:23 -07:00
Liang Zhang
5518f24ec3 Implement saving and loading of RetrievalQA chain (#5818)

Fixes #3983
Mimicking what we do for saving and loading the VectorDBQA chain, I added the
logic for the RetrievalQA chain.
Also added a unit test. I did not find how we test other chains for
their saving and loading functionality, so I just added a file with one
test case. Let me know if there are recommended ways to test it.


#### Who can review?

Tag maintainers/contributors who might be interested:
@dev2049

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-07 21:07:13 -07:00
Liang Zhang
b93638ef1e Refactor and update databricks integration page (#5575)
2023-06-07 20:45:47 -07:00
volodymyr-memsql
a1549901ce Added SingleStoreDB Vector Store (#5619)
- Added the `SingleStoreDB` vector store, a wrapper over the SingleStore DB
database that can be used as vector storage and supports efficient
similarity search.
- Added integration tests for the vector store
- Added a Jupyter notebook with an example
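A minimal usage sketch, assuming the connection is configured via the `SINGLESTOREDB_URL` environment variable (or an equivalent connection argument) and that the table name parameter is `table_name`; see the integration notebook for the exact options:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import SingleStoreDB

# Assumes OPENAI_API_KEY and SINGLESTOREDB_URL (user:password@host:port/db) are set.
docsearch = SingleStoreDB.from_texts(
    ["foo", "bar", "baz"],
    OpenAIEmbeddings(),
    table_name="notebook",  # parameter name assumed from the integration notebook
)
docs = docsearch.similarity_search("foo")
```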

@dev2049

---------

Co-authored-by: Volodymyr Tkachuk <vtkachuk-ua@singlestore.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-07 20:45:33 -07:00
jjzhuo
78aa59c68b Fix serialization issue with W&B (#5693)
The chain input_documents are not displaying properly in W&B due to a
serialization issue:

<img width="1164" alt="Screenshot 2023-06-04 at 11 58 26 AM"
src="https://github.com/hwchase17/langchain/assets/134809928/f31f14f6-0935-4cca-9913-6760cd40eadf">

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-07 20:44:59 -07:00
Alec Flett
ec0dd6e34a propagate callbacks to ConversationalRetrievalChain (#5572)
# Allow callbacks to monitor ConversationalRetrievalChain


I ran into an issue where load_qa_chain was not passing the callbacks
down to the child LLM chains, and so made sure that callbacks are
propagated. There are probably more improvements to do here but this
seemed like a good place to stop.

Note that I saw a lot of references to callbacks_manager, which seems to
be deprecated. I left that code alone for now.




## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@agola11
2023-06-07 20:25:21 -07:00
Jeff Vestal
3294774148 Add knn and query search field options to ElasticKnnSearch (#5641)
In the `ElasticKnnSearch` class, added 2 arguments that were not exposed
properly.

`knn_search` added:
- `vector_query_field: Optional[str] = 'vector'` (field name to use in knn search if not the default `'vector'`)

`knn_hybrid_search` added:
- `vector_query_field: Optional[str] = 'vector'` (field name to use in knn search if not the default `'vector'`)
- `query_field: Optional[str] = 'text'` (field name to use in search if not the default `'text'`)



Fixes # https://github.com/hwchase17/langchain/issues/5633


cc: @dev2049 @hwchase17

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-07 20:19:14 -07:00
Mark Marryatt
cef79ca579 Fix exporting GCP Vertex Matching Engine from vectorstores (#5793)
The Vertex Matching Engine docs include [the
line](b177a29d3f/docs/modules/indexes/vectorstores/examples/matchingengine.ipynb (L32))
`from langchain.vectorstores import MatchingEngine` which doesn't work
as it wasn't added to the vectorestores module exports.



  - @dev2049
2023-06-07 19:45:33 -07:00
Dave Ingram
106364a45c Update to Getting Started docs page for Memory (#5855)
Simply fixing a small typo in the memory page. 

Also removed an extra code block at the end of the file.

Along the way, the current outputs seem to have changed in a few places,
so I left those for posterity and updated the number of runs, which seems
harmless, though I can clean that up if preferred.
2023-06-07 19:45:21 -07:00
bnassivet
9355e3f5f5 qdrant vector store - search with relevancy scores (#5781)
Implementation of `similarity_search_with_relevance_scores` for the Qdrant
vector store.
As implemented, the method is also compatible with other capabilities such
as filtering.

Integration tests updated.


#### Who can review?

Tag maintainers/contributors who might be interested:

  VectorStores / Retrievers / Memory
  - @dev2049
2023-06-07 19:26:40 -07:00
Ning Ren
f15763518a docs: add Shale Protocol integration guide (#5814)
This PR adds documentation for Shale Protocol's integration with
LangChain.

[Shale Protocol](https://shaleprotocol.com) provides forever-free
production-ready inference APIs to the open-source community. We have
global data centers and plan to support all major open LLMs (estimated
~1,000 by 2025).

The team consists of software and ML engineers, AI researchers,
designers, and operators across North America and Asia. Combined
together, the team has 50+ years experience in machine learning, cloud
infrastructure, software engineering and product development. Team
members have worked at places like Google and Microsoft.

#### Who can review?

Tag maintainers/contributors who might be interested:

  - @hwchase17
  - @agola11

---------

Co-authored-by: Karen Sheng <46656667+karensheng@users.noreply.github.com>
2023-06-07 19:25:59 -07:00
Duarte OC
137da7e4b6 Update microsoft loader example with docx2txt dependency (#5832)
@eyurtsev
2023-06-07 19:21:48 -07:00
Aidan Holland
9f4b720a63 Add additional VertexAI Params (#5837)
## Changes

- Added the `stop` param to the `_VertexAICommon` class so it can be set
at llm initialization

## Example Usage

```python
VertexAI(
    # ...
    temperature=0.15,
    max_output_tokens=128,
    top_p=1,
    top_k=40,
    stop=["\n```"],
)
```

## Possible Reviewers

- @hwchase17 
- @agola11
2023-06-07 19:20:37 -07:00
Eduard van Valkenburg
76fcd96dae Add logging in PBI tool (#5841)

Add some logging into the powerbi tool so that you can see the queries
being sent to PBI and attempts to correct them.


#### Who can review?

Tag maintainers/contributors who might be interested: @vowelparrot 

2023-06-07 19:19:21 -07:00
Matt Robinson
11fec7d4d1 feat: Add UnstructuredCSVLoader for CSV files (#5844)
### Summary

Adds an `UnstructuredCSVLoader` for loading CSVs. One advantage of using
`UnstructuredCSVLoader` relative to the standard `CSVLoader` is that if
you use `UnstructuredCSVLoader` in `"elements"` mode, an HTML
representation of the table will be available in the metadata.
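A small sketch of the `"elements"` mode; the file path is a placeholder and the `text_as_html` metadata key is an assumption based on how other Unstructured loaders expose table HTML:

```python
from langchain.document_loaders import UnstructuredCSVLoader

loader = UnstructuredCSVLoader("example_data/stanley-cups.csv", mode="elements")
docs = loader.load()

# In "elements" mode an HTML rendering of the table should be available in the metadata.
print(docs[0].metadata.get("text_as_html"))
```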

#### Who can review?

@hwchase17
 @eyurtsev
2023-06-07 19:18:01 -07:00
Soos3D
0b4a51930c Add how to use a custom scraping function with the sitemap loader. (#5847)
Hi! I just added an example of how to use a custom scraping function
with the sitemap loader. I recently used this feature and had to dig in
the source code to find it. I thought it might be useful to other devs
to have an example in the Jupyter Notebook directly.

I only added the example to the documentation page. 

@eyurtsev I was not able to run the lint. Please let me know if I have
to do anything else.

I know this is a very small contribution, but I hope it will be
valuable. My Twitter handle is @web3Dav3.
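For readers landing here, a sketch of what the notebook example demonstrates, assuming `SitemapLoader` accepts a `parsing_function` that receives the BeautifulSoup-parsed page; the sitemap URL and the cleanup logic are placeholders:

```python
from bs4 import BeautifulSoup
from langchain.document_loaders import SitemapLoader

def remove_nav_and_header(content: BeautifulSoup) -> str:
    # Custom scraping function: strip navigation elements before extracting text.
    for element in content.find_all(["nav", "header"]):
        element.decompose()
    return content.get_text()

loader = SitemapLoader(
    "https://python.langchain.com/sitemap.xml",  # placeholder sitemap URL
    parsing_function=remove_nav_and_header,
)
docs = loader.load()
```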

2023-06-07 19:16:51 -07:00
Yessen Kanapin
c66755b661 Add DeepInfra embeddings integration with tests and examples, better exception handling for Deep Infra LLM (#5854)
#### Who can review?

Tag maintainers/contributors who might be interested:
  @hwchase17 - project lead
  - @agola11

---------

Co-authored-by: Yessen Kanapin <yessen@deepinfra.com>
2023-06-07 19:14:30 -07:00
ugfly1210
4d8cda1c3b FIX: backslash escaped (#5815)

When a separator contains a backslash (e.g. the LaTeX separator `\n\chapter`),
`LatexTextSplitter` must escape it before handing it to `re.split`; otherwise it
reports an error:
(re.error: bad escape \c at position 1 (line 2, column 1))
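A minimal reproduction of the underlying regex behaviour, independent of LangChain:

```python
import re

text = "intro\n\\chapter{One} body"

try:
    # The pattern reaches the regex engine as a newline followed by "\c",
    # which is an invalid escape sequence.
    re.split("\n\\chapter", text)
except re.error as exc:
    print(exc)  # bad escape \c at position 1

# Escaping the separator (here via re.escape) works as intended.
parts = re.split(re.escape("\n\\chapter"), text)
```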



#### Who can review?

@hwchase17  @dev2049 


Co-authored-by: Pang <ugfly@qq.com>
2023-06-07 16:01:07 -07:00
Zander Chase
3af36943e8 Rm extraneous args to the trace group helper (#5801)
These are being ignored
2023-06-07 13:09:29 -07:00
whysage
8ef7274ee6 feat: issue-5712 add sleep tool (#5715)
Fixes #5712: added a sleep tool.
2023-06-07 09:39:02 -07:00
Zander Chase
d9fcc45d05 Add in the async methods and link the run id (#5810) 2023-06-07 08:27:44 -07:00
Harrison Chase
ce7c11625f bump version to 193 (#5838) 2023-06-07 07:38:57 -07:00
warjiang
5a207cce8f fix: fullfill openai params when embedding (#5821)

Fixes #5822 
I upgraded my langchain lib by executing `pip install -U langchain`, and
the version is 0.0.192. But I found that openai.api_base is not working. I
use the Azure OpenAI service as the openai backend, so openai.api_base is very
important for me. I compared tag/0.0.192 and tag/0.0.191 and figured
out that:

![image](https://github.com/hwchase17/langchain/assets/6478745/e183fdb2-8224-45c9-b3b4-26d62823999a)
The openai params were moved inside the `_invocation_params` function and are
used in some openai invocations:

![image](https://github.com/hwchase17/langchain/assets/6478745/5a55a048-5fa9-4bf4-aaef-3902226bec5e)

![image](https://github.com/hwchase17/langchain/assets/6478745/85b8cebc-eeb8-4538-a525-814719c8f8df)
but some cases are still not covered, like:

![image](https://github.com/hwchase17/langchain/assets/6478745/e0297620-f2b2-4f4f-98bd-d0ed19022dac)


#### Who can review?

Tag maintainers/contributors who might be interested:
@hwchase17 


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-07 07:32:57 -07:00
Harrison Chase
b3ae6bcd3f bump ver to 192 (#5812) 2023-06-06 22:23:11 -07:00
Harrison Chase
5468528748 rm docs mongo (#5811) 2023-06-06 22:22:44 -07:00
Andrew Switlyk
69f4ffb851 Update adding_memory.ipynb (#5806)
just change "to" to "too" so it matches the above prompt

2023-06-06 22:10:53 -07:00
Sun bin
2be4fbb835 add doc about reusing MongoDBAtlasVectorSearch (#5805)
DOC: add doc about reusing MongoDBAtlasVectorSearch

#### Who can review?

Anyone authorized.
2023-06-06 22:10:36 -07:00
bnassivet
062c3c00a2 fixed faiss integ tests (#5808)
Fixes #5807

Realigned tests with the implementation.
Also reinforced folder uniqueness for the test_faiss_local_save_load test
using a date-time suffix.

#### Before submitting

- Integration test updated
- formatting and linting ok (locally) 

#### Who can review?

Tag maintainers/contributors who might be interested:

  @hwchase17 - project lead
  VectorStores / Retrievers / Memory
  - @dev2049
2023-06-06 22:07:27 -07:00
SvMax
92b87c2fec added support for different types in ResponseSchema class (#5789)
I added support for specifying different types with `ResponseSchema` objects:

## before
`extracted_info = ResponseSchema(name="extracted_info", description="List of extracted information")`
generates the following doc: ```json\n{\n\t\"extracted_info\": string // List of extracted information}```
This leads GPT to produce a JSON with only one string in the specified
field, even if you requested a list in the description.

## now
`extracted_info = ResponseSchema(name="extracted_info", type="List[string]", description="List of extracted information")`
generates the following doc: ```json\n{\n\t\"extracted_info\": List[string] // List of extracted information}```
This way the model responds better to the prompt, generating an array of
strings.
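A short sketch of the new `type` field together with `StructuredOutputParser` (the exact wording of the format instructions may differ slightly between releases):

```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

extracted_info = ResponseSchema(
    name="extracted_info",
    type="List[string]",  # new: overrides the default "string" type hint
    description="List of extracted information",
)
parser = StructuredOutputParser.from_response_schemas([extracted_info])

# The format instructions now advertise List[string] instead of string.
print(parser.get_format_instructions())
```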

Tag maintainers/contributors who might be interested:
  Agents / Tools / Toolkits
  @vowelparrot

I don't know who might be interested; I suppose this is a tool, so I tagged
you, @vowelparrot. Anyway, it's a minor change and shouldn't impact any other
part of the framework.
2023-06-06 22:00:48 -07:00
Harrison Chase
3954bcf396 WIP: openai settings (#5792)
- [ ] need to test more
- [ ] make sure they aren't saved when serializing
- [ ] do for embeddings
2023-06-06 21:57:58 -07:00
Alex Lee
b7999a9bc1 Add UTF-8 json ouput support while langchain.debug is set to True. (#5802)
Before:
<img width="984" alt="image"
src="https://github.com/hwchase17/langchain/assets/4317474/2b0807b4-a1d6-4df2-87cc-92b1c8e10534">

After:
<img width="992" alt="image"
src="https://github.com/hwchase17/langchain/assets/4317474/128c2c7d-2ed5-4c95-954d-b0964c83526a">


Thanks in advance.

 @agola11
2023-06-06 21:56:33 -07:00
kourosh hakhamaneshi
a0d847f636 [Docs][Hotfix] Fix broken links (#5800)

Some links were broken from the previous merge. This PR fixes them.
Tested locally.


Signed-off-by: Kourosh Hakhamaneshi <kourosh@anyscale.com>
2023-06-06 17:17:16 -07:00
Zander Chase
217b5cc72d Base RunEvaluator Chain (#5750)
Clean up a bit and only implement the QA and reference free
implementations from https://github.com/hwchase17/langchain/pull/5618
2023-06-06 16:42:15 -07:00
Lance Martin
4092fd21dc YoutubeAudioLoader and updates to OpenAIWhisperParser (#5772)
This introduces the `YoutubeAudioLoader`, which will load blobs from a
YouTube url and write them. Blobs are then parsed by
`OpenAIWhisperParser()`, as show in this
[PR](https://github.com/hwchase17/langchain/pull/5580), but we extend
the parser to split audio such that each chunk meets the 25MB OpenAI
size limit. As shown in the notebook, this enables a very simple UX:

```
# Transcribe the video to text
loader = GenericLoader(YoutubeAudioLoader([url],save_dir),OpenAIWhisperParser())
docs = loader.load()
``` 

Tested on full set of Karpathy lecture videos:

```
# Karpathy lecture videos
urls = ["https://youtu.be/VMj-3S1tku0",
        "https://youtu.be/PaCmpygFfXo",
        "https://youtu.be/TCH_1BHY58I",
        "https://youtu.be/P6sfmUTpUmc",
        "https://youtu.be/q8SA3rM6ckI",
        "https://youtu.be/t3YJ5hKiMQ0",
        "https://youtu.be/kCc8FmEb1nY"]

# Directory to save audio files 
save_dir = "~/Downloads/YouTube"
 
# Transcribe the videos to text
loader = GenericLoader(YoutubeAudioLoader(urls,save_dir),OpenAIWhisperParser())
docs = loader.load()
```
2023-06-06 15:15:08 -07:00
Gengliang Wang
2a4b32dee2 Revise DATABRICKS_API_TOKEN as DATABRICKS_TOKEN (#5796)

In the [Databricks
integration](https://python.langchain.com/en/latest/integrations/databricks.html)
and [Databricks
LLM](https://python.langchain.com/en/latest/modules/models/llms/integrations/databricks.html),
we suggestted users to set the ENV variable `DATABRICKS_API_TOKEN`.
However, this is inconsistent with the other Databricks library. To make
it consistent, this PR changes the variable from `DATABRICKS_API_TOKEN`
to `DATABRICKS_TOKEN`

After changes, there is no more `DATABRICKS_API_TOKEN` in the doc
```
$ git grep DATABRICKS_API_TOKEN|wc -l
0

$ git grep DATABRICKS_TOKEN|wc -l
8
```
cc @hwchase17 @dev2049 @mengxr since you have reviewed the previous PRs.
2023-06-06 14:22:49 -07:00
Paul-Emile Brotons
daf3e99b96 fixing from_documents method of the MongoDB Atlas vector store (#5794)
FIxed a bug in from_documents method --> Collection objects do not
implement truth value testing or bool().
@dev2049
2023-06-06 14:22:23 -07:00
Ankush Gola
b177a29d3f support returning run info for llms, chat models and chains (#5666)
returning the run id is important for accessing the run later on
2023-06-06 10:07:46 -07:00
Yoann Poupart
65111eb2b3 Attribute support for html tags (#5782)
# What does this PR do?

Change the HTML tags so that a tag with attributes can be found.

## Before submitting

- [x] Tests added
- [x] CI/CD validated

### Who can review?

Anyone in the community is free to review the PR once the tests have
passed. Feel free to tag
members/contributors who may be interested in your PR.
2023-06-06 09:27:37 -07:00
Zander Chase
0cfaa76e45 Set Falsey (#5783)
Seems natural to try to disable logging by setting `MY_VAR=false` rather
than unsetting (especially once you've already set it in the background)
2023-06-06 09:26:38 -07:00
Harrison Chase
2ae2d6cd1d fix ver 191 (#5784) 2023-06-06 09:17:23 -07:00
Zander Chase
204a73c1d9 Use client from LCP-SDK (#5695)
- Remove the client implementation (this breaks backwards compatibility
for existing testers; I could keep the stub in that file if we want, but
not many people are using it yet)
- Add SDK as dependency
- Update the 'run_on_dataset' method to be a function that optionally
accepts a client as an argument
- Remove the langchain plus server implementation (you get it for free
with the SDK now)

We could make the SDK optional for now, but the plan is to use w/in the
tracer so it would likely become a hard dependency at some point.
2023-06-06 06:51:05 -07:00
Harrison Chase
08e2352f7b bump ver 191 (#5766) 2023-06-05 20:54:08 -07:00
berkedilekoglu
f907b62526 Scores are explained in vectorestore docs (#5613)
# Scores in Vectorstores' Docs Are Explained

The following vector stores can return scores with similar documents by
using `similarity_search_with_score`:
- chroma
- docarray_hnsw
- docarray_in_memory
- faiss
- myscale
- qdrant
- supabase
- vectara
- weaviate

However, in the docs these scores were either not explained at all or
explained in a way that could lead to misunderstandings (e.g., FAISS).
For instance, in the FAISS document: if we treat the score returned by the
function as a similarity score, we would conclude that a document with a
higher score is more similar to the source document. However, since the
scores returned by the function are distance scores, smaller scores actually
correspond to more similar documents.

For the libraries other than Vectara, I wrote down the scores they use by
investigating the source libraries. Since I couldn't be certain
about the score metric used by Vectara, I didn't make any changes to its
documentation. The links mentioned in Vectara's documentation had become
broken due to updates, so I replaced them with working ones.
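As a concrete illustration of the distance-vs-similarity point above, a FAISS sketch (an OpenAI key is assumed; the texts are placeholders):

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

db = FAISS.from_texts(["foo", "bar", "baz"], OpenAIEmbeddings())

for doc, score in db.similarity_search_with_score("foo", k=2):
    # FAISS returns a distance here, so a *smaller* score means a *more* similar document.
    print(doc.page_content, score)
```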

VectorStores / Retrievers / Memory
  - @dev2049

my twitter: [berkedilekoglu](https://twitter.com/berkedilekoglu)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-05 20:39:49 -07:00
Adil Ansari
233b52735e feat: Support for Tigris Vector Database for vector search (#5703)
### Changes
- New vector store integration - [Tigris](https://tigrisdata.com)
- Adds [tigrisdb](https://pypi.org/project/tigrisdb/) optional
dependency
- Example notebook demonstrating usage

Fixes #5535 
Closes tigrisdata/tigris-client-python#40

#### Twitter handles
We'd love a shoutout on our
[@TigrisData](https://twitter.com/TigrisData) and
[@adilansari](https://twitter.com/adilansari) twitter handles

#### Who can review?
@dev2049

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-05 20:39:16 -07:00
Edrick Da Corte Henriquez
38dabdbb3a Update tutorials.md (#5761)
# Added an overview of LangChain modules

Aimed at introducing newcomers to LangChain's main modules :)

Twitter handle is @edrick_dch 

## Who can review?

@eyurtsev
2023-06-05 20:37:11 -07:00
Ankush Gola
84a46753ab Tracing Group (#5326)
Add context manager to group all runs under a virtual parent

---------

Co-authored-by: vowelparrot <130414180+vowelparrot@users.noreply.github.com>
2023-06-05 19:18:43 -07:00
Ilya
d5b1608216 fix markdown text splitter horizontal lines (#5625)
Fixes #5614 

#### Issue

The `***` combination produces an exception when used as a separator in
`re.split`. Instead, `\*\*\*` should be used in regex expressions.
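A quick standalone reproduction with `re`:

```python
import re

text = "first section\n***\nsecond section"

try:
    re.split("***", text)  # "*" with nothing to repeat -> re.error
except re.error as exc:
    print(exc)

parts = re.split(r"\*\*\*", text)  # escaped separator splits correctly
```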

#### Who can review?

@eyurtsev
2023-06-05 16:40:26 -07:00
Harrison Chase
25487fa5ee Harrison/youtube multi language (#5758)
Co-authored-by: rafly lesmana <raflylesmana111@gmail.com>
2023-06-05 16:38:07 -07:00
Shelby Jenkins
2dcda8a8ac Strips whitespace and \n from loc before filtering urls from sitemap (#5728)
Fixes #5699 



#### Who can review?

Tag maintainers/contributors who might be interested:

@woodworker @LeSphax @johannhartmann

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-05 16:33:55 -07:00
Harrison Chase
98dd6d068a cohere retries (#5757)

A minor update to retry the Cohere API call in case of errors using tenacity,
as is done for the OpenAI LLMs.
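A sketch of the tenacity pattern (the decorator parameters and the `client.generate` call are illustrative, not the exact values used in the wrapper):

```python
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(6), wait=wait_exponential(multiplier=1, min=4, max=10))
def completion_with_retry(client, prompt: str, **kwargs):
    # Retries the Cohere call with exponential backoff if it raises.
    return client.generate(prompt=prompt, **kwargs)
```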

#### Who can review?

@hwchase17, @agola11 


---------

Co-authored-by: Sagar Sapkota <22609549+sagar-spkt@users.noreply.github.com>
2023-06-05 16:28:58 -07:00
M Waleed Kadous
5124c1e0d9 Add aviary support (#5661)
Aviary is an open source toolkit for evaluating and deploying open
source LLMs. You can find out more about it at
[github.com/ray-project/aviary](http://github.com/ray-project/aviary). You can try it out at
[aviary.anyscale.com](http://aviary.anyscale.com).

This code adds support for Aviary in LangChain. To minimize
dependencies, it connects directly to the HTTP endpoint.

The current implementation is not accelerated and uses the default
implementation of `predict` and `generate`.

It includes a test and a simple example. 

@hwchase17 and @agola11 could you have a look at this?

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-05 16:28:42 -07:00
felpigeon
a47c8618ec Add class attribute "return_generated_question" to class "BaseConversationalRetrievalChain" (#5749)

Adding a class attribute "return_generated_question" to class
"BaseConversationalRetrievalChain". If set to `True`, the chain's output
has a key "generated_question" with the question generated by the
sub-chain `question_generator` as the value. This way the generated
question can be logged.
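A usage sketch, assuming `from_llm` forwards the new flag to the chain constructor (the texts and question are placeholders; an OpenAI key is assumed):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

retriever = FAISS.from_texts(
    ["LangChain supports conversational retrieval."], OpenAIEmbeddings()
).as_retriever()

chain = ConversationalRetrievalChain.from_llm(
    ChatOpenAI(temperature=0),
    retriever=retriever,
    return_generated_question=True,  # new class attribute from this PR
)
result = chain({"question": "What does LangChain support?", "chat_history": []})

# The rephrased question produced by the question_generator sub-chain:
print(result["generated_question"])
```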


#### Who can review?

@dev2049 @vowelparrot
2023-06-05 16:10:12 -07:00
Leonid Ganeline
87ad4fc4b2 docs: updated ecosystem/dependents (#5753)
updated `ecosystem/dependents` data (it was updated 2+ weeks ago)

#### Who can review?

@hwchase17 
@eyurtsev
@dev2049
2023-06-05 16:09:55 -07:00
Leonid Ganeline
92a5f00ffb docs: ecosystem/integrations update 5 (#5752)
- added a missing integration to `docs/ecosystem/integrations/`
- updated notebooks to a consistent format: changed titles, file names;
added descriptions

#### Who can review?
 @hwchase17 
 @dev2049
2023-06-05 16:08:55 -07:00
Lance Martin
aea090045b Create OpenAIWhisperParser for generating Documents from audio files (#5580)
# OpenAIWhisperParser

This PR creates a new parser, `OpenAIWhisperParser`, that uses the
[OpenAI Whisper
model](https://platform.openai.com/docs/guides/speech-to-text/quickstart)
to perform transcription of audio files to text (`Documents`). Please
see the notebook for usage.
2023-06-05 15:51:13 -07:00
Hao Chen
a4c9053d40 Integrate Clickhouse as Vector Store (#5650)

#### Description

This PR is mainly to integrate the open source version of ClickHouse as a
vector store, as it is easy for both local development and for adoption of
LangChain by enterprises that already have large-scale ClickHouse
deployments.

ClickHouse is an open source real-time OLAP database with full SQL
support and a wide range of functions to assist users in writing
analytical queries. Some of these functions and data structures perform
distance operations between vectors, [enabling ClickHouse to be used as
a vector
database](https://clickhouse.com/blog/vector-search-clickhouse-p1).
Recently added ClickHouse capabilities like [Approximate Nearest
Neighbour (ANN)
indices](https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/annindexes)
support faster approximate matching of vectors and provide a promising
development aimed to further enhance the vector matching capabilities of
ClickHouse.

In LangChain, some ClickHouse-based commercial variant vector stores, such as
[Chroma](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/chroma.py)
and
[MyScale](https://github.com/hwchase17/langchain/blob/master/langchain/vectorstores/myscale.py),
are already integrated. But for some enterprises with large-scale ClickHouse
cluster deployments, it will be more straightforward to upgrade existing
ClickHouse infrastructure instead of moving to another, similar vector store
solution, so we believe it's a valid requirement to integrate the open source
version of ClickHouse as a vector store.

As `clickhouse-connect` is already included by other integrations, this
PR won't include any new dependencies.
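A minimal sketch of the new integration, assuming a locally running ClickHouse server; the `ClickhouseSettings` field names and values shown are placeholders to adapt to your deployment:

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Clickhouse, ClickhouseSettings

# Placeholder connection settings for a local ClickHouse instance.
settings = ClickhouseSettings(host="localhost", port=8123, table="langchain_demo")

docsearch = Clickhouse.from_texts(["foo", "bar"], OpenAIEmbeddings(), config=settings)
docs = docsearch.similarity_search("foo")
```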

#### Before submitting


1. Added a test for the integration:
https://github.com/haoch/langchain/blob/clickhouse/tests/integration_tests/vectorstores/test_clickhouse.py
2. Added an example notebook and document showing its use: 
* Notebook:
https://github.com/haoch/langchain/blob/clickhouse/docs/modules/indexes/vectorstores/examples/clickhouse.ipynb
* Doc:
https://github.com/haoch/langchain/blob/clickhouse/docs/integrations/clickhouse.md


#### Who can review?

Tag maintainers/contributors who might be interested:

 
@hwchase17 @dev2049 Could you please help review?

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-05 13:32:04 -07:00
Gustavo Brian
2f2d27fd82 Error in documentation: Chroma constructor (#5731)
`Chroma("langchain_store", embeddings.embed_query)` must be
`Chroma("langchain_store", embeddings)`
2023-06-05 13:30:58 -07:00
George Geddes
019eb13681 Fix a typo in the documentation for the Slack document loader (#5745)
Fixes a typo I noticed while reading the docs.
2023-06-05 13:30:24 -07:00
Andrew Grangaard
450eb91fe2 Removes unnecessary backslash escaping for backticks in python (#5751)
Fixed python deprecation warning:
    DeprecationWarning: invalid escape sequence '`'
    
backticks (`) do not have special meaning in python strings and should
not be escaped.

-- @spazm on twitter

### Who can review:

@nfcampos ported this change from javascript, @hwchase17 wrote the
original STRUCTURED_FORMAT_INSTRUCTIONS,
2023-06-05 13:30:11 -07:00
Daniel Chalef
0551bc90a5 Zep Hybrid Search (#5742)
Zep now supports persisting custom metadata with messages and hybrid
search across both message embeddings and structured metadata. This PR
implements custom metadata and enhancements to the
`ZepChatMessageHistory` and `ZepRetriever` classes to implement this
support.

Tag maintainers/contributors who might be interested:

  VectorStores / Retrievers / Memory
  - @dev2049

---------

Co-authored-by: Daniel Chalef <daniel.chalef@private.org>
2023-06-05 12:59:28 -07:00
Tomaz Bratanic
a0ea6f6b6b Cypher search: Check if generated Cypher is provided in backticks (#5541)
# Check if generated Cypher code is wrapped in backticks

Some LLMs, like VertexAI, like to explain how they generated the
Cypher statement and wrap the actual code in three backticks:

![Screenshot from 2023-06-01
08-08-23](https://github.com/hwchase17/langchain/assets/19948365/1d8eecb3-d26c-4882-8f5b-6a9bc7e93690)


I have observed a similar pattern with OpenAI chat models in conversational
settings, where multiple user and assistant messages are provided to the LLM
to generate Cypher statements, and the LLM then wants to apologize for
previous steps or explain its thoughts. Interestingly, both OpenAI and
VertexAI wrap the code in three backticks if they are doing any explaining or
apologizing. Checking whether the generated Cypher is wrapped in backticks
seems like low-hanging fruit to extend the Cypher search to other LLMs and
conversational settings.
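A sketch of the backtick check (not the exact implementation):

```python
import re

FENCE = "`" * 3  # a Markdown code fence

def extract_cypher(text: str) -> str:
    # If the LLM wrapped its Cypher statement in a fenced block, keep only the code
    # inside; otherwise return the text unchanged.
    match = re.search(r"`{3}(.*?)`{3}", text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()

reply = f"Sure, here you go:\n{FENCE}\nMATCH (n) RETURN n LIMIT 1\n{FENCE}"
print(extract_cypher(reply))  # -> MATCH (n) RETURN n LIMIT 1
```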
2023-06-05 12:48:13 -07:00
Abhijeet Malamkar
1a9ac3b1f9 Adding support to save multiple memories at a time. Cuts save time by … (#5172)
# Adding support to save multiple memories at a time. Cuts save time by
more than half


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@dev2049
 @vowelparrot

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-05 12:47:48 -07:00
kourosh hakhamaneshi
625717daa8 docs: Added Deploying LLMs into production + a new ecosystem (#4047)
Signed-off-by: Kourosh Hakhamaneshi <kourosh@anyscale.com>
Co-authored-by: Kamil Kaczmarek <kaczmarek.poczta@gmail.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-05 12:47:27 -07:00
Ralph Schlosser
74f8e603d9 Addresses GPT4All wrapper model_type attribute issues #5720. (#5743)
Fixes #5720.

A more in-depth discussion is in my comment here:
https://github.com/hwchase17/langchain/issues/5720#issuecomment-1577047018

In a nutshell, there has been a subtle change in the latest version of
GPT4All's Python bindings. The change I submitted yesterday is compatible
with this version; however, this version is as of yet unreleased, and
thus the code change breaks LangChain's wrapper under the currently
released version of GPT4All.

This pull request proposes a backwards-compatible solution.
2023-06-05 12:45:29 -07:00
Harrison Chase
d0d89d39ef bump version to 190 (#5704) 2023-06-04 20:04:50 -07:00
mheguy-stingray
b64c39dfe7 top_k and top_p transposed in vertexai (#5673)
Fix transposed properties in vertexai model


Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-04 16:59:53 -07:00
Tobias Herbold
3fb0e4872a sqlalchemy MovedIn20Warning declarative_base DEPRICATION fix (#5676)
Fix for the deprecated sqlalchemy `declarative_base` import:

```
MovedIn20Warning: The ``declarative_base()`` function is now available as sqlalchemy.orm.declarative_base(). (deprecated since: 2.0) (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
  Base = declarative_base()  # type: Any
```

The import is wrapped in a try/except block to fall back to the old import if
needed.
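The fallback amounts to something like the following (a sketch of the pattern described above):

```python
try:
    # SQLAlchemy 1.4+ / 2.0: the non-deprecated location.
    from sqlalchemy.orm import declarative_base
except ImportError:
    # Older SQLAlchemy versions: fall back to the legacy import.
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()
```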

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-04 16:52:52 -07:00
Jens Madsen
8d9e9e013c refactor: extract token text splitter function (#5179)
# Token text splitter for sentence transformers

The current TokenTextSplitter only works with OpenAI models via the
`tiktoken` package. This is not clear from the name `TokenTextSplitter`.
In this (first) PR a token-based text splitter for sentence transformer
models is added. In the future I think we should work towards injecting
a tokenizer into the TokenTextSplitter to make it more flexible.
Could perhaps be reviewed by @dev2049
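If the new splitter is exposed as `SentenceTransformersTokenTextSplitter` (class name and parameters assumed from this PR), usage would look roughly like:

```python
from langchain.text_splitter import SentenceTransformersTokenTextSplitter

# Counts tokens with a sentence-transformers tokenizer instead of tiktoken.
splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=0, tokens_per_chunk=128)
chunks = splitter.split_text("LangChain splits long documents into smaller chunks. " * 50)
```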

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-04 14:41:44 -07:00
Nathan Azrak
26ec845921 Raise an exception in MKRL and Chat Output Parsers if parsing text which contains both an action and a final answer (#5609)
Raises exception if OutputParsers receive a response with both a valid
action and a final answer

Currently, if an OutputParser receives a response which includes both an
action and a final answer, they return a FinalAnswer object. This allows
the parser to accept responses which propose an action and hallucinate
an answer without the action being parsed or taken by the agent.

This PR changes the logic to:
1. store a variable checking whether a response contains the
`FINAL_ANSWER_ACTION` (this is the easier condition to check).
2. store a variable checking whether the response contains a valid
action
3. if both are present, raise a new exception stating that both are
present
4. if an action is present, return an AgentAction
5. if an answer is present, return an AgentAnswer
6. if neither is present, raise the relevant exception based around the
action format (these have been kept consistent with the prior exception
messages)

Disclaimer:
* Existing mock data included strings which contained both an action and an
answer. This might indicate that prioritising returning AgentAnswer was
always correct, and I am patching out desired behaviour? @hwchase17 to
advise. Curious if there are allowed cases where this is not
hallucinating, and we do want the LLM to output an action which isn't
taken.
* I have not passed `send_to_llm` through this new exception

Fixes #5601 

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@hwchase17 - project lead
@vowelparrot
2023-06-04 14:40:49 -07:00
Lucas Rodrigues
c112d7334d Update MongoDBChatMessageHistory to create an index on SessionId (#5632)
All queries to the database are based on the SessionId property; this will
optimize how Mongo retrieves all messages from a session.

#### Who can review?

Tag maintainers/contributors who might be interested:
@dev2049
2023-06-04 14:39:56 -07:00
Jason Weill
6c11f94013 Retitles Bedrock doc to appear in correct alphabetical order in site nav (#5639)

Fixes #5638. Retitles "Amazon Bedrock" page to "Bedrock" so that the
Integrations section of the left nav is properly sorted in alphabetical
order.

2023-06-04 14:39:25 -07:00
Will Smith
6e25e65085 SQL agent : Improved prompt engineering prevents agent guessing database column names. (#5671)
@vowelparrot:

Minor change to the SQL agent:

Tells the agent to introspect the schema of the most relevant tables. I
found this dramatically decreases the chance that the agent wastes
time guessing column names.
2023-06-04 14:39:00 -07:00
Nuhman Pk
8f98592ac9 Added Dependencies Status, Open issues and releases badges in Readme.md (#5681)
[![Dependency
Status](https://img.shields.io/librariesio/github/hwchase17/langchain)](https://libraries.io/github/hwchase17/langchain)
[![Open
Issues](https://img.shields.io/github/issues-raw/hwchase17/langchain)](https://github.com/hwchase17/langchain/issues)
[![Release
Notes](https://img.shields.io/github/release/hwchase17/langchain)](https://github.com/hwchase17/langchain/releases)
2023-06-04 14:30:52 -07:00
Harrison Chase
b9040669a0 Harrison/pipeline prompt (#5540)
idea is to make prompts more composable
2023-06-04 14:29:37 -07:00
George Roberts
647210a4b9 Add args_schema to google_places tool (#5680)
Tiny change to actually add the args_schema to the tool.

@vowelparrot
2023-06-04 14:28:46 -07:00
Ralph Schlosser
8fea0529c1 This fixes issue #5651 - GPT4All wrapper loading issue (#5657)
Fixes #5651 

Small typo in wrapper code. Note the `model_type` parameter is currently
unused by GPT4All.

https://github.com/hwchase17/langchain/issues/5651

#### Who can review?
2023-06-04 07:21:16 -07:00
Jiayao Yu
6a3ceaa377 Support similarity_score_threshold retrieval with Chroma (#5655)
Fixes https://github.com/hwchase17/langchain/issues/5067

Verified the following code now works correctly:
```
db = Chroma(persist_directory=index_directory(index_name), embedding_function=embeddings)
retriever = db.as_retriever(search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.4})
docs = retriever.get_relevant_documents(query)
```
2023-06-03 16:57:00 -07:00
Hao Chen
3e45b83065 Improve Error Messaging for APOC Procedure Failure in Neo4jGraph (#5547)
## Improve Error Messaging for APOC Procedure Failure in Neo4jGraph

This commit revises the error message provided when the
'apoc.meta.data()' procedure fails. Previously, the message simply
instructed the user to install the APOC plugin in Neo4j. The new error
message is more specific.

Also removed an unnecessary newline in the Cypher statement variable:
`node_properties_query`.

Fixes #5545 

## Who can review?
  - @vowelparrot
  - @dev2049
2023-06-03 16:56:39 -07:00
Ricardo Reis
33ea606f45 Update youtube.py - Fix metadata validation error in YoutubeLoader (#5479)
This commit addresses a ValueError occurring when the YoutubeLoader
class tries to add datetime metadata from a YouTube video's publish
date. The error was happening because the ChromaDB metadata validation
only accepts str, int, or float data types.

In the `_get_video_info` method of the `YoutubeLoader` class, the
publish date retrieved from the YouTube video was of datetime type. This
commit fixes the issue by converting the datetime object to a string
before adding it to the metadata dictionary.

Additionally, this commit introduces error handling in the
`_get_video_info` method to ensure that all metadata fields have valid
values. If a metadata field is found to be None, a default value is
assigned. This prevents potential errors during metadata validation when
metadata fields are None.

The file modified in this commit is youtube.py.


---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2023-06-03 16:56:17 -07:00
Shuqian
5af2c51e78 refactor: BaseStringMessagePromptTemplate from_template method (#5332)
# refactor BaseStringMessagePromptTemplate from_template method 

Refactor the `from_template` method of the
`BaseStringMessagePromptTemplate` class to allow passing keyword
arguments to the `from_template` method of `PromptTemplate`.
Enable the usage of arguments like `template_format`.
In my scenario, I intend to utilize Jinja2 for formatting the human
message prompt in the chat template.
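A sketch of the Jinja2 use case enabled by the refactor (requires the `jinja2` package; the template text is a placeholder):

```python
from langchain.prompts.chat import HumanMessagePromptTemplate

# Keyword arguments now flow through to PromptTemplate.from_template,
# so a Jinja2-formatted human message template works.
prompt = HumanMessagePromptTemplate.from_template(
    "Answer the question: {{ question }}", template_format="jinja2"
)
message = prompt.format(question="What is LangChain?")
```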

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

<!-- For a quicker response, figure out the right person to tag with @

  @hwchase17 - project lead

  Models
  - @hwchase17
  - @agola11
  - @jonasalexander 

 -->

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-03 16:55:58 -07:00
mbchang
d3bdb8ea6d FileCallbackHandler (#5589)
# like
[StdoutCallbackHandler](https://github.com/hwchase17/langchain/blob/master/langchain/callbacks/stdout.py),
but writes to a file

When running experiments I have found myself wanting to log the outputs
of my chains in a more lightweight way than using WandB tracing. This PR
contributes a callback handler that writes to file what
`StdoutCallbackHandler` would print.
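A minimal sketch, assuming the handler is exported from `langchain.callbacks` and takes the output path as its first argument (an OpenAI key is assumed):

```python
from langchain.callbacks import FileCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = FileCallbackHandler("chain_output.log")
chain = LLMChain(
    llm=OpenAI(),
    prompt=PromptTemplate.from_template("1 + {number} = "),
    callbacks=[handler],  # writes what StdOutCallbackHandler would print to the file
)
chain.run(number=2)
```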


## Example Notebook


See the included `filecallbackhandler.ipynb` notebook for usage. Would
it be better to include this notebook under `modules/callbacks` or under
`integrations/`?

![image](https://github.com/hwchase17/langchain/assets/6439365/c624de0e-343f-4eab-a55b-8808a887489f)


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@agola11

2023-06-03 16:48:48 -07:00
rajib
1c51d3db0f Created fix for 5475 (#5659)
Created fix for #5475.
Currently in PGVector, we do not have a function that returns the
instance of an existing store; `from_documents` always adds embeddings
and then returns the store. This fix adds a function that returns the
instance of an existing store.

Also changed the Jupyter example for PGVector to show how to use the
new function.
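
The changelog does not show the new helper's exact name, but the idea is to attach to a collection that already holds embeddings rather than re-adding documents. A rough sketch of that idea (connection string and collection name are placeholders):

```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector

# Placeholder connection string; replace with your own.
CONNECTION_STRING = "postgresql+psycopg2://user:password@localhost:5432/vectordb"

# Connect to an existing collection instead of calling from_documents(),
# which would add embeddings again before returning the store.
store = PGVector(
    connection_string=CONNECTION_STRING,
    embedding_function=OpenAIEmbeddings(),
    collection_name="my_documents",
)
docs = store.similarity_search("What does the existing collection contain?")
```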


Fixes #5475

#### Before submitting


#### Who can review?
@dev2049
@hwchase17 

Tag maintainers/contributors who might be interested:


---------

Co-authored-by: rajib76 <rajib76@yahoo.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-03 16:47:52 -07:00
Michael Landis
475007d63a fix: correct momento chat history notebook typo and title (#5646)
This PR corrects a minor typo in the Momento chat message history
notebook and also expands the title from "Momento" to "Momento Chat
History", inline with other chat history storage providers.


#### Before submitting


#### Who can review?

cc @dev2049 who reviewed the original integration
2023-06-03 16:39:27 -07:00
Paul-Emile Brotons
92f218207b removing client+namespace in favor of collection (#5610)
Removing client+namespace in favor of collection, for easier
instantiation and to be consistent with the TypeScript library.

@dev2049
2023-06-03 16:27:31 -07:00
Harrison Chase
ad09367a92 Harrison/pubmed integration (#5664)
Co-authored-by: younis basher <71520361+younis-ba@users.noreply.github.com>
Co-authored-by: Younis Bashir <younis@omicmd.com>
2023-06-03 16:25:28 -07:00
Harrison Chase
9921f8cc3a Harrison/update azure nb (#5665)
Co-authored-by: NEWTON MALLICK <38786893+N-E-W-T-O-N@users.noreply.github.com>
2023-06-03 16:25:08 -07:00
C.J. Jameson
4e71a1702b nit: pgvector python example notebook, fix variable reference (#5595)

Fixes the pgvector Python example notebook: one of the variables was
not referencing anything.

## Before submitting

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

VectorStores / Retrievers / Memory
  - @dev2049
2023-06-03 15:29:34 -07:00
Leonid Ganeline
b201cfaa0f docs ecosystem/integrations update 4 (#5590)
# docs `ecosystem/integrations` update 4

Added missing integrations. Fixed inconsistencies.

## Who can review?

@hwchase17 
@dev2049
2023-06-03 15:29:03 -07:00
Davis Chase
ae3611730a handle single arg to and/or (#5637)
@ryderwishart @eyurtsev thoughts on handling this in the parser itself?
related to #5570
2023-06-03 15:18:46 -07:00
khallbobo
934319fc28 Add parameters to send_message() call for vertexai chat models (PaLM2) (#5566)
# Ensure parameters are used by vertexai chat models (PaLM2)

The current version of the google aiplatform SDK contains a bug where
parameters for a chat model are not used as intended.

See https://github.com/googleapis/python-aiplatform/issues/2263

Params can be passed both to start_chat() and send_message(); however,
the parameters passed to start_chat() will not be used if send_message()
is called without overrides. This is because the defaults in
send_message() are global values rather than None: there is code in
send_message() that would fall back to the params from start_chat() if
the param passed to send_message() evaluates to False, but that never
happens, since the defaults are non-falsy global values.
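
A sketch of the idea at the SDK level, assuming the `vertexai.preview.language_models` interface of that time: pass the generation parameters to `send_message()` as well, so the non-None defaults there don't silently win.

```python
from vertexai.preview.language_models import ChatModel

chat_model = ChatModel.from_pretrained("chat-bison@001")
chat = chat_model.start_chat(temperature=0.2, max_output_tokens=256)

# Without repeating the parameters here, send_message() falls back to its
# own defaults and the values given to start_chat() are ignored.
response = chat.send_message("Hello!", temperature=0.2, max_output_tokens=256)
print(response.text)
```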

Fixes #5531

@hwchase17
@agola11
2023-06-03 15:17:38 -07:00
UmerHA
44ad9628c9 QuickFix for FinalStreamingStdOutCallbackHandler: Ignore new lines & white spaces (#5497)
# Make FinalStreamingStdOutCallbackHandler more robust by ignoring new
lines & white spaces

`FinalStreamingStdOutCallbackHandler` doesn't work out of the box with
`ChatOpenAI`, as it tokenizes slightly differently from `OpenAI`. The
response of `OpenAI` contains the tokens `["\nFinal", " Answer", ":"]`
while `ChatOpenAI` contains `["Final", " Answer", ":"]`.

This PR makes `FinalStreamingStdOutCallbackHandler` more robust by
ignoring new lines & white spaces when determining whether the answer
prefix has been reached.
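
For reference, a typical setup where this matters is streaming only the final answer of an agent backed by `ChatOpenAI` (tool choice and question are illustrative):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler
from langchain.chat_models import ChatOpenAI

# Only the final answer is streamed to stdout; intermediate steps are suppressed.
llm = ChatOpenAI(
    streaming=True,
    callbacks=[FinalStreamingStdOutCallbackHandler()],
    temperature=0,
)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
agent.run("What is 2 raised to the 10th power?")
```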

Fixes #5433

## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
Tracing / Callbacks
- @agola11

Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord:
RicChilligerDude#7589
2023-06-03 15:05:58 -07:00
Nathan Azrak
1f4abb265a Adds the option to pass the original prompt into the AgentExecutor for PlanAndExecute agents (#5401)
# Adds the option to pass the original prompt into the AgentExecutor for
PlanAndExecute agents

This PR allows the user to optionally specify that they wish for the
original prompt/objective to be passed into the Executor agent used by
the PlanAndExecute agent. This solves a potential problem where the plan
is formed referring to some context contained in the original prompt,
but which is not included in the current prompt.

Currently, the prompt format given to the Executor is:
```
System: Respond to the human as helpfully and accurately as possible. You have access to the following tools:

<Tool and Action Description>

<Output Format Description>

Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.
Thought:
Human: <Previous steps>

<Current step>
```

This PR changes the final part after `Human:` to optionally insert the
objective:
```
Human: <objective>

<Previous steps>

<Current step>
```

I have given a specific example in #5400 where the context of a database
path is lost, since the plan refers to the "given path".

The PR has been linted and formatted. So that existing behaviour is not
changed, I have defaulted the argument to `False` and added it as the
last argument in the signature, so it does not cause issues for any
users passing args positionally as opposed to using keywords.

Happy to take any feedback or make required changes! 
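
A sketch of how the option would be switched on, assuming it is exposed as an `include_task_in_prompt` flag on `load_agent_executor` (defaulting to `False`):

```python
from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI
from langchain.experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

llm = ChatOpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)

planner = load_chat_planner(llm)
# Pass the original objective through to the executor, so steps that refer
# back to it (e.g. "the given path") still have the necessary context.
executor = load_agent_executor(llm, tools, include_task_in_prompt=True)

agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
```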

Fixes #5400

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@vowelparrot

---------

Co-authored-by: Nathan Azrak <nathan.azrak@gmail.com>
2023-06-03 14:59:09 -07:00
Felipe Ferreira
ae2cf1f598 Implements support for Personal Access Token Authentication in the ConfluenceLoader (#5385)
# Implements support for Personal Access Token Authentication in the
ConfluenceLoader

Fixes #5191

Implements a new optional parameter for the ConfluenceLoader: `token`.
This allows the use of personal access authentication when using the
on-prem server version of Confluence.
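
A minimal sketch of the new option (URL, token, and space key are placeholders):

```python
from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://confluence.example.com",  # on-prem Confluence server
    token="<personal-access-token>",       # new: PAT instead of username/api_key
)
docs = loader.load(space_key="SPACE", limit=50)
```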

## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
@eyurtsev @Jflick58 

Twitter Handle: felipe_yyc

---------

Co-authored-by: Felipe <feferreira@ea.com>
Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-03 14:57:49 -07:00
Gardner Bickford
b81f98b8a6 Update confluence.py to return spaces between elements (#5383)
# Update confluence.py to return spaces between elements like headers
and links.

Please see
https://stackoverflow.com/questions/48913975/how-to-return-nicely-formatted-text-in-beautifulsoup4-when-html-text-is-across-m

Given:

```html
<address>
        183 Main St<br>East Copper<br>Massachusetts<br>U S A<br>
        MA 01516-113
    </address>
```

The document loader currently returns:

```
'183 Main StEast CopperMassachusettsU S A        MA 01516-113'
```

After this change, the document loader will return:

```
183 Main St East Copper Massachusetts U S A MA 01516-113
```


@eyurtsev would you prefer this to be an option that can be passed in?
2023-06-03 14:57:25 -07:00
Zeeland
b72401b47b pref: reduce DB query error rate (#5339)
# Reduce DB query error rate

If you use the SQL agent of `SQLDatabaseToolkit` to query data, it is
prone to errors in the queried fields and often uses fields that do not
exist in the database tables. The existing prompt does not effectively
make the agent aware that the fields it queried are the problem. We
therefore need to improve the prompt so that the agent realizes it has
queried non-existent fields, looks up the table's actual columns with
`schema_sql_db` first, and only then runs the query with
`QuerySQLDatabaseTool` (`query_sql_db`).

Here is a demo from my project that shows the problem.

**Original Agent**

```python
# Imports needed to run this snippet
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase


def create_mysql_kit():
    db = SQLDatabase.from_uri("mysql+pymysql://xxxxxxx")
    llm = OpenAI(temperature=0)

    toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    agent_executor = create_sql_agent(
        llm=OpenAI(temperature=0),
        toolkit=toolkit,
        verbose=True
    )
    agent_executor.run("Who are the users of sysuser in this system? Tell me the username of all users")


if __name__ == '__main__':
    create_mysql_kit()

```

**original output**

```
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""
Observation: app_sysrole_menus, app_bimfacemodel, app_project_users, app_measuringpointdata, auth_user, auth_user_groups, django_apscheduler_djangojobexecution, app_project, app_elementpoint, django_apscheduler_djangojob, django_content_type, app_sysrole, django_admin_log, app_bimfaceaccount, app_measuringpoint_warning_thresholds, app_measuringpoint, app_company, auth_group_permissions, app_sysuser, app_sysuser_companies, app_sysmenu, app_datawarningthreshold, auth_group, auth_permission, app_datawarningrecord, auth_user_user_permissions, app_bimfaceaccount_bimface_models, django_migrations, app_measuringitem, django_session
Thought: I should query the app_sysuser table to get the users in the system.
Action: query_sql_db
Action Input: SELECT username FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT username FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT username FROM app_sysuser LIMIT 10;
Observation: 

SELECT username FROM app_sysuser LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT username FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT username FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT name FROM app_sysuser LIMIT 10;
Observation: 

SELECT name FROM app_sysuser LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT name FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'name' in 'field list'")
[SQL: SELECT name FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT id, username FROM app_sysuser LIMIT 10;
Observation: 

SELECT id, username FROM app_sysuser LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT id, username FROM app_sysuser LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'id' in 'field list'")
[SQL: SELECT id, username FROM app_sysuser LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT user_id, username FROM app_sysuser LIMIT 10;
Observation: 

SELECT user_id, username FROM app_sysuser WHERE user_id IS NOT NULL AND user_id BETWEEN 0 AND 10 UNION ALL SELECT user_id, username FROM app_sysuser WHERE username IS NOT NULL ORDER BY user_id ASC LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT user_id, username FROM app_sysuser WHERE user_id IS NOT NULL AND user_id BETWEEN 0 AND 10 UNION ALL SELECT user_id, username FROM app_sysuser WHERE username IS NOT NULL ORDER BY user_id ASC LIMIT 10;
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT user_id, username FROM app_sysuser WHERE user_id IS NOT NULL AND user_id BETWEEN 0 AND 10 UNION ALL SELECT user_id, username FROM app_sysuser WHERE username IS NOT NULL ORDER BY user_id ASC LIMIT 10;]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should double check my query before executing it.
Action: query_checker_sql_db
Action Input: SELECT user_id, user_name FROM app_sysuser LIMIT 10;
Observation: 

SELECT user_id, user_name FROM app_sysuser WHERE user_id NOT IN (NULL) AND user_name BETWEEN 'A' AND 'Z' UNION ALL SELECT user_id, user_name FROM app_sysuser WHERE user_id IS NOT NULL AND user_name LIKE '%a%' LIMIT 10;
Thought: The query looks correct, so I should execute it.
Action: query_sql_db
Action Input: SELECT user_id, user_name FROM app_sysuser WHERE user_id NOT IN (NULL) AND user_name BETWEEN 'A' AND 'Z' UNION ALL SELECT user_id, user_name FROM app_sysuser WHERE user_id IS NOT NULL AND user_name LIKE '%a%' LIMIT 10;
Observation: []
Thought: I now know the final answer.
Final Answer: There are no users in the sysuser table.

> Finished chain.

Process finished with exit code 0
```

As you can see, the agent keeps using incorrect SQL to query the data.
In fact, the username column of my `app_sysuser` table is `user_name`,
not the `username` the agent assumed.

**Optimized agent output**

```
> Entering new AgentExecutor chain...
Action: list_tables_sql_db
Action Input: ""
Observation: app_project_users, django_content_type, app_measuringpoint_warning_thresholds, app_bimfaceaccount_bimface_models, django_session, app_sysuser, django_migrations, app_bimfacemodel, app_elementpoint, app_measuringpoint, django_apscheduler_djangojobexecution, auth_permission, app_sysuser_companies, app_measuringpointdata, app_measuringitem, app_sysrole_menus, django_admin_log, auth_group, django_apscheduler_djangojob, app_sysmenu, app_project, app_bimfaceaccount, app_datawarningthreshold, app_datawarningrecord, auth_user_groups, auth_user_user_permissions, auth_group_permissions, auth_user, app_company, app_sysrole
Thought: I should query the app_sysuser table to get the usernames of all the users.
Action: query_sql_db
Action Input: SELECT username FROM app_sysuser
Observation: Error: (pymysql.err.OperationalError) (1054, "Unknown column 'username' in 'field list'")
[SQL: SELECT username FROM app_sysuser]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
Thought: I should use the schema_sql_db tool to get the correct table fields.
Action: schema_sql_db
Action Input: app_sysuser
Observation: 
CREATE TABLE app_sysuser (
	user_id INTEGER NOT NULL AUTO_INCREMENT, 
	user_account VARCHAR(50) COLLATE utf8mb4_general_ci NOT NULL, 
	user_password VARCHAR(255) COLLATE utf8mb4_general_ci NOT NULL, 
	user_name VARCHAR(50) COLLATE utf8mb4_general_ci NOT NULL, 
	user_avatar VARCHAR(255) COLLATE utf8mb4_general_ci, 
	user_email VARCHAR(254) COLLATE utf8mb4_general_ci, 
	user_phone VARCHAR(20) COLLATE utf8mb4_general_ci, 
	user_create_time DATETIME(6) NOT NULL, 
	user_update_time DATETIME(6) NOT NULL, 
	user_status SMALLINT, 
	role_id_id INTEGER, 
	PRIMARY KEY (user_id), 
	CONSTRAINT app_sysuser_role_id_id_51b50f39_fk_app_sysrole_role_id FOREIGN KEY(role_id_id) REFERENCES app_sysrole (role_id)
)COLLATE utf8mb4_general_ci DEFAULT CHARSET=utf8mb4 ENGINE=InnoDB

/*
3 rows from app_sysuser table:
user_id	user_account	user_password	user_name	user_avatar	user_email	user_phone	user_create_time	user_update_time	user_status	role_id_id
xxxxxxxxxxxxxx
*/
Thought: I should query the app_sysuser table to get the usernames of all the users.
Action: query_sql_db
Action Input: SELECT user_account FROM app_sysuser LIMIT 10
Observation: [('baiyun',), ('eatrice',), ('lisi',), ('pingxiang',), ('wangwu',), ('zeeland',), ('zsj',), ('zzw',)]
Thought: I now know the final answer
Final Answer: The usernames of the users in the sysuser table are baiyun, eatrice, lisi, pingxiang, wangwu, zeeland, zsj, and zzw.

> Finished chain.

Process finished with exit code 0

```

I have tested about 10 related prompts and they all work properly, with
a much lower error rate than before.


## Who can review?

@vowelparrot

---------

Co-authored-by: Dev 2049 <dev.dev2049@gmail.com>
2023-06-03 14:55:56 -07:00
mbchang
ce6dbe41a9 minor refactor GenerativeAgentMemory (#5315)
# minor refactor of GenerativeAgentMemory


- Refactored `format_memories_detail` to be more reusable
- Modified the prompts for getting topics for reflection and for
generating insights
- Updated `characters.ipynb` to reflect the changes

## Before submitting


## Who can review?

Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:

@vowelparrot
@hwchase17
@dev2049
2023-06-03 14:53:14 -07:00
Leonid Ganeline
95c6ed0568 docs: modules pages simplified (#5116)
# docs: modules pages simplified

Fixed issue #5627.

Merged several repetitive sections in the `modules` pages. Some texts
that were hard to understand were also simplified.


## Who can review?

@hwchase17
@dev2049
2023-06-03 14:44:32 -07:00
Chandan Routray
bc875a9df1 Fixed multi input prompt for MapReduceChain (#4979)
# Fixed multi input prompt for MapReduceChain

Added `kwargs` support for the inner chains of `MapReduceChain` via the
`from_params` method.
Currently the `from_params` method of initialising `MapReduceChain`
doesn't work if the prompt has multiple inputs. This happens because it
uses `StuffDocumentsChain` and `MapReduceDocumentsChain` underneath,
both of which require specifying `document_variable_name` if the
`prompt` of their `llm_chain` has more than one input.

With this PR, I have added support for passing their respective `kwargs`
via the `from_params` method.

## Fixes https://github.com/hwchase17/langchain/issues/4752

## Who can review? 
@dev2049 @hwchase17 @agola11

---------

Co-authored-by: imeckr <chandanroutray2012@gmail.com>
2023-06-03 14:41:03 -07:00
Matt Robinson
a97e4252e3 feat: add UnstructuredExcelLoader for .xlsx and .xls files (#5617)
# Unstructured Excel Loader

Adds an `UnstructuredExcelLoader` class for `.xlsx` and `.xls` files.
Works with `unstructured>=0.6.7`. A plain text representation of the
Excel file will be available under the `page_content` attribute in the
doc. If you use the loader in `"elements"` mode, an HTML representation
of the Excel file will be available under the `text_as_html` metadata
key. Each sheet in the Excel document is its own document.

### Testing

```python
from langchain.document_loaders import UnstructuredExcelLoader

loader = UnstructuredExcelLoader(
    "example_data/stanley-cups.xlsx",
    mode="elements"
)
docs = loader.load()
```

## Who can review?

@hwchase17
@eyurtsev
2023-06-03 12:44:12 -07:00
Leonid Ganeline
9a7488a5ce fix import issue (#5636)
# fix for the import issue

Added the document loader classes from [`figma`, `iugu`, `onedrive_file`]
to the `document_loaders/__init__.py` imports.
Also sorted `__all__`.

Fixed issue #5623.
2023-06-02 14:58:41 -07:00
1389 changed files with 53316 additions and 43361 deletions

37
.devcontainer/README.md Normal file
View File

@@ -0,0 +1,37 @@
# Dev container
This project includes a [dev container](https://containers.dev/), which lets you use a container as a full-featured dev environment.
You can use the dev container configuration in this folder to build and run the app without needing to install any of its tools locally! You can use it in [GitHub Codespaces](https://github.com/features/codespaces) or the [VS Code Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers).
## GitHub Codespaces
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/hwchase17/langchain)
You may use the button above, or follow these steps to open this repo in a Codespace:
1. Click the **Code** drop-down menu at the top of https://github.com/hwchase17/langchain.
1. Click on the **Codespaces** tab.
1. Click **Create codespace on master** .
For more info, check out the [GitHub documentation](https://docs.github.com/en/free-pro-team@latest/github/developing-online-with-codespaces/creating-a-codespace#creating-a-codespace).
## VS Code Dev Containers
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
If you already have VS Code and Docker installed, you can use the button above to get started. This will cause VS Code to automatically install the Dev Containers extension if needed, clone the source code into a container volume, and spin up a dev container for use.
You can also follow these steps to open this repo in a container using the VS Code Dev Containers extension:
1. If this is your first time using a development container, please ensure your system meets the pre-reqs (i.e. have Docker installed) in the [getting started steps](https://aka.ms/vscode-remote/containers/getting-started).
2. Open a locally cloned copy of the code:
- Clone this repository to your local filesystem.
- Press <kbd>F1</kbd> and select the **Dev Containers: Open Folder in Container...** command.
- Select the cloned copy of this folder, wait for the container to start, and try things out!
You can learn more in the [Dev Containers documentation](https://code.visualstudio.com/docs/devcontainers/containers).
## Tips and tricks
* If you are working with the same repository folder in a container and Windows, you'll want consistent line endings (otherwise you may see hundreds of changes in the SCM view). The `.gitattributes` file in the root of this repo will disable line ending conversion and should prevent this. See [tips and tricks](https://code.visualstudio.com/docs/devcontainers/tips-and-tricks#_resolving-git-line-ending-issues-in-containers-resulting-in-many-modified-files) for more info.
* If you'd like to review the contents of the image used in this dev container, you can check it out in the [devcontainers/images](https://github.com/devcontainers/images/tree/main/src/python) repo.

View File

@@ -1,24 +1,26 @@
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-dockerfile
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-existing-docker-compose
{
"dockerComposeFile": "./docker-compose.yaml",
"service": "langchain",
"workspaceFolder": "/workspaces/langchain",
// Name for the dev container
"name": "langchain",
"customizations": {
"vscode": {
"extensions": [
"ms-python.python"
],
"settings": {
"python.defaultInterpreterPath": "/home/vscode/langchain-py-env/bin/python3.11"
}
}
},
// Features to add to the dev container. More info: https://containers.dev/features.
"features": {},
// Point to a Docker Compose file
"dockerComposeFile": "./docker-compose.yaml",
// Required when using Docker Compose. The name of the service to connect to once running
"service": "langchain",
// The optional 'workspaceFolder' property is the path VS Code should open by default when
// connected. This is typically a file mount in .devcontainer/docker-compose.yml
"workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
// Prevent the container from shutting down
"overrideCommand": true
// Features to add to the dev container. More info: https://containers.dev/features
// "features": {
// "ghcr.io/devcontainers-contrib/features/poetry:2": {}
// }
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
@@ -26,8 +28,9 @@
// Uncomment the next line to run commands after the container is created.
// "postCreateCommand": "cat /etc/os-release",
// Uncomment to connect as an existing user other than the container default. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "devcontainer"
"remoteUser": "vscode",
"overrideCommand": true
// Configure tool-specific properties.
// "customizations": {},
// Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
// "remoteUser": "root"
}

View File

@@ -2,10 +2,11 @@ version: '3'
services:
langchain:
build:
dockerfile: .devcontainer/Dockerfile
context: ../
dockerfile: dev.Dockerfile
context: ..
volumes:
- ../:/workspaces/langchain
# Update this to wherever you want VS Code to mount the folder of your project
- ..:/workspaces:cached
networks:
- langchain-network
# environment:

3
.gitattributes vendored Normal file
View File

@@ -0,0 +1,3 @@
* text=auto eol=lf
*.{cmd,[cC][mM][dD]} text eol=crlf
*.{bat,[bB][aA][tT]} text eol=crlf

View File

@@ -59,6 +59,8 @@ we do not want these to get in the way of getting good code into the codebase.
## 🚀 Quick Start
> **Note:** You can run this repository locally (which is described below) or in a [development container](https://containers.dev/) (which is described in the [.devcontainer folder](https://github.com/hwchase17/langchain/tree/master/.devcontainer)).
This project uses [Poetry](https://python-poetry.org/) as a dependency manager. Check out Poetry's [documentation on how to install it](https://python-poetry.org/docs/#installation) on your system before proceeding.
❗Note: If you use `Conda` or `Pyenv` as your environment / package manager, avoid dependency conflicts by doing the following first:

View File

@@ -46,7 +46,7 @@ body:
- @agola11
Tools / Toolkits
- @vowelparrot
- ...
placeholder: "@Username ..."

View File

@@ -48,7 +48,7 @@ Tag maintainers/contributors who might be interested:
- @agola11
Agents / Tools / Toolkits
- @vowelparrot
- @hwchase17
VectorStores / Retrievers / Memory
- @dev2049

View File

@@ -1,38 +0,0 @@
name: linkcheck
on:
push:
branches: [master]
pull_request:
paths:
- 'docs/**'
env:
POETRY_VERSION: "1.4.2"
jobs:
build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version:
- "3.11"
steps:
- uses: actions/checkout@v3
- name: Install poetry
run: |
pipx install poetry==$POETRY_VERSION
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v4
with:
python-version: ${{ matrix.python-version }}
cache: poetry
- name: Install dependencies
run: |
poetry install --with docs
- name: Build the docs
run: |
make docs_build
- name: Analyzing the docs with linkcheck
run: |
make docs_linkcheck

14
.gitignore vendored
View File

@@ -73,6 +73,7 @@ instance/
# Sphinx documentation
docs/_build/
docs/docs/_build/
# PyBuilder
target/
@@ -152,4 +153,15 @@ data_map*
\[('_type', 'fake'), ('stop', None)]
# Replit files
*replit*
*replit*
node_modules
docs/.yarn/
docs/node_modules/
docs/.docusaurus/
docs/.cache-loader/
docs/_dist
docs/api_reference/_build
docs/docs_skeleton/build
docs/docs_skeleton/node_modules
docs/docs_skeleton/yarn.lock

4
.gitmodules vendored Normal file
View File

@@ -0,0 +1,4 @@
[submodule "docs/_docs_skeleton"]
path = docs/_docs_skeleton
url = https://github.com/langchain-ai/langchain-shared-docs
branch = main

View File

@@ -12,7 +12,7 @@ build:
# Build documentation in the docs/ directory with Sphinx
sphinx:
configuration: docs/conf.py
configuration: docs/api_reference/conf.py
# If using Sphinx, optionally build your docs in additional formats such as PDF
# formats:
@@ -23,4 +23,4 @@ python:
install:
- requirements: docs/requirements.txt
- method: pip
path: .
path: .

View File

@@ -10,6 +10,9 @@ coverage:
clean: docs_clean
docs_compile:
poetry run nbdoc_build --srcdir $(srcdir)
docs_build:
cd docs && poetry run make html

View File

@@ -2,9 +2,9 @@
⚡ Building applications with LLMs through composability ⚡
[![Release Notes](https://img.shields.io/github/release/hwchase17/langchain)](https://github.com/hwchase17/langchain/releases)
[![lint](https://github.com/hwchase17/langchain/actions/workflows/lint.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/lint.yml)
[![test](https://github.com/hwchase17/langchain/actions/workflows/test.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/test.yml)
[![linkcheck](https://github.com/hwchase17/langchain/actions/workflows/linkcheck.yml/badge.svg)](https://github.com/hwchase17/langchain/actions/workflows/linkcheck.yml)
[![Downloads](https://static.pepy.tech/badge/langchain/month)](https://pepy.tech/project/langchain)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Twitter](https://img.shields.io/twitter/url/https/twitter.com/langchainai.svg?style=social&label=Follow%20%40LangChainAI)](https://twitter.com/langchainai)
@@ -12,6 +12,8 @@
[![Open in Dev Containers](https://img.shields.io/static/v1?label=Dev%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/hwchase17/langchain)
[![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://codespaces.new/hwchase17/langchain)
[![GitHub star chart](https://img.shields.io/github/stars/hwchase17/langchain?style=social)](https://star-history.com/#hwchase17/langchain)
[![Dependency Status](https://img.shields.io/librariesio/github/hwchase17/langchain)](https://libraries.io/github/hwchase17/langchain)
[![Open Issues](https://img.shields.io/github/issues-raw/hwchase17/langchain)](https://github.com/hwchase17/langchain/issues)
Looking for the JS/TS version? Check out [LangChain.js](https://github.com/hwchase17/langchainjs).
@@ -33,22 +35,22 @@ This library aims to assist in the development of those types of applications. C
**❓ Question Answering over specific documents**
- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/question_answering.html)
- [Documentation](https://python.langchain.com/docs/use_cases/question_answering/)
- End-to-end Example: [Question Answering over Notion Database](https://github.com/hwchase17/notion-qa)
**💬 Chatbots**
- [Documentation](https://langchain.readthedocs.io/en/latest/use_cases/chatbots.html)
- [Documentation](https://python.langchain.com/docs/use_cases/chatbots/)
- End-to-end Example: [Chat-LangChain](https://github.com/hwchase17/chat-langchain)
**🤖 Agents**
- [Documentation](https://langchain.readthedocs.io/en/latest/modules/agents.html)
- [Documentation](https://python.langchain.com/docs/modules/agents/)
- End-to-end Example: [GPT+WolframAlpha](https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain)
## 📖 Documentation
Please see [here](https://langchain.readthedocs.io/en/latest/?) for full documentation on:
Please see [here](https://python.langchain.com) for full documentation on:
- Getting started (installation, setting up the environment, simple examples)
- How-To examples (demos, integrations, helper functions)
@@ -84,7 +86,7 @@ Memory refers to persisting state between calls of a chain/agent. LangChain prov
[BETA] Generative models are notoriously hard to evaluate with traditional metrics. One new way of evaluating them is using language models themselves to do the evaluation. LangChain provides some prompts/chains for assisting in this.
For more information on these concepts, please see our [full documentation](https://langchain.readthedocs.io/en/latest/).
For more information on these concepts, please see our [full documentation](https://python.langchain.com).
## 💁 Contributing

View File

@@ -1,15 +1,15 @@
# This is a Dockerfile for Developer Container
# This is a Dockerfile for the Development Container
# Use the Python base image
ARG VARIANT="3.11-bullseye"
FROM mcr.microsoft.com/vscode/devcontainers/python:0-${VARIANT} AS langchain-dev-base
FROM mcr.microsoft.com/devcontainers/python:0-${VARIANT} AS langchain-dev-base
USER vscode
# Define the version of Poetry to install (default is 1.4.2)
# Define the directory of python virtual environment
ARG PYTHON_VIRTUALENV_HOME=/home/vscode/langchain-py-env \
POETRY_VERSION=1.4.2
POETRY_VERSION=1.3.2
ENV POETRY_VIRTUALENVS_IN_PROJECT=false \
POETRY_NO_INTERACTION=true
@@ -35,8 +35,7 @@ FROM langchain-dev-base AS langchain-dev-dependencies
ARG PYTHON_VIRTUALENV_HOME
# Copy only the dependency files for installation
COPY pyproject.toml poetry.lock poetry.toml ./
COPY pyproject.toml poetry.toml ./
# Install the Poetry dependencies (this layer will be cached as long as the dependencies don't change)
RUN poetry install --no-interaction --no-ansi --with dev,test,docs
RUN poetry install --no-interaction --no-ansi --with dev,test,docs

12
docs/.local_build.sh Executable file
View File

@@ -0,0 +1,12 @@
mkdir _dist
cp -r {docs_skeleton,snippets} _dist
mkdir -p _dist/docs_skeleton/static/api_reference
cd api_reference
poetry run make html
cp -r _build/* ../_dist/docs_skeleton/static/api_reference
cd ..
cp -r extras/* _dist/docs_skeleton/docs
cd _dist/docs_skeleton
poetry run nbdoc_build
yarn install
yarn start

View File

@@ -0,0 +1,57 @@
document.addEventListener('DOMContentLoaded', () => {
// Load the external dependencies
function loadScript(src, onLoadCallback) {
const script = document.createElement('script');
script.src = src;
script.onload = onLoadCallback;
document.head.appendChild(script);
}
function createRootElement() {
const rootElement = document.createElement('div');
rootElement.id = 'my-component-root';
document.body.appendChild(rootElement);
return rootElement;
}
function initializeMendable() {
const rootElement = createRootElement();
const { MendableFloatingButton } = Mendable;
const iconSpan1 = React.createElement('span', {
}, '🦜');
const iconSpan2 = React.createElement('span', {
}, '🔗');
const icon = React.createElement('p', {
style: { color: '#ffffff', fontSize: '22px',width: '48px', height: '48px', margin: '0px', padding: '0px', display: 'flex', alignItems: 'center', justifyContent: 'center', textAlign: 'center' },
}, [iconSpan1, iconSpan2]);
const mendableFloatingButton = React.createElement(
MendableFloatingButton,
{
style: { darkMode: false, accentColor: '#010810' },
floatingButtonStyle: { color: '#ffffff', backgroundColor: '#010810' },
anon_key: '82842b36-3ea6-49b2-9fb8-52cfc4bde6bf', // Mendable Search Public ANON key, ok to be public
cmdShortcutKey:'j',
messageSettings: {
openSourcesInNewTab: false,
prettySources: true // Prettify the sources displayed now
},
icon: icon,
}
);
ReactDOM.render(mendableFloatingButton, rootElement);
}
loadScript('https://unpkg.com/react@17/umd/react.production.min.js', () => {
loadScript('https://unpkg.com/react-dom@17/umd/react-dom.production.min.js', () => {
loadScript('https://unpkg.com/@mendable/search@0.0.102/dist/umd/mendable.min.js', initializeMendable);
});
});
});

View File

@@ -17,7 +17,7 @@
import toml
with open("../pyproject.toml") as f:
with open("../../pyproject.toml") as f:
data = toml.load(f)
# -- Project information -----------------------------------------------------
@@ -49,19 +49,31 @@ extensions = [
"sphinx_copybutton",
"sphinx_panels",
"IPython.sphinxext.ipython_console_highlighting",
"sphinx_tabs.tabs",
]
source_suffix = [".ipynb", ".html", ".md", ".rst"]
source_suffix = [".rst"]
autodoc_pydantic_model_show_json = False
autodoc_pydantic_field_list_validators = False
autodoc_pydantic_config_members = False
autodoc_pydantic_model_show_config_summary = False
autodoc_pydantic_model_show_validator_members = False
autodoc_pydantic_model_show_validator_summary = False
autodoc_pydantic_model_show_field_summary = False
autodoc_pydantic_model_members = False
autodoc_pydantic_model_undoc_members = False
# autodoc_typehints = "signature"
# autodoc_typehints = "description"
autodoc_pydantic_model_hide_paramlist = False
autodoc_pydantic_model_signature_prefix = "class"
autodoc_pydantic_field_signature_prefix = "attribute"
autodoc_pydantic_model_summary_list_order = "bysource"
autodoc_member_order = "bysource"
autodoc_default_options = {
"members": True,
"show-inheritance": True,
"undoc_members": True,
"inherited_members": "BaseModel",
}
autodoc_typehints = "description"
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
@@ -77,12 +89,13 @@ exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = "sphinx_book_theme"
html_theme = "sphinx_rtd_theme"
html_theme_options = {
"path_to_docs": "docs",
"repository_url": "https://github.com/hwchase17/langchain",
"use_repository_button": True,
# "style_nav_header_background": "white"
}
html_context = {
@@ -90,7 +103,7 @@ html_context = {
"github_user": "hwchase17", # Username
"github_repo": "langchain", # Repo name
"github_version": "master", # Version
"conf_py_path": "/docs/", # Path in the checkout to the docs root
"conf_py_path": "/docs/api_reference", # Path in the checkout to the docs root
}
# Add any paths that contain custom static files (such as style sheets) here,

View File

@@ -1,16 +1,13 @@
Indexes
Data connection
==============
Indexes refer to ways to structure documents so that LLMs can best interact with them.
LangChain has a number of modules that help you load, structure, store, and retrieve documents.
.. toctree::
:maxdepth: 1
:glob:
modules/docstore
modules/text_splitter
modules/document_loaders
modules/document_transformers
modules/embeddings
modules/vectorstores
modules/retrievers
modules/document_compressors
modules/document_transformers

View File

@@ -0,0 +1,29 @@
API Reference
==========================
| Full documentation on all methods, classes, and APIs in the LangChain Python package.
.. toctree::
:maxdepth: 1
:caption: Abstractions
./modules/base_classes.rst
.. toctree::
:maxdepth: 1
:caption: Core
./model_io.rst
./data_connection.rst
./modules/chains.rst
./agents.rst
./modules/memory.rst
./modules/callbacks.rst
.. toctree::
:maxdepth: 1
:caption: Additional
./modules/utilities.rst
./modules/experimental.rst

View File

@@ -0,0 +1,12 @@
Model I/O
==============
LangChain provides interfaces and integrations for working with language models.
.. toctree::
:maxdepth: 1
:glob:
./prompts.rst
./models.rst
./modules/output_parsers.rst

View File

@@ -9,4 +9,3 @@ LangChain provides interfaces and integrations for a number of different types o
modules/llms
modules/chat_models
modules/embeddings

View File

@@ -0,0 +1,5 @@
Base classes
========================
.. automodule:: langchain.schema
:inherited-members:

View File

@@ -0,0 +1,7 @@
Callbacks
=======================
.. automodule:: langchain.callbacks
:members:
:undoc-members:

View File

@@ -4,4 +4,5 @@ Chains
.. automodule:: langchain.chains
:members:
:undoc-members:
:inherited-members: BaseModel

View File

@@ -5,3 +5,9 @@ Document Transformers
:members:
:undoc-members:
Text Splitters
------------------------------
.. automodule:: langchain.text_splitter
:members:
:undoc-members:

View File

@@ -1,10 +1,10 @@
====================
Experimental Modules
Experimental
====================
This module contains experimental modules and reproductions of existing work using LangChain primitives.
Autonomous Agents
Autonomous agents
------------------
Here, we document the BabyAGI and AutoGPT classes from the langchain.experimental module.
@@ -16,7 +16,7 @@ Here, we document the BabyAGI and AutoGPT classes from the langchain.experimenta
:members:
Generative Agents
Generative agents
------------------
Here, we document the GenerativeAgent and GenerativeAgentMemory classes from the langchain.experimental module.

View File

@@ -1,5 +1,6 @@
PromptTemplates
Prompt Templates
========================
.. automodule:: langchain.prompts
:members:
:undoc-members:

View File

@@ -0,0 +1,14 @@
Retrievers
===============================
.. automodule:: langchain.retrievers
:members:
:undoc-members:
Document compressors
-------------------------------
.. automodule:: langchain.retrievers.document_compressors
:members:
:undoc-members:

View File

@@ -9,4 +9,3 @@ The reference guides here all relate to objects for working with Prompts.
modules/prompts
modules/example_selector
modules/output_parsers

7
docs/docs_skeleton/.gitignore vendored Normal file
View File

@@ -0,0 +1,7 @@
.yarn/
node_modules/
.docusaurus
.cache-loader
docs/api

View File

@@ -0,0 +1,49 @@
# Website
This website is built using [Docusaurus 2](https://docusaurus.io/), a modern static website generator.
### Installation
```
$ yarn
```
### Local Development
```
$ yarn start
```
This command starts a local development server and opens up a browser window. Most changes are reflected live without having to restart the server.
### Build
```
$ yarn build
```
This command generates static content into the `build` directory and can be served using any static contents hosting service.
### Deployment
Using SSH:
```
$ USE_SSH=true yarn deploy
```
Not using SSH:
```
$ GIT_USER=<Your GitHub username> yarn deploy
```
If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch.
### Continuous Integration
Some common defaults for linting/formatting have been set for you. If you integrate your project with an open source Continuous Integration system (e.g. Travis CI, CircleCI), you may check for issues using the following command.
```
$ yarn ci
```

View File

@@ -0,0 +1,12 @@
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*
* @format
*/
module.exports = {
presets: [require.resolve("@docusaurus/core/lib/babel/preset")],
};

View File

@@ -0,0 +1,76 @@
/* eslint-disable prefer-template */
/* eslint-disable no-param-reassign */
// eslint-disable-next-line import/no-extraneous-dependencies
const babel = require("@babel/core");
const path = require("path");
const fs = require("fs");
/**
*
* @param {string|Buffer} content Content of the resource file
* @param {object} [map] SourceMap data consumable by https://github.com/mozilla/source-map
* @param {any} [meta] Meta data, could be anything
*/
async function webpackLoader(content, map, meta) {
const cb = this.async();
if (!this.resourcePath.endsWith(".ts")) {
cb(null, JSON.stringify({ content, imports: [] }), map, meta);
return;
}
try {
const result = await babel.parseAsync(content, {
sourceType: "module",
filename: this.resourcePath,
});
const imports = [];
result.program.body.forEach((node) => {
if (node.type === "ImportDeclaration") {
const source = node.source.value;
if (!source.startsWith("langchain")) {
return;
}
node.specifiers.forEach((specifier) => {
if (specifier.type === "ImportSpecifier") {
const local = specifier.local.name;
const imported = specifier.imported.name;
imports.push({ local, imported, source });
} else {
throw new Error("Unsupported import type");
}
});
}
});
imports.forEach((imp) => {
const { imported, source } = imp;
const moduleName = source.split("/").slice(1).join("_");
const docsPath = path.resolve(__dirname, "docs", "api", moduleName);
const available = fs.readdirSync(docsPath, { withFileTypes: true });
const found = available.find(
(dirent) =>
dirent.isDirectory() &&
fs.existsSync(path.resolve(docsPath, dirent.name, imported + ".md"))
);
if (found) {
imp.docs =
"/" + path.join("docs", "api", moduleName, found.name, imported);
} else {
throw new Error(
`Could not find docs for ${source}.${imported} in docs/api/`
);
}
});
cb(null, JSON.stringify({ content, imports }), map, meta);
} catch (err) {
cb(err);
}
}
module.exports = webpackLoader;

[Image file diffs omitted: five existing images were moved unchanged (559 KiB, 157 KiB, 235 KiB, 148 KiB, 3.5 MiB) and three new binary images were added (18 KiB, 85 KiB, 16 KiB); there is no text diff to show.]

View File

@@ -0,0 +1,21 @@
pre {
white-space: break-spaces;
}
@media (min-width: 1200px) {
.container,
.container-lg,
.container-md,
.container-sm,
.container-xl {
max-width: 2560px !important;
}
}
#my-component-root *, #headlessui-portal-root * {
z-index: 10000;
}
.content-container p {
margin: revert;
}

[Binary image additions omitted: six new image files ranging from 542 B to 136 KiB; there is no text diff to show.]

View File

@@ -0,0 +1,8 @@
---
sidebar_position: 0
---
# Integrations
import DocCardList from "@theme/DocCardList";
<DocCardList />

View File

@@ -0,0 +1,5 @@
# Installation
import Installation from "@snippets/get_started/installation.mdx"
<Installation/>

View File

@@ -0,0 +1,65 @@
---
sidebar_position: 0
---
# Introduction
**LangChain** is a framework for developing applications powered by language models. It enables applications that are:
- **Data-aware**: connect a language model to other sources of data
- **Agentic**: allow a language model to interact with its environment
The main value props of LangChain are:
1. **Components**: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
2. **Off-the-shelf chains**: a structured assembly of components for accomplishing specific higher-level tasks
Off-the-shelf chains make it easy to get started. For more complex applications and nuanced use-cases, components make it easy to customize existing chains or build new ones.
## Get started
[Here's](/docs/get_started/installation.html) how to install LangChain, set up your environment, and start building.
We recommend following our [Quickstart](/docs/get_started/quickstart.html) guide to familiarize yourself with the framework by building your first LangChain application.
_**Note**: These docs are for the LangChain [Python package](https://github.com/hwchase17/langchain). For documentation on [LangChain.js](https://github.com/hwchase17/langchainjs), the JS/TS version, [head here](https://js.langchain.com/docs)._
## Modules
LangChain provides standard, extendable interfaces and external integrations for the following modules, listed from least to most complex:
#### [Model I/O](/docs/modules/model_io/)
Interface with language models
#### [Data connection](/docs/modules/data_connection/)
Interface with application-specific data
#### [Chains](/docs/modules/chains/)
Construct sequences of calls
#### [Agents](/docs/modules/agents/)
Let chains choose which tools to use given high-level directives
#### [Memory](/docs/modules/memory/)
Persist application state between runs of a chain
#### [Callbacks](/docs/modules/callbacks/)
Log and stream intermediate steps of any chain
## Examples, ecosystem, and resources
### [Use cases](/docs/use_cases/)
Walkthroughs and best-practices for common end-to-end use cases, like:
- [Chatbots](/docs/use_cases/chatbots/)
- [Answering questions using sources](/docs/use_cases/question_answering/)
- [Analyzing structured data](/docs/use_cases/tabular.html)
- and much more...
### [Guides](/docs/guides/)
Learn best practices for developing with LangChain.
### [Ecosystem](/docs/ecosystem/)
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of [integrations](/docs/ecosystem/integrations/) and [dependent repos](/docs/ecosystem/dependents.html).
### [Additional resources](/docs/additional_resources/)
Our community is full of prolific developers, creative builders, and fantastic teachers. Check out [YouTube tutorials](/docs/ecosystem/youtube.html) for great tutorials from folks in the community, and [Gallery](https://github.com/kyrolabs/awesome-langchain) for a list of awesome LangChain projects, compiled by the folks at [KyroLabs](https://kyrolabs.com).
<h3><span style={{color:"#2e8555"}}> Support </span></h3>
Join us on [GitHub](https://github.com/hwchase17/langchain) or [Discord](https://discord.gg/6adMQxSpJS) to ask questions, share feedback, meet other developers building with LangChain, and dream about the future of LLMs.
## API reference
Head to the [reference](https://api.python.langchain.com) section for full documentation of all classes and methods in the LangChain Python package.

View File

@@ -0,0 +1,158 @@
# Quickstart
## Installation
To install LangChain run:
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Install from "@snippets/get_started/quickstart/installation.mdx"
<Install/>
For more details, see our [Installation guide](/docs/get_started/installation.html).
## Environment setup
Using LangChain will usually require integrations with one or more model providers, data stores, APIs, etc. For this example, we'll use OpenAI's model APIs.
import OpenAISetup from "@snippets/get_started/quickstart/openai_setup.mdx"
<OpenAISetup/>
## Building an application
Now we can start building our language model application. LangChain provides many modules that can be used to build language model applications. Modules can be used as stand-alones in simple applications and they can be combined for more complex use cases.
## LLMs
#### Get predictions from a language model
The basic building block of LangChain is the LLM, which takes in text and generates more text.
As an example, suppose we're building an application that generates a company name based on a company description. In order to do this, we need to initialize an OpenAI model wrapper. In this case, since we want the outputs to be MORE random, we'll initialize our model with a HIGH temperature.
import LLM from "@snippets/get_started/quickstart/llm.mdx"
<LLM/>
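(The imported snippet above isn't shown in this diff; as a rough sketch, with the API at this release it might look like the following, with an illustrative prompt and output.)
```python
from langchain.llms import OpenAI

# A higher temperature makes the generated names more random/creative.
llm = OpenAI(temperature=0.9)
llm.predict("What would be a good company name for a company that makes colorful socks?")
# -> e.g. "Rainbow Socks Co."
```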
## Chat models
Chat models are a variation on language models. While chat models use language models under the hood, the interface they expose is a bit different: rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs.
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are `AIMessage`, `HumanMessage`, `SystemMessage`, and `ChatMessage` -- `ChatMessage` takes in an arbitrary role parameter. Most of the time, you'll just be dealing with `HumanMessage`, `AIMessage`, and `SystemMessage`.
import ChatModel from "@snippets/get_started/quickstart/chat_model.mdx"
<ChatModel/>
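(Again, the referenced snippet isn't included in this diff; a minimal sketch of the message-in, message-out interface described above:)
```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0)
chat(
    [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="I love programming."),
    ]
)
# -> AIMessage(content="J'aime programmer.")
```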
## Prompt templates
Most LLM applications do not pass user input directly into an LLM. Usually they will add the user input to a larger piece of text, called a prompt template, that provides additional context on the specific task at hand.
In the previous example, the text we passed to the model contained instructions to generate a company name. For our application, it'd be great if the user only had to provide the description of a company/product, without having to worry about giving the model instructions.
import PromptTemplateLLM from "@snippets/get_started/quickstart/prompt_templates_llms.mdx"
import PromptTemplateChatModel from "@snippets/get_started/quickstart/prompt_templates_chat_models.mdx"
<Tabs>
<TabItem value="llms" label="LLMs" default>
With PromptTemplates this is easy! In this case our template would be very simple:
<PromptTemplateLLM/>
</TabItem>
<TabItem value="chat_models" label="Chat models">
Similar to LLMs, you can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplate`s. You can use `ChatPromptTemplate`'s `format_messages` method to generate the formatted messages.
Because this is generating a list of messages, it is slightly more complex than the normal prompt template which is generating only a string. Please see the detailed guides on prompts to understand more options available to you here.
<PromptTemplateChatModel/>
</TabItem>
</Tabs>
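(The prompt-template snippets referenced above aren't shown in this diff; for the LLM case, a minimal sketch might be:)
```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template(
    "What is a good name for a company that makes {product}?"
)
prompt.format(product="colorful socks")
# -> "What is a good name for a company that makes colorful socks?"
```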
## Chains
Now that we've got a model and a prompt template, we'll want to combine the two. Chains give us a way to link (or chain) together multiple primitives, like models, prompts, and other chains.
import ChainLLM from "@snippets/get_started/quickstart/chains_llms.mdx"
import ChainChatModel from "@snippets/get_started/quickstart/chains_chat_models.mdx"
<Tabs>
<TabItem value="llms" label="LLMs" default>
The simplest and most common type of chain is an LLMChain, which passes an input first to a PromptTemplate and then to an LLM. We can construct an LLM chain from our existing model and prompt template.
<ChainLLM/>
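Roughly, and reusing the `llm` and `prompt` objects sketched above:

```python
from langchain.chains import LLMChain

chain = LLMChain(llm=llm, prompt=prompt)
# Formats the prompt with the input, then passes the result to the LLM.
chain.run("colorful socks")
```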
There we go, our first chain! Understanding how this simple chain works will set you up well for working with more complex chains.
</TabItem>
<TabItem value="chat_models" label="Chat models">
The `LLMChain` can be used with chat models as well:
<ChainChatModel/>
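For example, reusing the `chat` model and `chat_prompt` template sketched earlier:

```python
from langchain.chains import LLMChain

# `chat` and `chat_prompt` are the chat model and chat prompt template from the sketches above.
chain = LLMChain(llm=chat, prompt=chat_prompt)
chain.run(input_language="English", output_language="French", text="I love programming.")
```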
</TabItem>
</Tabs>
## Agents
import AgentLLM from "@snippets/get_started/quickstart/agents_llms.mdx"
import AgentChatModel from "@snippets/get_started/quickstart/agents_chat_models.mdx"
Our first chain ran a pre-determined sequence of steps. To handle complex workflows, we need to be able to dynamically choose actions based on inputs.
Agents do just this: they use a language model to determine which actions to take and in what order. Agents are given access to tools, and they repeatedly choose a tool, run the tool, and observe the output until they come up with a final answer.
To load an agent, you need to choose a(n):
- LLM/Chat model: The language model powering the agent.
- Tool(s): A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. For a list of predefined tools and their specifications, see the [Tools documentation](/docs/modules/agents/tools/).
- Agent name: A string that references a supported agent class. An agent class is largely parameterized by the prompt the language model uses to determine which action to take. Because this notebook focuses on the simplest, highest level API, this only covers using the standard supported agents. If you want to implement a custom agent, see [here](/docs/modules/agents/how_to/custom_agent.html). For a list of supported agents and their specifications, see [here](/docs/modules/agents/agent_types/).
For this example, we'll be using SerpAPI to query a search engine.
You'll need to install the SerpAPI Python package:
```bash
pip install google-search-results
```
And set the `SERPAPI_API_KEY` environment variable.
<Tabs>
<TabItem value="llms" label="LLMs" default>
<AgentLLM/>
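A sketch of the flow, assuming the `serpapi` and `llm-math` tools and a `SERPAPI_API_KEY` in the environment (the question is illustrative):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# The agent repeatedly picks a tool, runs it, and observes the result until it can answer.
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("What was the high temperature in SF yesterday in Fahrenheit? What is that number raised to the .023 power?")
```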
</TabItem>
<TabItem value="chat_models" label="Chat models">
Agents can also be used with chat models; you can initialize one using `AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION` as the agent type.
<AgentChatModel/>
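For instance, keeping the same `tools` as in the previous sketch but driving the agent with a chat model:

```python
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(temperature=0)
agent = initialize_agent(tools, chat, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("How many people live in Canada? What is that number divided by the population of Iceland?")
```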
</TabItem>
</Tabs>
## Memory
The chains and agents we've looked at so far have been stateless, but for many applications it's necessary to reference past interactions. This is clearly the case with a chatbot for example, where you want it to understand new messages in the context of past messages.
The Memory module gives you a way to maintain application state. The base Memory interface is simple: it lets you update state given the latest run inputs and outputs and it lets you modify (or contextualize) the next input using the stored state.
There are a number of built-in memory systems. The simplest of these is buffer memory, which just prepends the last few inputs/outputs to the current input - we will use this in the example below.
import MemoryLLM from "@snippets/get_started/quickstart/memory_llms.mdx"
import MemoryChatModel from "@snippets/get_started/quickstart/memory_chat_models.mdx"
<Tabs>
<TabItem value="llms" label="LLMs" default>
<MemoryLLM/>
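A minimal sketch using the default buffer memory of `ConversationChain` (the messages are illustrative):

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI

# ConversationChain keeps a buffer of the conversation and prepends it to each new input.
conversation = ConversationChain(llm=OpenAI(temperature=0), verbose=True)

conversation.run("Hi there!")
conversation.run("I'm doing well! Just having a conversation with an AI.")
```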
</TabItem>
<TabItem value="chat_models" label="Chat models">
You can use Memory with chains and agents initialized with chat models. The main difference between this and Memory for LLMs is that rather than trying to condense all previous messages into a string, we can keep them as their own unique memory object.
<MemoryChatModel/>
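Sketched out, this can look like the following, where `MessagesPlaceholder` injects the stored messages into the chat prompt (the system message text is illustrative):

```python
from langchain.chains import ConversationChain
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
)

prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(
        "The following is a friendly conversation between a human and an AI."
    ),
    MessagesPlaceholder(variable_name="history"),  # past messages are inserted here
    HumanMessagePromptTemplate.from_template("{input}"),
])

# return_messages=True keeps history as message objects rather than a single string.
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(memory=memory, prompt=prompt, llm=ChatOpenAI(temperature=0))
conversation.run("Hi there!")
```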
</TabItem>
</Tabs>


@@ -0,0 +1,13 @@
# Conversational
This walkthrough demonstrates how to use an agent optimized for conversation. Other agents are often optimized for using tools to figure out the best response, which is not ideal in a conversational setting where you may want the agent to be able to chat with the user as well.
import Example from "@snippets/modules/agents/agent_types/conversational_agent.mdx"
<Example/>
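As a rough sketch of the setup, assuming a `serpapi` tool and a `SERPAPI_API_KEY` in the environment; the `chat_history` memory key matches what the conversational agent's prompt expects:

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferMemory

llm = OpenAI(temperature=0)
tools = load_tools(["serpapi"], llm=llm)
memory = ConversationBufferMemory(memory_key="chat_history")

agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)
agent_chain.run("Hi, I'm Bob. What's the weather like in Paris today?")
```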
import ChatExample from "@snippets/modules/agents/agent_types/chat_conversation_agent.mdx"
## Using a chat model
<ChatExample/>


@@ -0,0 +1,57 @@
---
sidebar_position: 0
---
# Agent types
## Action agents
Agents use an LLM to determine which actions to take and in what order.
An action can either be using a tool and observing its output, or returning a response to the user.
Here are the agents available in LangChain.
### [Zero-shot ReAct](/docs/modules/agents/agent_types/react.html)
This agent uses the [ReAct](https://arxiv.org/pdf/2205.00445.pdf) framework to determine which tool to use
based solely on the tool's description. Any number of tools can be provided.
This agent requires that a description is provided for each tool.
**Note**: This is the most general purpose action agent.
### [Structured input ReAct](/docs/modules/agents/agent_types/structured_chat.html)
The structured tool chat agent is capable of using multi-input tools.
Older agents are configured to specify an action input as a single string, but this agent can use a tool's argument
schema to create a structured action input. This is useful for more complex tool usage, like precisely
navigating around a browser.
### [OpenAI Functions](/docs/modules/agents/agent_types/openai_functions_agent.html)
Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been explicitly fine-tuned to detect when a
function should be called and respond with the inputs that should be passed to the function.
The OpenAI Functions Agent is designed to work with these models.
### [Conversational](/docs/modules/agents/agent_types/chat_conversation_agent.html)
This agent is designed to be used in conversational settings.
The prompt is designed to make the agent helpful and conversational.
It uses the ReAct framework to decide which tool to use, and uses memory to remember the previous conversation interactions.
### [Self ask with search](/docs/modules/agents/agent_types/self_ask_with_search.html)
This agent utilizes a single tool that should be named `Intermediate Answer`.
This tool should be able to look up factual answers to questions. This agent
is equivalent to the original [self ask with search paper](https://ofir.io/self-ask.pdf),
where a Google search API was provided as the tool.
### [ReAct document store](/docs/modules/agents/agent_types/react_docstore.html)
This agent uses the ReAct framework to interact with a docstore. Two tools must
be provided: a `Search` tool and a `Lookup` tool (they must be named exactly as so).
The `Search` tool should search for a document, while the `Lookup` tool should look up
a term in the most recently found document.
This agent is equivalent to the
original [ReAct paper](https://arxiv.org/pdf/2210.03629.pdf), specifically the Wikipedia example.
## [Plan-and-execute agents](/docs/modules/agents/agent_types/plan_and_execute.html)
Plan and execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).


@@ -0,0 +1,11 @@
# OpenAI functions
Certain OpenAI models (like gpt-3.5-turbo-0613 and gpt-4-0613) have been fine-tuned to detect when a function should be called and respond with the inputs that should be passed to the function.
In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call those functions.
The goal of the OpenAI Function APIs is to more reliably return valid and useful function calls than a generic text completion or chat API.
The OpenAI Functions Agent is designed to work with these models.
import Example from "@snippets/modules/agents/agent_types/openai_functions_agent.mdx";
<Example/>
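A sketch of the setup, assuming the `gpt-3.5-turbo-0613` model, the `serpapi`/`llm-math` tools, and the corresponding API keys in the environment (the question is illustrative):

```python
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=llm)

# The agent relies on the model's native function-calling to pick a tool and its arguments.
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
agent.run("What is the square root of the population of Japan?")
```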


@@ -0,0 +1,11 @@
# Plan and execute
Plan and execute agents accomplish an objective by first planning what to do, then executing the sub tasks. This idea is largely inspired by [BabyAGI](https://github.com/yoheinakajima/babyagi) and then the ["Plan-and-Solve" paper](https://arxiv.org/abs/2305.04091).
The planning is almost always done by an LLM.
The execution is usually done by a separate agent (equipped with tools).
import Example from "@snippets/modules/agents/agent_types/plan_and_execute.mdx"
<Example/>
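A rough sketch, under the assumption that the experimental plan-and-execute helpers (`load_chat_planner`, `load_agent_executor`) are available and that the `serpapi`/`llm-math` tools and API keys are set up; the question is illustrative:

```python
from langchain.agents import load_tools
from langchain.chat_models import ChatOpenAI
from langchain.experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)

model = ChatOpenAI(temperature=0)
tools = load_tools(["serpapi", "llm-math"], llm=model)

planner = load_chat_planner(model)                           # the LLM that writes the plan
executor = load_agent_executor(model, tools, verbose=True)   # the action agent that runs each step

agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
agent.run("Who is the current president of France, and what is their age multiplied by 2?")
```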


@@ -0,0 +1,15 @@
# ReAct
This walkthrough showcases using an agent to implement the [ReAct](https://react-lm.github.io/) logic.
import Example from "@snippets/modules/agents/agent_types/react.mdx"
<Example/>
## Using chat models
You can also create ReAct agents that use chat models instead of LLMs as the agent driver.
import ChatExample from "@snippets/modules/agents/agent_types/react_chat.mdx"
<ChatExample/>


@@ -0,0 +1,10 @@
# Structured tool chat
The structured tool chat agent is capable of using multi-input tools.
Older agents are configured to specify an action input as a single string, but this agent can use the provided tools' `args_schema` to populate the action input.
import Example from "@snippets/modules/agents/agent_types/structured_chat.mdx"
<Example/>
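A minimal sketch with a hypothetical multi-input tool built from a plain Python function via `StructuredTool.from_function` (the tool and question are illustrative, not part of the original walkthrough):

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import StructuredTool


def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b


# The function's signature and docstring become the tool's args schema and description.
tools = [StructuredTool.from_function(multiply)]

agent = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("What is 7 multiplied by 13?")
```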


@@ -0,0 +1,2 @@
label: 'How-to'
position: 1


@@ -0,0 +1,14 @@
# Custom LLM Agent
This notebook goes through how to create your own custom LLM agent.
An LLM agent consists of three parts:
- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
- LLM: This is the language model that powers the agent
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object
import Example from "@snippets/modules/agents/how_to/custom_llm_agent.mdx"
<Example/>


@@ -0,0 +1,14 @@
# Custom LLM Agent (with a ChatModel)
This notebook goes through how to create your own custom agent based on a chat model.
An LLM chat agent consists of three parts:
- PromptTemplate: This is the prompt template that can be used to instruct the language model on what to do
- ChatModel: This is the language model that powers the agent
- `stop` sequence: Instructs the LLM to stop generating as soon as this string is found
- OutputParser: This determines how to parse the LLMOutput into an AgentAction or AgentFinish object
import Example from "@snippets/modules/agents/how_to/custom_llm_chat_agent.mdx"
<Example/>


@@ -0,0 +1,16 @@
# Replicating MRKL
This walkthrough demonstrates how to replicate the [MRKL](https://arxiv.org/pdf/2205.00445.pdf) system using agents.
This uses the example Chinook database.
To set it up follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the `.db` file in a notebooks folder at the root of this repository.
import Example from "@snippets/modules/agents/how_to/mrkl.mdx"
<Example/>
## With a chat model
import ChatExample from "@snippets/modules/agents/how_to/mrkl_chat.mdx"
<ChatExample/>


@@ -0,0 +1,51 @@
---
sidebar_position: 4
---
# Agents
Some applications require a flexible chain of calls to LLMs and other tools based on user input. The **Agent** interface provides the flexibility for such applications. An agent has access to a suite of tools, and determines which ones to use depending on the user input. Agents can use multiple tools, and use the output of one tool as the input to the next.
There are two main types of agents:
- **Action agents**: at each timestep, decide on the next action using the outputs of all previous actions
- **Plan-and-execute agents**: decide on the full sequence of actions up front, then execute them all without updating the plan
Action agents are suitable for small tasks, while plan-and-execute agents are better for complex or long-running tasks that require maintaining long-term objectives and focus. Often the best approach is to combine the dynamism of an action agent with the planning abilities of a plan-and-execute agent by letting the plan-and-execute agent use action agents to execute plans.
For a full list of agent types see [agent types](/docs/modules/agents/agent_types/). Additional abstractions involved in agents are:
- [**Tools**](/docs/modules/agents/tools/): the actions an agent can take. What tools you give an agent highly depend on what you want the agent to do
- [**Toolkits**](/docs/modules/agents/toolkits/): wrappers around collections of tools that can be used together for a specific use case. For example, in order for an agent to
interact with a SQL database it will likely need one tool to execute queries and another to inspect tables
## Action agents
At a high-level an action agent:
1. Receives user input
2. Decides which tool, if any, to use and the tool input
3. Calls the tool and records the output (also known as an "observation")
4. Decides the next step using the history of tools, tool inputs, and observations
5. Repeats 3-4 until it determines it can respond directly to the user
Action agents are wrapped in **agent executors**, which are responsible for calling the agent, getting back an action and action input, calling the tool that the action references with the generated input, getting the output of the tool, and then passing all that information back into the agent to get the next action it should take.
Although an agent can be constructed in many ways, it typically involves these components:
- **Prompt template**: Responsible for taking the user input and previous steps and constructing a prompt
to send to the language model
- **Language model**: Takes the prompt with user input and action history and decides what to do next
- **Output parser**: Takes the output of the language model and parses it into the next action or a final answer
## Plan-and-execute agents
At a high-level a plan-and-execute agent:
1. Receives user input
2. Plans the full sequence of steps to take
3. Executes the steps in order, passing the outputs of past steps as inputs to future steps
The most typical implementation is to have the planner be a language model, and the executor be an action agent. Read more [here](/docs/modules/agents/agent_types/plan_and_execute.html).
## Get started
import GetStarted from "@snippets/modules/agents/get_started.mdx"
<GetStarted/>


@@ -0,0 +1,10 @@
---
sidebar_position: 3
---
# Toolkits
Toolkits are collections of tools that are designed to be used together for specific tasks and have convenience loading methods.
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -0,0 +1,2 @@
label: 'How-to'
position: 0


@@ -0,0 +1,17 @@
---
sidebar_position: 2
---
# Tools
Tools are interfaces that an agent can use to interact with the world.
## Get started
Tools are functions that agents can use to interact with the world.
These tools can be generic utilities (e.g. search), other chains, or even other agents.
Currently, tools can be loaded with the following snippet:
import GetStarted from "@snippets/modules/agents/tools/get_started.mdx"
<GetStarted/>


@@ -0,0 +1 @@
label: 'Integrations'


@@ -0,0 +1,2 @@
label: 'How-to'
position: 0


@@ -0,0 +1,10 @@
---
sidebar_position: 5
---
# Callbacks
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
import GetStarted from "@snippets/modules/callbacks/get_started.mdx"
<GetStarted/>
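For a flavor of the API, here is a minimal sketch using the built-in `StdOutCallbackHandler` (the prompt is illustrative):

```python
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = StdOutCallbackHandler()
llm = OpenAI(temperature=0)
prompt = PromptTemplate.from_template("1 + {number} = ")

# Constructor callbacks fire for every run of this chain; callbacks can also be passed per-call.
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)
```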


@@ -0,0 +1 @@
label: 'Integrations'


@@ -0,0 +1,7 @@
# Analyze Document
The AnalyzeDocumentChain can be used as an end-to-end chain. This chain takes in a single document, splits it up, and then runs it through a CombineDocumentsChain.
import Example from "@snippets/modules/chains/additional/analyze_document.mdx"
<Example/>
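A rough sketch, assuming a local text file (the path is a placeholder) and a map-reduce summarization chain as the combine-documents step:

```python
from langchain.chains import AnalyzeDocumentChain
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
summary_chain = load_summarize_chain(llm, chain_type="map_reduce")

# Splits the raw text into chunks and feeds them to the combine-documents chain.
summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)

with open("state_of_the_union.txt") as f:  # placeholder document
    text = f.read()
summarize_document_chain.run(text)
```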


@@ -0,0 +1,7 @@
# Self-critique chain with constitutional AI
The ConstitutionalChain is a chain that ensures the output of a language model adheres to a predefined set of constitutional principles. By incorporating specific rules and guidelines, the ConstitutionalChain filters and modifies the generated content to align with these principles, thus providing more controlled, ethical, and contextually appropriate responses. This mechanism helps maintain the integrity of the output while minimizing the risk of generating content that may violate guidelines, be offensive, or deviate from the desired context.
import Example from "@snippets/modules/chains/additional/constitutional_chain.mdx"
<Example/>
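Sketched out, assuming a simple question-answering `LLMChain` to wrap and an illustrative principle:

```python
from langchain.chains import LLMChain
from langchain.chains.constitutional_ai.base import ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0)
qa_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("Question: {question}\nAnswer:"))

principle = ConstitutionalPrinciple(
    name="Ethical Principle",
    critique_request="The model should only talk about ethical and legal things.",
    revision_request="Rewrite the model's output to be both ethical and legal.",
)

# Critiques and revises the wrapped chain's output according to the principle.
constitutional_chain = ConstitutionalChain.from_llm(
    chain=qa_chain,
    constitutional_principles=[principle],
    llm=llm,
    verbose=True,
)
constitutional_chain.run(question="What's the best way to get back at someone who annoyed me?")
```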


@@ -0,0 +1,8 @@
---
sidebar_position: 4
---
# Additional
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -0,0 +1,8 @@
# Moderation
This notebook walks through examples of how to use a moderation chain, and several common ways of doing so. Moderation chains are useful for detecting text that could be hateful, violent, etc. This can be useful to apply both to user input and to the output of a language model. Some API providers, like OpenAI, [specifically prohibit](https://beta.openai.com/docs/usage-policies/use-case-policy) you, or your end users, from generating some types of harmful content. To comply with this (and to generally prevent your application from being harmful) you may often want to append a moderation chain to any LLMChain, in order to make sure any output the LLM generates is not harmful.
If the content passed into the moderation chain is harmful, there is no single best way to handle it; it probably depends on your application. Sometimes you may want to throw an error in the chain (and have your application handle that). Other times, you may want to return something to the user explaining that the text was harmful. There could even be other ways to handle it! We will cover all of these ways in this walkthrough.
import Example from "@snippets/modules/chains/additional/moderation.mdx"
<Example/>
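As a quick sketch of the simplest usage (assuming an OpenAI API key for the moderation endpoint):

```python
from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()
moderation_chain.run("This is fine")  # non-harmful text is passed through unchanged

# With error=True the chain raises an exception when the input is flagged,
# instead of returning a canned warning message.
strict_moderation_chain = OpenAIModerationChain(error=True)
```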


@@ -0,0 +1,7 @@
# Dynamically selecting from multiple prompts
This notebook demonstrates how to use the `RouterChain` paradigm to create a chain that dynamically selects the prompt to use for a given input. Specifically we show how to use the `MultiPromptChain` to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt.
import Example from "@snippets/modules/chains/additional/multi_prompt_router.mdx"
<Example/>


@@ -0,0 +1,7 @@
# Dynamically selecting from multiple retrievers
This notebook demonstrates how to use the `RouterChain` paradigm to create a chain that dynamically selects which Retrieval system to use. Specifically we show how to use the `MultiRetrievalQAChain` to create a question-answering chain that selects the retrieval QA chain which is most relevant for a given question, and then answers the question using it.
import Example from "@snippets/modules/chains/additional/multi_retrieval_qa_router.mdx"
<Example/>


@@ -0,0 +1,13 @@
# Document QA
Here we walk through how to use LangChain for question answering over a list of documents. Under the hood we'll be using our [Document chains](../document.html).
import Example from "@snippets/modules/chains/additional/question_answering.mdx"
<Example/>
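A small sketch using the `stuff` chain type over a couple of hand-made documents (the document contents and question are illustrative):

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.schema import Document

llm = OpenAI(temperature=0)
chain = load_qa_chain(llm, chain_type="stuff")

docs = [
    Document(page_content="LangChain is a framework for developing applications powered by language models."),
    Document(page_content="Chains combine prompts, models, and other components into reusable pipelines."),
]
chain.run(input_documents=docs, question="What is LangChain?")
```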
## Document QA with sources
import ExampleWithSources from "@snippets/modules/chains/additional/qa_with_sources.mdx"
<ExampleWithSources/>


@@ -0,0 +1,16 @@
---
sidebar_position: 2
---
# Documents
These are the core chains for working with Documents. They are useful for summarizing documents, answering questions over documents, extracting information from documents, and more.
These chains all implement a common interface:
import Interface from "@snippets/modules/chains/document/combine_docs.mdx"
<Interface/>
import DocCardList from "@theme/DocCardList";
<DocCardList />


@@ -0,0 +1,5 @@
# Map reduce
The map reduce documents chain first applies an LLM chain to each document individually (the Map step), treating the chain output as a new document. It then passes all the new documents to a separate combine documents chain to get a single output (the Reduce step). It can optionally first compress, or collapse, the mapped documents to make sure that they fit in the combine documents chain (which will often pass them to an LLM). This compression step is performed recursively if necessary.
![map_reduce_diagram](/img/map_reduce.jpg)
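As a sketch, the easiest way to get this chain is through a loader such as `load_summarize_chain`, assuming `docs` is a list of `Document` objects:

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# chain_type="map_reduce" maps an LLM chain over each document, then reduces the results.
chain = load_summarize_chain(llm, chain_type="map_reduce")
# chain.run(docs)
```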


@@ -0,0 +1,5 @@
# Map re-rank
The map re-rank documents chain runs an initial prompt on each document that not only tries to complete a task but also gives a score for how certain it is in its answer. The highest scoring response is returned.
![map_rerank_diagram](/img/map_rerank.jpg)
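A sketch using the question-answering loader (assuming `docs` is a list of `Document` objects and the question is supplied by the caller):

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# Each document is answered and scored independently; the highest-scoring answer wins.
chain = load_qa_chain(llm, chain_type="map_rerank")
# chain.run(input_documents=docs, question="...")
```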


@@ -0,0 +1,12 @@
---
sidebar_position: 1
---
# Refine
The refine documents chain constructs a response by looping over the input documents and iteratively updating its answer. For each document, it passes all non-document inputs, the current document, and the latest intermediate answer to an LLM chain to get a new answer.
Since the Refine chain only passes a single document to the LLM at a time, it is well-suited for tasks that require analyzing more documents than can fit in the model's context.
The obvious tradeoff is that this chain will make far more LLM calls than, for example, the Stuff documents chain.
There are also certain tasks which are difficult to accomplish iteratively. For example, the Refine chain can perform poorly when documents frequently cross-reference one another or when a task requires detailed information from many documents.
![refine_diagram](/img/refine.jpg)
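A sketch via the summarization loader (assuming `docs` is a list of `Document` objects):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# The summary is refined one document at a time.
chain = load_summarize_chain(llm, chain_type="refine")
# chain.run(docs)
```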


@@ -0,0 +1,12 @@
---
sidebar_position: 0
---
# Stuff
The stuff documents chain ("stuff" as in "to stuff" or "to fill") is the most straightforward of the document chains. It takes a list of documents, inserts them all into a prompt and passes that prompt to an LLM.
This chain is well-suited for applications where documents are small and only a few are passed in for most calls.
![stuff_diagram](/img/stuff.jpg)
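A sketch via the summarization loader (assuming `docs` is a small list of `Document` objects that fits in the model's context window):

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# All documents are inserted into a single prompt.
chain = load_summarize_chain(llm, chain_type="stuff")
# chain.run(docs)
```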


@@ -0,0 +1,8 @@
---
sidebar_position: 1
---
# Foundational
import DocCardList from "@theme/DocCardList";
<DocCardList />

Some files were not shown because too many files have changed in this diff.