Commit Graph

2427 Commits

Author SHA1 Message Date
Eugene Yurtsev
c306364b06
langchain[patch]: Update more code to use langchain community as an optional dependency (#21170)
More code to use langchain community as an optional dependency
2024-05-02 09:05:48 -04:00
Eugene Yurtsev
94a838740e
langchain[patch]: Migrate more code in utils to use optional langchain import (#21166)
Moving the is_interactive util to avoid circular deps
2024-05-01 17:18:42 -04:00
Eugene Yurtsev
23fdd320bc
langchain[patch]: Migrate more code to use optional community in agents namespace (#21167) 2024-05-01 16:25:44 -04:00
Eugene Yurtsev
44602bdc20
langchain[patch],community[minor]: Move load_tools to community (#21158)
Move load tools to community
2024-05-01 16:05:41 -04:00
Eugene Yurtsev
9932f49b3e
langchain[patch]: Migrate llms to use optional community imports (#21101) 2024-05-01 16:04:45 -04:00
Eugene Yurtsev
57e8e70daa
langchain[patch]: Migrate chat models to optional community imports (#21090)
Migrate chat models to optional community imports
2024-05-01 16:04:12 -04:00
Eugene Yurtsev
2914abd747
langchain[patch]: Fix how the serializable test identifies serializable objects (#21165)
dir() will not work if we're using optional imports. The only way to do this is by using contents of __all__
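A rough sketch of the idea (illustrative only, not the actual test code; assumes modules expose lazy attributes via a module-level `__getattr__` and list them in `__all__`):
```python
import importlib


def public_objects(module_name: str) -> list:
    """Collect a module's exported objects by walking __all__.

    With optional/lazy imports implemented via a module-level __getattr__,
    dir(module) only reports attributes that have already been materialized,
    so exported names have to come from __all__ and be imported explicitly.
    """
    module = importlib.import_module(module_name)
    return [getattr(module, name) for name in getattr(module, "__all__", [])]
```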
2024-05-01 15:56:11 -04:00
Eugene Yurtsev
23c5d87311
langchain[patch]: Migrate utils to use optional langchain_community (#21163)
Migrate utils to use optional imports from langchain community
2024-05-01 15:24:02 -04:00
Eugene Yurtsev
bec3eee3fa
langchain[patch]: Migrate retrievers to use optional langchain community imports (#21155) 2024-05-01 14:44:44 -04:00
Eugene Yurtsev
43110daea5
langchain[patch]: Update some agent tool kits to handle community import as optional (#21157)
A few things that were not caught by the migration script
2024-05-01 14:22:54 -04:00
Eugene Yurtsev
59f10ab3e0
langchain[patch]: Migrate embeddings to optional imports (#21099) 2024-05-01 13:47:37 -04:00
Eugene Yurtsev
2f709d94d7
langchain[patch]: Migrate vectorstores to use optional langchain community imports (#21150) 2024-05-01 13:33:37 -04:00
Eugene Yurtsev
7230e430db
langchain[patch]: Migrate top level files to use optional langchain community (#21152)
Migrate a few top level files to treat langchain community as an optional dependency
2024-05-01 13:23:03 -04:00
Eugene Yurtsev
7a39fe60da
langchain[patch]: Migrate utilities to handle langchain community as optional (#21149) 2024-05-01 13:09:34 -04:00
Eugene Yurtsev
b879184595
langchain[patch]: embeddings distance move import of openai embeddings into local scope (#21148) 2024-05-01 12:51:51 -04:00
Eugene Yurtsev
0e5bf16d00
langchain[patch]: Migrate document loaders to use optional langchain community imports (#21095) 2024-05-01 11:26:25 -04:00
Eugene Yurtsev
1ce1a10f2b
langchain[patch],community[minor]: Move graph index creator (#20795)
Move graph index creator to community
2024-05-01 10:04:30 -04:00
Eugene Yurtsev
aa0bc7467c
langchain[patch]: Migrate agents module into optional imports for community (#21088) 2024-05-01 09:36:03 -04:00
Eugene Yurtsev
86ff8a3fb4
langchain[patch]: Update docstore module to use optional imports from community (#21091) 2024-05-01 09:35:05 -04:00
Eugene Yurtsev
d640605694
langchain[patch]: Migrate chat loaders to optional community imports (#21089)
Migrate chat loaders to optional community imports
2024-05-01 09:34:44 -04:00
Eugene Yurtsev
2fcab9acd9
langchain[patch]: Upgrade storage to treat langchain community as optional (#21105) 2024-05-01 09:33:31 -04:00
Erick Friis
14422a4220
langchain: fix core dep (#21128) 2024-04-30 14:55:12 -07:00
Erick Friis
6c938da302
langchain: release 0.1.17 (#21125) 2024-04-30 14:43:59 -07:00
Eugene Yurtsev
bf95414758
langchain[minor]: enhance unit test to test imports recursively (#21122) 2024-04-30 17:05:53 -04:00
Eugene Yurtsev
e4f51f59a2
langchain[patch]: Migrate tools to treat community imports as optional (#21117)
Migrate tools to treat community imports as optional
2024-04-30 16:26:18 -04:00
Eugene Yurtsev
9e788f09c6
langchain[patch]: Migrate output parsers to support optional community imports (#21103)
Migrate output parsers
2024-04-30 16:24:29 -04:00
Eugene Yurtsev
3853fe9f64
langchain[patch]: Migrate graphs to use optional community imports (#21100)
Migrate graphs to use optional community imports.
2024-04-30 16:24:06 -04:00
Eugene Yurtsev
8658d52587
langchain[patch]: Upgrade prompts to optional imports (#21078)
Upgrades prompts module to use optional imports.

This code was generated with a migration script, but had to be adjusted
manually a bit.

Testing in preparation for applying this code modification across the
rest of the modules in the langchain package, to reverse the dependency
between langchain-community and langchain.
2024-04-30 16:23:39 -04:00
Eugene Yurtsev
9b6d04a187
langchain[patch]: Migrate document transformers (#21098)
Migrate document transformers
2024-04-30 16:20:02 -04:00
Eugene Yurtsev
aec13a6123
langchain[patch]: Migrate callbacks module to use optional imports for community (#21086) 2024-04-30 16:19:13 -04:00
Eugene Yurtsev
3c064a757f
core[minor],langchain[patch],community[patch]: Move storage interfaces to core (#20750)
* Move storage interface to core
* Move in memory and file system implementation to core
2024-04-30 13:14:26 -04:00
Charlie Marsh
8f38b7a725
multiple: Remove unnecessary Ruff suppression comments (#21050)
## Summary

I ran `ruff check --extend-select RUF100 -n` to identify `# noqa`
comments that weren't having any effect in Ruff, and then `ruff check
--extend-select RUF100 -n --fix` on select files to remove all of the
unnecessary `# noqa: F401` violations. It's possible that these were
needed at some point in the past, but they're not necessary in Ruff
v0.1.15 (used by LangChain) or in the latest release.

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-30 17:13:48 +00:00
Eugene Yurtsev
c8f18a2524
langchain[patch]: Update import handling in adapters (#21079) 2024-04-30 10:55:29 -04:00
Eugene Yurtsev
845d8e0025
langchain[patch]: Update handling of deprecation warnings (#21083)
Chains should not be emitting deprecation warnings.
2024-04-30 10:30:23 -04:00
Christophe Bornet
5c77f45b06
community[minor]: Add async methods to CassandraCache and CassandraSemanticCache (#20654) 2024-04-30 10:27:44 -04:00
Leonid Ganeline
08d08d7c83
docs: langchain docstrings updates (#21032)
Added missing docstrings. Formatted docstrings into a consistent format.
2024-04-29 17:40:44 -04:00
Eugene Yurtsev
f479a337cc
langchain[patch]: replace deprecated imports with imports from langchain_core (#21033)
* Output of running the migration script.
* Ran only against langchain code itself and not the unit tests.
2024-04-29 15:34:31 -04:00
Eugene Yurtsev
82d4afcac0
langchain[minor]: Code to handle dynamic imports (#20893)
Proposing to centralize code for handling dynamic imports. This allows treating langchain-community as an optional dependency.

---

The proposal is to scan the code base and to replace all existing imports with dynamic imports using this functionality.
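As a rough sketch of what a centralized dynamic-import helper can look like (hypothetical helper and lookup table, not the actual langchain implementation):
```python
from importlib import import_module
from typing import Any, Callable, Dict


def create_dynamic_importer(module_lookup: Dict[str, str]) -> Callable[[str], Any]:
    """Build a module-level __getattr__ that imports names on first access.

    module_lookup maps an exported name to the module that actually defines it,
    e.g. {"FAISS": "langchain_community.vectorstores"}.
    """

    def __getattr__(name: str) -> Any:
        if name in module_lookup:
            try:
                module = import_module(module_lookup[name])
            except ImportError as exc:
                raise ImportError(
                    f"{name!r} requires the langchain-community package. "
                    "Install it with `pip install langchain-community`."
                ) from exc
            return getattr(module, name)
        raise AttributeError(f"module has no attribute {name!r}")

    return __getattr__


# Illustrative usage inside a package __init__.py:
# __getattr__ = create_dynamic_importer({"FAISS": "langchain_community.vectorstores"})
# __all__ = ["FAISS"]
```
This way langchain-community is only imported when one of the re-exported names is actually accessed.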
2024-04-29 15:34:03 -04:00
Naveen Tatikonda
8bbdb4f6a0
community[patch]: Add OpenSearch as semantic cache (#20254)
### Description
Use OpenSearch vector store as Semantic Cache.

### Twitter Handle
**@OpenSearchProj**

---------

Signed-off-by: Naveen Tatikonda <navtat@amazon.com>
Co-authored-by: Harish Tatikonda <harishtatikonda@Harishs-MacBook-Air.local>
Co-authored-by: EC2 Default User <ec2-user@ip-172-31-31-155.ec2.internal>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-27 00:20:24 +00:00
Leonid Kuligin
893a924b90
core[minor], community[patch], langchain[patch]: move BaseChatLoader to core (#19607)
Move BaseChatLoader and BaseToolkit from community to core.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-26 21:45:51 +00:00
ccurme
bf16cefd18
langchain: deprecate create_structured_output_runnable (#20933) 2024-04-26 14:00:40 -04:00
ccurme
891ae37437
langchain: support PineconeVectorStore in self query retriever (#20905)
`langchain_pinecone.Pinecone` is deprecated in favor of
`PineconeVectorStore`, and is currently a subclass of
`PineconeVectorStore`.
```python
@deprecated(since="0.0.3", removal="0.2.0", alternative="PineconeVectorStore")
class Pinecone(PineconeVectorStore):
    """Deprecated. Use PineconeVectorStore instead."""

    pass
```
2024-04-25 20:54:58 +00:00
Bagatur
5b83130855
core[minor], langchain[patch], community[patch]: mv StructuredQuery (#20849)
mv StructuredQuery to core
2024-04-25 09:40:26 -07:00
Ivaylo Bratoev
7c5063ef60
infra: fix how Poetry is installed in the dev container (#20521)
Currently, when a new dev container is created, poetry does not work in
it with the error "No module named 'rapidfuzz'".

Install Poetry outside the project venv so that poetry and project
dependencies do not get mixed. Use pipx to install poetry securely in
its own isolated environment.

Issue: #12237

Twitter handle: https://twitter.com/ibratoev

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 17:33:25 -07:00
ccurme
481d3855dc
patch: remove usage of llm, chat model __call__ (#20788)
- `llm(prompt)` -> `llm.invoke(prompt)`
- `llm(prompt=prompt)` -> `llm.invoke(prompt)` (same with `messages=`)
- `llm(prompt, callbacks=callbacks)` -> `llm.invoke(prompt,
config={"callbacks": callbacks})`
- `llm(prompt, **kwargs)` -> `llm.invoke(prompt, **kwargs)`
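A short before/after sketch, using `FakeListLLM` from langchain_core as a stand-in model (illustrative only):
```python
from langchain_core.language_models import FakeListLLM

llm = FakeListLLM(responses=["a joke"])

# Before (deprecated __call__ style):
#   text = llm("Tell me a joke", callbacks=callbacks)
# After:
text = llm.invoke("Tell me a joke")
# Callbacks now travel in the config dict:
#   text = llm.invoke("Tell me a joke", config={"callbacks": callbacks})
```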
2024-04-24 19:39:23 -04:00
Nikita Pokidyshev
9e983c9500
langchain[patch]: fix agent_token_buffer_memory not working with openai tools (#20708)
- **Description:** fix a bug in the agent_token_buffer_memory
- **Issue:** agent_token_buffer_memory was not working with openai tools
- **Dependencies:** None
- **Twitter handle:** @pokidyshef
2024-04-24 15:51:58 -07:00
Alex Lee
243ba71b28
langchain[patch]: add aprep_output method to langchain/chains/base.py (#20748)
## Description

Add `aprep_output` method to `langchain/chains/base.py`. Some downstream
`ChatMessageHistory` objects that use async connections require an async
way to append to the context.

It turned out that `ainvoke()` was calling `prep_output` which is
synchronous.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-24 22:16:25 +00:00
Eugene Yurtsev
d8aa72f51d
core[minor],langchain[patch]: Move base indexing interface and logic to core (#20667)
This PR moves the interface and the logic to core.

The following changes to namespaces:


`indexes` -> `indexing`
`indexes._api` -> `indexing.api`


Testing code is intentionally duplicated for now since it's testing different
implementations of the record manager (in-memory vs. SQL).

Common logic will need to be pulled out into the test client.


A follow up PR will move the SQL based implementation outside of
LangChain.
2024-04-24 13:18:42 -04:00
Eugene Yurtsev
a7c347ab35
langchain[patch]: Update evaluation logic that instantiates a default LLM (#20760)
Favor langchain_openai over langchain_community for evaluation logic.

---------

Co-authored-by: ccurme <chester.curme@gmail.com>
2024-04-23 16:09:32 -04:00
Eugene Yurtsev
72f720fa38
langchain[major]: Remove default instantiations of LLMs from VectorstoreToolkit (#20794)
Remove default instantiation from vectorstore toolkit.
2024-04-23 16:09:14 -04:00
ccurme
42de5168b1
langchain: deprecate LLMChain, RetrievalQA, and ConversationalRetrievalChain (#20751) 2024-04-23 15:55:34 -04:00
Eugene Yurtsev
1c89e45c14
langchain[major]: breaks some chains to remove hidden defaults (#20759)
Breaks some chains in langchain to remove hidden chat model / llm instantiation.
2024-04-23 11:11:40 -04:00
Eugene Yurtsev
ad6b5f84e5
community[patch],core[minor]: Move in memory cache implementation to core (#20753)
This PR moves the InMemoryCache implementation from community to core.
2024-04-23 11:10:11 -04:00
Eugene Yurtsev
645b1e142e
core[minor],langchain[patch],community[patch]: Move InMemory and File implementations of Chat History to core (#20752)
This PR moves the implementations for chat history to core, so it's
easier to determine which dependencies need to be broken and where to
add deprecation warnings.
2024-04-23 10:22:11 -04:00
Eugene Yurtsev
936c6cc74a
langchain[patch]: Add missing deprecation for openai adapters (#20668)
Add missing deprecation for openai adapters
2024-04-22 14:05:55 -04:00
ccurme
c010ec8b71
patch: deprecate (a)get_relevant_documents (#20477)
- `.get_relevant_documents(query)` -> `.invoke(query)`
- `.get_relevant_documents(query=query)` -> `.invoke(query)`
- `.get_relevant_documents(query, callbacks=callbacks)` ->
`.invoke(query, config={"callbacks": callbacks})`
- `.get_relevant_documents(query, **kwargs)` -> `.invoke(query,
**kwargs)`
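A small before/after sketch with a toy retriever (illustrative only):
```python
from typing import List

from langchain_core.callbacks import CallbackManagerForRetrieverRun
from langchain_core.documents import Document
from langchain_core.retrievers import BaseRetriever


class ToyRetriever(BaseRetriever):
    """Tiny retriever used only to illustrate the call-site migration."""

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        return [Document(page_content=f"result for {query}")]


retriever = ToyRetriever()

# Before (deprecated): docs = retriever.get_relevant_documents("cats")
# After:
docs = retriever.invoke("cats")
```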

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-22 11:14:53 -04:00
Bagatur
d0cee65cdc
langchain[patch]: langchain-pinecone self query support (#20702) 2024-04-21 15:42:39 -07:00
Leonid Ganeline
06d18c106d
langchain[patch]: example_selector import fix (#20676)
Cleaned up updated imports
2024-04-19 21:42:18 -04:00
Leonid Ganeline
d6470aab60
langchain: docstore import fix (#20678)
Cleaned up imports
2024-04-19 21:41:36 -04:00
Souls-R
36084e7500
docs: fix variable name typo in example code (#20658)
This pull request corrects a mistake in the variable name within the
example code. The variable doc_schema has been changed to dog_schema to
fix the error.
2024-04-19 14:08:25 +00:00
Sivaudha
baedc3ec0a
langchain[minor]: Databricks vector search self query integration (#20627)
- Enable self querying feature for databricks vector search

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-19 03:44:38 +00:00
Leonid Ganeline
95dc90609e
experimental[patch]: prompts import fix (#20534)
Replaced `from langchain.prompts` with `from langchain_core.prompts`
where it is appropriate.
Most of the changes go to `langchain_experimental`.
Similar to #20348
2024-04-18 16:09:11 -04:00
aditya thomas
cea379e7c7
community, core[callbacks]: move FileCallbackHandler from community to core (#20495)
**Description:** Move `FileCallbackHandler` from community to core
**Issue:** #20493 
**Dependencies:** None

(imo) `FileCallbackHandler` is a built-in LangChain callback handler
like `StdOutCallbackHandler` and should properly be in core.
2024-04-17 22:29:30 -04:00
pjb157
479be3cc91
community[minor]: Unify Titan Takeoff Integrations and Adding Embedding Support (#18775)
**Community: Unify Titan Takeoff Integrations and Adding Embedding
Support**

 **Description:** 
Titan Takeoff no longer matches either of the integrations in the
community folder. The two integrations (TitanTakeoffPro and
TitanTakeoff) were causing confusion with clients, so the code has been
moved into one place and an alias created for backwards compatibility.
Added the Takeoff Client Python package to do the bulk of the work with
the requests, because this package is actively updated with new versions
of Takeoff. So this integration will be far more robust and will not
degrade as badly over time.

**Issue:**
Fixes bugs in the old Titan integrations and unifies the code, with added
unit test coverage to avoid future problems.

**Dependencies:**
Added the optional dependency takeoff-client. All imports still work
without the dependency, including the Titan Takeoff classes, but
initialisation will fail if takeoff-client is not pip installed.

**Twitter**
@MeryemArik9

Thanks all :)

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-17 01:43:35 +00:00
Prashanth Rao
295b9b704b
community[patch]: Improve Kuzu Cypher generation prompt (#20481)
- [x] **PR title**: "community: improve kuzu cypher generation prompt"

- [x] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** Improves the Kùzu Cypher generation prompt to be more
robust to open source LLM outputs
    - **Issue:** N/A
    - **Dependencies:** N/A
    - **Twitter handle:** @kuzudb

- [x] **Add tests and docs**: If you're adding a new integration, please
include
No new tests (non-breaking. change)

- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
2024-04-16 18:01:36 -07:00
ccurme
22da9f5f3f
update scheduled tests (#20526)
repurpose scheduled tests to test over provider packages
2024-04-16 16:49:46 -04:00
Leonid Ganeline
45d045b2c5
core[minor], langchain[patch]: tools dependencies refactoring (#18759)
The `langchain.tools`
[namespace](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.tools)
can be completely eliminated by moving one class and 3 functions into
`core`. It makes sense since the class and functions are very core.
2024-04-16 14:15:09 -04:00
ccurme
38faa74c23
community[patch]: update use of deprecated llm methods (#20393)
.predict and .predict_messages for BaseLanguageModel and BaseChatModel
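A short before/after sketch using the fake chat model from langchain_core as a stand-in (illustrative only):
```python
from langchain_core.language_models import FakeListChatModel
from langchain_core.messages import HumanMessage

chat = FakeListChatModel(responses=["hi there"])

# Before (deprecated):
#   chat.predict("hello")
#   chat.predict_messages([HumanMessage(content="hello")])
# After:
chat.invoke("hello")
chat.invoke([HumanMessage(content="hello")])
```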
2024-04-12 17:28:23 -04:00
Leonid Ganeline
e512d3c6a6
langchain: callbacks imports fix (#20348)
Replaced all `from langchain.callbacks` imports with `from
langchain_core.callbacks`.
Changes are in `langchain` and `langchain_experimental`.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-04-12 20:13:14 +00:00
Eugene Yurtsev
6470b30173
langchain[patch]: Add deprecation warning to extraction chains (#20224)
Add deprecation warnings to extraction chains
2024-04-12 10:24:32 -04:00
Eugene Yurtsev
b65a1d4cfd
langchain[patch]: Add another unit test for indexing code (#20387)
Add another unit test for indexing
2024-04-12 10:19:18 -04:00
Bagatur
6608089030
langchain[patch]: Release 0.1.16 (#20335) 2024-04-11 09:28:37 -07:00
Bagatur
e936fba428
langchain[patch]: agents check prompt partial vars (#20303) 2024-04-11 03:55:09 -07:00
Yuki Watanabe
eef19954f3
core[patch]: fix duplicated kwargs in _load_sql_database_chain (#19908)
`kwargs` is specified twice in [this
line](3218463f6a/libs/langchain/langchain/chains/loading.py (L386)),
causing a runtime error when passing any keyword arguments.
2024-04-10 12:20:28 -07:00
Leonid Ganeline
4cb5f4c353
community[patch]: import flattening fix (#20110)
This PR should make it easier for linters to do type checking and for IDEs to jump to definition of code.

See #20050 as a template for this PR.
- As a byproduct: Added 3 missing `test_imports`.
- Added the missing `SolarChat` to __init__.py and added it to the
test_import unit test.
- Added `# type: ignore` to fix linting. It is not clear why linting
errors appear after the above changes.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-10 13:01:19 -04:00
ccurme
21c1ce0bc1
update agents to use tool call messages (#20074)
```python
from langchain.agents import AgentExecutor, create_tool_calling_agent, tool
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant"),
        MessagesPlaceholder("chat_history", optional=True),
        ("human", "{input}"),
        MessagesPlaceholder("agent_scratchpad"),
    ]
)
model = ChatAnthropic(model="claude-3-opus-20240229")

@tool
def magic_function(input: int) -> int:
    """Applies a magic function to an input."""
    return input + 2

tools = [magic_function]

agent = create_tool_calling_agent(model, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": "what is the value of magic_function(3)?"})
```
```
> Entering new AgentExecutor chain...

Invoking: `magic_function` with `{'input': 3}`
responded: [{'text': '<thinking>\nThe user has asked for the value of magic_function applied to the input 3. Looking at the available tools, magic_function is the relevant one to use here, as it takes an integer input and returns an integer output.\n\nThe magic_function has one required parameter:\n- input (integer)\n\nThe user has directly provided the value 3 for the input parameter. Since the required parameter is present, we can proceed with calling the function.\n</thinking>', 'type': 'text'}, {'id': 'toolu_01HsTheJPA5mcipuFDBbJ1CW', 'input': {'input': 3}, 'name': 'magic_function', 'type': 'tool_use'}]

5
Therefore, the value of magic_function(3) is 5.

> Finished chain.
{'input': 'what is the value of magic_function(3)?',
 'output': 'Therefore, the value of magic_function(3) is 5.'}
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-10 11:54:51 -04:00
Bagatur
9514bc4d67
core[minor], ...: add tool calls message (#18947)
core[minor], langchain[patch], openai[minor], anthropic[minor], fireworks[minor], groq[minor], mistralai[minor]

```python
class ToolCall(TypedDict):
    name: str
    args: Dict[str, Any]
    id: Optional[str]

class InvalidToolCall(TypedDict):
    name: Optional[str]
    args: Optional[str]
    id: Optional[str]
    error: Optional[str]

class ToolCallChunk(TypedDict):
    name: Optional[str]
    args: Optional[str]
    id: Optional[str]
    index: Optional[int]


class AIMessage(BaseMessage):
    ...
    tool_calls: List[ToolCall] = []
    invalid_tool_calls: List[InvalidToolCall] = []
    ...


class AIMessageChunk(AIMessage, BaseMessageChunk):
    ...
    tool_call_chunks: Optional[List[ToolCallChunk]] = None
    ...
```
Important considerations:
- Parsing logic occurs within different providers;
- ~Changing output type is a breaking change for anyone doing explicit
type checking;~
- ~Langsmith rendering will need to be updated:
https://github.com/langchain-ai/langchainplus/pull/3561~
- ~Langserve will need to be updated~
- Adding chunks:
- ~AIMessage + ToolCallsMessage = ToolCallsMessage if either has
non-null .tool_calls.~
- Tool call chunks are appended, merging when having equal values of
`index`.
  - additional_kwargs accumulate the normal way.
- During streaming:
- ~Messages can change types (e.g., from AIMessageChunk to
AIToolCallsMessageChunk)~
- Output parsers parse additional_kwargs (during .invoke they read off
tool calls).

Packages outside of `partners/`:
- https://github.com/langchain-ai/langchain-cohere/pull/7
- https://github.com/langchain-ai/langchain-google/pull/123/files

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-09 18:41:42 -05:00
Bagatur
e5913c8758
langchain[patch]: Release 0.1.15 (#20237) 2024-04-09 21:50:32 +00:00
Eugene Yurtsev
fe35e13083
langchain[patch]: Update unit test (#20228)
This unit test likely fails validation by the openai client.

The newer openai library seems to be doing more validation, so the existing
test fails since http_client needs to be an httpx instance.
2024-04-09 16:44:23 -04:00
Casper da Costa-Luis
b972f394c8
langchain[patch]: make BooleanOutputParser check words not substrings (#20064)
- **Description**: fixes BooleanOutputParser detecting sub-words ("NOW
this is likely (YES)" -> `True`, not `AmbiguousError`); a word-matching sketch follows below
- **Issue(s)**: fixes #11408 (follow-up to #17810)
- **Dependencies**: None
- **GitHub handle**: @casperdcl
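A minimal sketch of word-level matching (illustrative only, not the actual parser implementation):
```python
import re


def parse_boolean(text: str, true_val: str = "YES", false_val: str = "NO") -> bool:
    """Match whole words so 'NOW this is likely (YES)' parses as True."""
    words = re.findall(r"[A-Z]+", text.upper())
    has_true, has_false = true_val in words, false_val in words
    if has_true == has_false:  # neither or both present -> ambiguous
        raise ValueError(f"Ambiguous or missing boolean value in {text!r}")
    return has_true
```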

<!-- if unreviewd after a few days, @-mention one of baskaryan, efriis,
eyurtsev, hwchase17 -->

- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [ ] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-09 20:43:31 +00:00
jeff kit
ac42e96e4c
community[patch], langchain[minor]: Enhance Tencent Cloud VectorDB, langchain: make Tencent Cloud VectorDB self query retriever compatible (#19651)
- make Tencent Cloud VectorDB support metadata filtering.
- implement delete function for Tencent Cloud VectorDB.
- support both Langchain Embedding model and Tencent Cloud VDB embedding
model.
- Tencent Cloud VectorDB support filter search keyword, compatible with
langchain filtering syntax.
- add Tencent Cloud VectorDB TranslationVisitor, now work with self
query retriever.
- more documentation.

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-04-09 16:50:48 +00:00
Piyush Jain
cd7abc495a
community[minor]: add neptune analytics graph (#20047)
Replacement for PR
[#19772](https://github.com/langchain-ai/langchain/pull/19772).

---------

Co-authored-by: Dave Bechberger <dbechbe@amazon.com>
Co-authored-by: bechbd <bechbd@users.noreply.github.com>
2024-04-09 09:20:59 -05:00
Bagatur
5ae0e687b3
docs: use standard openai params (#20160)
Part of #20085
2024-04-08 10:56:53 -05:00
Marlene
2f03bc397e
Community: Updating Azure Retriever and Docs to be Azure AI Search instead of Azure Cognitive Search (#19925)
Last year Microsoft [changed the
name](https://learn.microsoft.com/en-us/azure/search/search-what-is-azure-search)
of Azure Cognitive Search to Azure AI Search. This PR updates the
Langchain Azure Retriever API and it's associated docs to reflect this
change. It may be confusing for users to see the name Cognitive here and
AI in the Microsoft documentation which is why this is needed. I've also
added a more detailed example to the Azure retriever doc page.

There are more places that need a similar update but I'm breaking it up
so the PRs are not too big 😄 Fixing my errors from the previous PR.

Twitter: @marlene_zw

Two new tests added to test backward compatibility in
`libs/community/tests/integration_tests/retrievers/test_azure_cognitive_search.py`

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
2024-04-08 11:12:41 -04:00
Christophe Bornet
7e5c1905b1
core[minor]: Add async aformat_document method (#20037) 2024-04-05 10:29:53 -04:00
Chris Papademetrious
a954dedb77
langchain[minor]: enhance LocalFileStore to allow directory/file permissions to be specified (#18857)
**Description:**
The `LocalFileStore` class can be used to create an on-disk
`CacheBackedEmbeddings` cache. However, the default `umask` settings
gives file/directory write permissions only to the original user. Once
the cache directory is created by the first user, other users cannot
write their own cache entries into the directory.

To make the cache usable by multiple users, this pull request updates
the `LocalFileStore` constructor to allow the permissions for newly
created directories and files to be specified. The specified permissions
override the default `umask` values.

For example, when configured as follows:

```python
file_store = LocalFileStore(temp_dir, chmod_dir=0o770, chmod_file=0o660)
```

then "user" and "group" (but not "other") have permissions to access the
store, which means:

* Anyone in our group could contribute embeddings to the cache.
* If we implement cache cleanup/eviction in the future, anyone in our
group could perform the cleanup.

The default values for the `chmod_dir` and `chmod_file` parameters is
`None`, which retains the original behavior of using the default `umask`
settings.

**Issue:**
Implements enhancement #18075.

**Testing:**
I updated the `LocalFileStore` unit tests to test the permissions.

---------

Signed-off-by: chrispy <chrispy@synopsys.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-04-04 16:40:16 +00:00
Jacob Lee
605c3f23e1
docs: reorg and visual refresh (#19765)
- put use cases in main sidebar
- move modules to own sidebar, rename components
- cleanup lcel section
- cleanup guides
- update font, cell highlighting

---------

Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-04-04 00:58:36 -07:00
Erick Friis
f0d5b59962
core[patch]: remove requests (#19891)
Removes required usage of `requests` from `langchain-core`, all of which
has been deprecated.

- removes Tracer V1 implementations
- removes old `try_load_from_hub` github-based hub implementations

Removal done in a way where imports will still succeed, and usage will
fail with a `RuntimeError`.
2024-04-02 20:28:10 +00:00
Max Jakob
22dbcc9441
langchain[patch]: fix ElasticsearchStore reference for self query (#19907)
Initializing self query with an ElasticsearchStore from the partners
packages failed previously, see
https://github.com/langchain-ai/langchain/discussions/18976.
2024-04-02 08:39:12 -07:00
Nuno Campos
2ae6dcdf01
core: Assign missing message ids in BaseChatModel (#19863)
- This ensures ids are stable across streamed chunks
- Multiple messages in batch call get separate ids
- Also fix ids being dropped when combining message chunks

2024-04-02 01:18:36 +00:00
Bagatur
c4eb841c37
langchain[patch]: Release 0.1.14 (#19839) 2024-03-31 21:44:01 -07:00
Guangdong Liu
b6ebddbacc
langchain[patch]: Upgrade openai's sdk and solve some interface adaptation problems. #19548 (#19785)
- #19548
- @baskaryan @eyurtsev PTAL

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-31 21:35:38 +00:00
Kenneth Choe
f98d7f7494
langchain[minor], community[minor]: add CrossEncoderReranker with HuggingFaceCrossEncoder and SagemakerEndpointCrossEncoder (#13687)
- **Description:** Support reranking based on cross encoder models
available from HuggingFace.
      - Added `CrossEncoder` schema
- Implemented `HuggingFaceCrossEncoder` and
`SagemakerEndpointCrossEncoder`
- Implemented `CrossEncoderReranker` that performs similar functionality
to `CohereRerank` (a usage sketch follows below)
- Added `cross-encoder-reranker.ipynb` to demonstrate how to use it.
Please let me know if anything else needs to be done to make it visible
on the table-of-contents navigation bar on the left, or on the card list
on [retrievers documentation
page](https://python.langchain.com/docs/integrations/retrievers).
  - **Issue:** N/A
  - **Dependencies:** None other than the existing ones.
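A usage sketch based on the description above (import paths and parameters assumed; requires sentence-transformers; verify against the notebook/docs):
```python
from langchain.retrievers.document_compressors import CrossEncoderReranker
from langchain_community.cross_encoders import HuggingFaceCrossEncoder

cross_encoder = HuggingFaceCrossEncoder(model_name="BAAI/bge-reranker-base")
compressor = CrossEncoderReranker(model=cross_encoder, top_n=3)
# Plug the compressor into a ContextualCompressionRetriever to rerank the
# documents returned by any base retriever, similar to CohereRerank.
```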

---------

Co-authored-by: Kenny Choe <kchoe@amazon.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-31 20:51:31 +00:00
DrKroll
c4da8d0813
langchain[patch]: load ReadFileTool (#14301)
---------

Co-authored-by: Dr. Simon Kroll <krolls@fida.de>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-30 00:46:24 +00:00
Ahmed Moubtahij
f5d4ce840f
langchain[patch]: Simplify ensemble retriever (#14427)
- **Description:** code simplification to improve readability and remove
unnecessary memory allocations.
  - **Tag maintainer**: @baskaryan, @eyurtsev, @hwchase17.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-29 16:49:49 -07:00
Guangdong Liu
cd55d587c2
langchain[patch]: Upgrade openai's sdk and solve some interface adaptation problems. (#19548)
- **Issue:** close #19534
2024-03-29 01:25:17 -07:00
Sachin Paryani
25c9f3d1d1
community[patch]: Support Streaming in Azure Machine Learning (#18246)
- [x] **PR title**: "community: Support streaming in Azure ML and few
naming changes"

- [x] **PR message**:
- **Description:** Added support for streaming for azureml_endpoint.
Also, renamed and AzureMLEndpointApiType.realtime to
AzureMLEndpointApiType.dedicated. Also, added new classes
CustomOpenAIChatContentFormatter and CustomOpenAIContentFormatter and
updated the classes LlamaChatContentFormatter and LlamaContentFormatter
to now show a deprecated warning message when instantiated.

---------

Co-authored-by: Sachin Paryani <saparan@microsoft.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 23:38:20 +00:00
xiaohuanshu
ecb11a4a32
langchain[patch]: fix BaseChatMemory get output data error with extra key (#18117)
**Description:** At times, BaseChatMemory._get_input_output may acquire
some extra keys such as 'intermediate_steps' (agent_executor with
return_intermediate_steps set to True) and 'messages'
(agent_executor.iter with memory). In these instances, _get_input_output
can raise an error due to the presence of multiple keys. The 'output'
field should be used as the default field in these cases.
**Issue:** #16791
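A rough sketch of the selection logic described above (illustrative only, not the actual source):
```python
from typing import Optional


def pick_output_key(outputs: dict, output_key: Optional[str] = None) -> str:
    """Decide which key to store in memory when a chain returns several outputs."""
    if output_key is not None:
        return output_key
    if len(outputs) == 1:
        return next(iter(outputs))
    # Extra keys such as 'intermediate_steps' or 'messages' may be present;
    # fall back to the 'output' field instead of raising.
    if "output" in outputs:
        return "output"
    raise ValueError(f"Could not determine the output key from {sorted(outputs)}")
```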
2024-03-28 16:38:08 -07:00
Davide Menini
824dbc49ee
langchain[patch]: add template_tool_response arg to create_json_chat (#19696)
In this small PR I added the `template_tool_response` arg to the
`create_json_chat` function, so that users can customize this prompt in
case of need.
Thanks for your reviews!

---------

Co-authored-by: taamedag <Davide.Menini@swisscom.com>
2024-03-28 13:59:54 -07:00
Nilanjan De
239dd7c0c0
langchain[patch]: Use map() and avoid "ValueError: max() arg is an empty sequence" in MergerRetriever (#18679)
- **Issue:** When passing an empty list to MergerRetriever it fails with
error: ValueError: max() arg is an empty sequence

- **Description:** We have a use case where we dynamically select
retrievers and use MergerRetriever for merging the output of the
retrievers. We faced this issue when the retriever_docs list is empty.
Adding a default 0 for cases when retriever_docs is an empty list to
avoid "ValueError: max() arg is an empty sequence". Also, changed to use
map() which is more than twice as fast compared to the current
implementation.
```
import timeit
# Sample retriever_docs with varying lengths of sublists
retriever_docs = [[i for i in range(j)] for j in range(1, 1000)]
# First code snippet
code1 = '''
max_docs = max(len(docs) for docs in retriever_docs)
'''
# Second code snippet
code2 = '''
max_docs = max(map(len, retriever_docs), default=0)
'''
# Benchmarking
time1 = timeit.timeit(stmt=code1, globals=globals(), number=10000)
time2 = timeit.timeit(stmt=code2, globals=globals(), number=10000)
# Output
print(f"Execution time for code snippet 1: {time1} seconds")
print(f"Execution time for code snippet 2: {time2} seconds")
```

- **Dependencies:** none
2024-03-27 23:52:57 -07:00
Juan Jose Miguel Ovalle Villamil
51baa1b5cf
langchain[patch]: fix-cohere-reranker-rerank-method with cohere v5 (#19486)
#### Description
Fixed the following error with `rerank` method from `CohereRerank`:
```
File .../langchain/retrievers/document_compressors/cohere_rerank.py, line 79
---> 79 results = self.client.rerank(
     80     query, docs, model, top_n=top_n, max_chunks_per_doc=max_chunks_per_doc
     81 )
     82 result_dicts = []
     83 for res in results.results:

TypeError: BaseCohere.rerank() takes 1 positional argument but 4 positional arguments (and 2 keyword-only arguments) were given
```
This was easily fixed going from this:
```
   def rerank(
        self,
        documents: Sequence[Union[str, Document, dict]],
        query: str,
        *,
        model: Optional[str] = None,
        top_n: Optional[int] = -1,
        max_chunks_per_doc: Optional[int] = None,
    ) -> List[Dict[str, Any]]:
         ...
        if len(documents) == 0:  # to avoid empty api call
            return []
        docs = [
            doc.page_content if isinstance(doc, Document) else doc for doc in documents
        ]
        model = model or self.model
        top_n = top_n if (top_n is None or top_n > 0) else self.top_n
        results = self.client.rerank(
            query, docs, model, top_n=top_n, max_chunks_per_doc=max_chunks_per_doc
        )
        result_dicts = []
        for res in results:
            result_dicts.append(
                {"index": res.index, "relevance_score": res.relevance_score}
            )
        return result_dicts
```
to this:
```
    def rerank(
        self,
        documents: Sequence[Union[str, Document, dict]],
        query: str,
        *,
        model: Optional[str] = None,
        top_n: Optional[int] = -1,
        max_chunks_per_doc: Optional[int] = None,
    ) -> List[Dict[str, Any]]:
         ...
        if len(documents) == 0:  # to avoid empty api call
            return []
        docs = [
            doc.page_content if isinstance(doc, Document) else doc for doc in documents
        ]
        model = model or self.model
        top_n = top_n if (top_n is None or top_n > 0) else self.top_n
        results = self.client.rerank(
            query=query, documents=docs, model=model, top_n=top_n, max_chunks_per_doc=max_chunks_per_doc <-------------
        )
        result_dicts = []
        for res in results.results:  <-------------
            result_dicts.append(
                {"index": res.index, "relevance_score": res.relevance_score}
            )
        return result_dicts
```
#### Unit & Integration tests
I added a unit test to check the behaviour of `rerank`. Also fixed the
original integration test which was failing.

#### Format & Linting
Everything worked properly with `make lint_diff`, `make format_diff` and
`make format`. However I noticed an error coming from other part of the
library when doing `make lint`:

```
(langchain-py3.9) ➜  langchain git:(master) make format
[ "." = "" ] || poetry run ruff format .
1636 files left unchanged
[ "." = "" ] || poetry run ruff --select I --fix .
(langchain-py3.9) ➜  langchain git:(master) make lint
./scripts/check_pydantic.sh .
./scripts/lint_imports.sh
poetry run ruff .
[ "." = "" ] || poetry run ruff format . --diff
1636 files already formatted
[ "." = "" ] || poetry run ruff --select I .
[ "." = "" ] || mkdir -p .mypy_cache && poetry run mypy . --cache-dir .mypy_cache
langchain/agents/openai_assistant/base.py:252: error: Argument "file_ids" to "create" of "Assistants" has incompatible type "Optional[Any]"; expected "Union[list[str], NotGiven]"  [arg-type]
langchain/agents/openai_assistant/base.py:374: error: Argument "file_ids" to "create" of "AsyncAssistants" has incompatible type "Optional[Any]"; expected "Union[list[str], NotGiven]"  [arg-type]
Found 2 errors in 1 file (checked 1634 source files)
make: *** [Makefile:65: lint] Error 1
```

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-28 06:32:03 +00:00
William FH
5c41f4083e
[Evals] Fix function calling support (#19658)
Current implementation is overzealous in validating chat datasets

Fixes
[#langsmith-sdk:557](https://github.com/langchain-ai/langsmith-sdk/issues/557)
2024-03-27 17:23:35 -07:00
Evgenii Zheltonozhskii
5b1f9c6d3a
infra: Consistent lxml requirements (#19520)
Update the dependency for lxml to be consistent among different
packages; should fix
https://github.com/langchain-ai/langchain/issues/19040
2024-03-27 20:27:59 +00:00
Christophe Bornet
9954c6a38e
langchain[minor]: Add async methods to EncoderBackedStore (#19597)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-27 17:36:36 +00:00
Christophe Bornet
7c2578bd55
langchain[patch]: Add async methods to EmbeddingRouterChain (#19603) 2024-03-26 14:33:36 -07:00
Christophe Bornet
b3d7b5a653
langchain[patch]: Add async methods to TimeWeightedVectorStoreRetriever (#19606) 2024-03-26 14:03:47 -07:00
Christophe Bornet
a7274f006e
langchain[patch]: Add async methods to VectorstoreIndexCreator (#19582) 2024-03-26 13:57:13 -07:00
Yuki Watanabe
cfecbda48b
community[minor]: Allow passing allow_dangerous_deserialization when loading LLM chain (#18894)
### Issue
Recently, the new `allow_dangerous_deserialization` flag was introduced
to prevent unsafe model deserialization that relies on pickle
without the user's notice (#18696). Since then, some LLMs like Databricks
require passing this flag as true to instantiate the model.

However, this breaks the existing functionality of loading such LLMs within
a chain using the `load_chain` method, because the underlying loader
function
[load_llm_from_config](f96dd57501/libs/langchain/langchain/chains/loading.py (L40))
(and load_llm) ignores keyword arguments passed in.

### Solution
This PR fixes this issue by propagating the
`allow_dangerous_deserialization` argument to the class loader iff the
LLM class has that field.

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-03-26 11:07:55 -04:00
Shotaro Sano
55c624a694
infra: Resolve the endless dependency resolution during the build of dev.Dockerfile by copying poetry.lock (#19465)
## Description
This PR proposes a modification to the `libs/langchain/dev.Dockerfile`
configuration to copy the `libs/langchain/poetry.lock` into the working
directory. The change aims to address the issue where the Poetry install
command, the last command in the `dev.Dockerfile`, takes an excessively
long time (hours), and to ensure the reproducibility of the poetry
environment in the devcontainer.

## Problem
The `dev.Dockerfile`, prepared for development environments such as
`.devcontainer`, encounters an unending dependency resolution when
attempting the Poetry installation.

### Steps to Reproduce
Execute the following build command: 

```bash
docker build -f libs/langchain/dev.Dockerfile .
```

### Current Behavior
The Docker build process gets stuck at the following step, which, in my
experience, did not conclude even after an entire night:

```
 => [langchain-dev-dependencies 4/6] COPY libs/community/ ../community/                                                                                0.9s
 => [langchain-dev-dependencies 5/6] COPY libs/text-splitters/ ../text-splitters/                                                                      0.0s
 => [langchain-dev-dependencies 6/6] RUN poetry install --no-interaction --no-ansi --with dev,test,docs                                               12.3s
 => => # Updating dependencies                                                                                                                             
 => => # Resolving dependencies...  
```

### Expected Behavior
The Docker build completes in a realistic timeframe. By applying this
PR, the build finishes within a few minutes.

### Analysis
The complexity of LangChain's dependencies has reached a point where
Poetry is required to resolve dependencies akin to threading a needle.
Consequently, poetry install fails to complete in a practical timeframe.

## Solution
The solution for dependency resolution is already recorded in
`libs/langchain/poetry.lock`, so we can use it. When copying
`project.toml` and `poetry.toml`, the `poetry.lock` located in the same
directory should also be copied.

```diff
# Copy only the dependency files for installation
-COPY libs/langchain/pyproject.toml libs/langchain/poetry.toml ./
+COPY libs/langchain/pyproject.toml libs/langchain/poetry.toml libs/langchain/poetry.lock ./
```

## Note
I am not intimately familiar with the historical context of the
`dev.Dockerfile` and thus do not know why `poetry.lock` has not been
copied until now. It might have been an oversight, or perhaps dependency
resolution used to complete quickly even without the `poetry.lock` file
in the past. However, if there are deliberate reasons why copying
`poetry.lock` is not advisable, please just close this PR.
2024-03-26 10:54:53 -04:00
Christophe Bornet
999365186b
langchain[major]: Use InMemoryVectorStore by default in VectorstoreIndexCreator (#19575)
This is a small breaking change but I think it should be done as:
* No external dependency needs to be installed anymore for the default
to work
* It is vendor-neutral
2024-03-26 10:01:23 -04:00
Aayush Kataria
03c38005cb
community[patch]: Fixing some caching issues for AzureCosmosDBSemanticCache (#18884)
Fixing some issues for AzureCosmosDBSemanticCache
- Added the entry for "AzureCosmosDBSemanticCache" which was missing in
langchain/cache.py
- Added application name when creating the MongoClient for the
AzureCosmosDBVectorSearch, for tracking purposes.

@baskaryan, can you please review this PR, we need this to go in asap.
These are just small fixes which we found today in our testing.
2024-03-25 19:06:17 -07:00
billytrend-cohere
63343b4987
cohere[patch]: add cohere as a partner package (#19049)
Description: adds support for langchain_cohere

---------

Co-authored-by: Harry M <127103098+harry-cohere@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-25 20:23:47 +00:00
Zachary Wilkins
e1a6341940
langchain: Passthrough batch_size on index()/aindex() calls (#19443)
**Description:** This change passes through `batch_size` to
`add_documents()`/`aadd_documents()` on calls to `index()` and
`aindex()` such that the documents are processed in the expected batch
size.
**Issue:** #19415
**Dependencies:** N/A
**Twitter handle:** N/A
2024-03-25 11:58:29 -04:00
Christophe Bornet
63898dbda0
langchain[patch]: Use async memory in Chain when needed (#19429) 2024-03-24 23:49:00 -07:00
Christophe Bornet
1b813fe6fe
langchain[patch]: Add async methods to VectorStoreRetrieverMemory (#19408) 2024-03-22 15:44:24 -07:00
ccurme
8a2528c34a
[langchain] fix OpenAIAssistantRunnable.create_assistant (#19081)
- **Description:** OpenAI assistants support some pre-built tools (e.g.,
`"retrieval"` and `"code_interpreter"`) and expect these as `{"type":
"code_interpreter"}`. This may have been upset by
https://github.com/langchain-ai/langchain/pull/18935
- **Issue:** https://github.com/langchain-ai/langchain/issues/19057
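A sketch of the expected tool format (illustrative values; requires an OpenAI API key at runtime):
```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

assistant = OpenAIAssistantRunnable.create_assistant(
    name="code helper",
    instructions="You are a helpful assistant.",
    tools=[{"type": "code_interpreter"}],  # pre-built tools are passed as dicts
    model="gpt-4-1106-preview",
)
```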
2024-03-22 13:23:19 -04:00
Bagatur
d95ea3550e
langchain[patch]: Release 0.1.13 (#19351) 2024-03-20 18:25:12 +00:00
Eugene Yurtsev
aa9ccca775
langchain[patch]: Add tests for indexing (#19342)
This PR adds tests for the indexing API
2024-03-20 13:00:22 -04:00
mackong
d9396bdec1
langchain[patch]: add stop for various non-openai agents (#19333)
* Description: add stop for various non-openai agents.
* Issue: N/A
* Dependencies: N/A

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-20 11:34:10 -04:00
Chris Papademetrious
305d74c67a
core: implement a batch_size parameter for CacheBackedEmbeddings (#18070)
**Description:**

Currently, `CacheBackedEmbeddings` computes vectors for *all* uncached
documents before updating the store. This pull request updates the
embedding computation loop to compute embeddings in batches, updating
the store after each batch.
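A usage sketch (assuming the new batch_size is exposed through from_bytes_store; the stand-in embedding model is only there so the example runs offline):
```python
from typing import List

from langchain.embeddings import CacheBackedEmbeddings
from langchain.storage import LocalFileStore
from langchain_core.embeddings import Embeddings


class ToyEmbeddings(Embeddings):
    """Stand-in embedding model so the example needs no network access."""

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [[float(len(t)), 1.0] for t in texts]

    def embed_query(self, text: str) -> List[float]:
        return [float(len(text)), 1.0]


store = LocalFileStore("./embedding_cache")
cached = CacheBackedEmbeddings.from_bytes_store(
    ToyEmbeddings(), store, namespace="demo", batch_size=32
)
# The on-disk cache is now updated after every batch of 32 documents,
# instead of only after all vectors have been computed.
vectors = cached.embed_documents([f"doc {i}" for i in range(100)])
```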

I noticed this when I tried `CacheBackedEmbeddings` on our 30k document
set and the cache directory hadn't appeared on disk after 30 minutes.

The motivation is to minimize compute/data loss when problems occur:

* If there is a transient embedding failure (e.g. a network outage at
the embedding endpoint triggers an exception), at least the completed
vectors are written to the store instead of being discarded.
* If there is an issue with the store (e.g. no write permissions), the
condition is detected early without computing (and discarding!) all the
vectors.

**Issue:**
Implements enhancement #18026.

**Testing:**
I was unable to run unit tests; details in [this
post](https://github.com/langchain-ai/langchain/discussions/15019#discussioncomment-8576684).

---------

Signed-off-by: chrispy <chrispy@synopsys.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-19 18:55:43 +00:00
William FH
89af30807b
Permit function eval on llm data type (#19287) 2024-03-19 11:53:50 -07:00
Frederico Wu
f36418a5b0
langchain: creating assistants with file_ids (#19199)
Changing OpenAIAssistantRunnable.create_assistant to send the `file_ids`
parameter to openai.beta.assistants.create

Co-authored-by: Frederico Wu <fred.diaswu@coxautoinc.com>
2024-03-18 21:34:03 -07:00
Simon Stone
58c7687174
langchain: preserve document metadata in FlashrankRerank (#19148)
**Description:** Preserves document metadata in `FlashrankRerank`
    - **Issue:** #19142
    - **Dependencies:** None
    - **Twitter handle:** n/a

---------

Co-authored-by: Simon Stone <simon.stone@dartmouth.edu>
2024-03-19 04:15:18 +00:00
Shotaro Sano
ca9c8c58ea
text-splitters, infra: fix libs/langchain/dev.Dockerfile so that the text-splitter directory is copied before poetry installation (#19214)
## Description
This PR modifies the settings in `libs/langchain/dev.Dockerfile` to
ensure that the `text-splitters` directory is copied before the poetry
installation process begins.

Without this modification, the `docker build` command fails for
`dev.Dockerfile`, preventing the setup of some development environments,
including `.devcontainer`.

## Bug Details

### Repro
Run the following command:

```bash
docker build -f libs/langchain/dev.Dockerfile .
```

### Current Behavior
The docker build command fails, raising the following error:

```
...
 => [langchain-dev-dependencies 4/5] COPY libs/community/ ../community/                                                                                0.4s
 => ERROR [langchain-dev-dependencies 5/5] RUN poetry install --no-interaction --no-ansi --with dev,test,docs                                          1.1s
------                                                                                                                                                      
 > [langchain-dev-dependencies 5/5] RUN poetry install --no-interaction --no-ansi --with dev,test,docs:
#13 0.970 
#13 0.970 Directory ../text-splitters does not exist
------
executor failed running [/bin/sh -c poetry install --no-interaction --no-ansi --with dev,test,docs]: exit code: 1
```

### Expected Behavior
The `docker build` command successfully completes without the poetry
error.

### Analysis
The error occurs because the `text-splitters` directory is not copied
into the build environment, unlike the other packages under the `libs`
directory. I suspect that the `COPY` setting was overlooked since
`text-splitters` was separated in a recent PR.

## Fix
Add the following lines to the `libs/langchain/dev.Dockerfile`:

```dockerfile
# Copy the text-splitters library for installation
COPY libs/text-splitters/ ../text-splitters/
```
2024-03-18 20:45:35 -07:00
Erick Friis
95904fe443
langchain[patch]: update base imports to core (#19248)
still deprecated, but was misleading before
2024-03-19 03:17:07 +00:00
Eugene Yurtsev
0ddfe7fc9d
langchain[patch]: make hub work with older langchainhub versions (#19076)
Make it work with older clients
2024-03-15 15:37:52 -07:00
Eugene Yurtsev
745d2476a2
langchain: upgrade mypy (#19163)
Update mypy in langchain
2024-03-15 16:37:09 -04:00
Erick Friis
781aee0068
community, langchain, infra: revert store extended test deps outside of poetry (#19153)
Reverts langchain-ai/langchain#18995

Because it makes installing dependencies in python 3.11 extended testing
take 80 minutes
2024-03-15 17:10:47 +00:00
Erick Friis
9e569d85a4
community, langchain, infra: store extended test deps outside of poetry (#18995)
poetry can't reliably handle resolving the number of optional "extended
test" dependencies we have. If we instead just rely on pip to install
extended test deps in CI, this isn't an issue.
2024-03-15 05:55:30 +00:00
Nuno Campos
508f75853c
core[patch]: Change structured prompt lc id to match js (#19099) 2024-03-14 20:02:52 -07:00
Erick Friis
873d06c009
langchain[patch]: release 0.1.12 (#18999) 2024-03-13 00:22:21 +00:00
Roshan Santhosh
acf1ecc081
langchain[patch]: update llm_router.py (#18865)
Issue : _call method of LLMRouterChain uses predict_and_parse, which is
slated for deprecation.

Description : Instead of using predict_and_parse, this replaces it with
individual predict and parse functions.
2024-03-11 22:30:07 -07:00
Bagatur
e0e688a277
core[minor]: generation info on msg (#18592)
related to #16403 #17188
2024-03-12 04:43:17 +00:00
Tomaz Bratanic
a28be31a96
Switch to md5 for deduplication in neo4j integrations (#18846)
Deduplicate documents using an MD5 hash of the page_content. Also allows for
custom deduplication with the graph ingestion method by providing a metadata
id attribute

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-03-09 13:28:55 -08:00
Erick Friis
b48865bf94
langchain[patch]: attach hub metadata (#18830) 2024-03-08 18:40:49 -08:00
Erick Friis
a88f62ec3c
langchain[patch]: getattr import from langchain.chains (#18160) 2024-03-08 10:36:14 -08:00
Bagatur
3e29c04213
core[minor]: add BaseMessage.response_metadata (#18699) 2024-03-08 09:35:56 -08:00
Bagatur
bc6249c889
langchain[patch]: runnable agent streaming param (#18761)
Usage:

```python
agent = RunnableAgent(runnable=runnable, .., stream_runnable=False)
```
or for convenience
```python
agent_executor = AgentExecutor(agent=agent, ..., stream_runnable=False)
```
2024-03-07 20:53:53 -08:00
Eugene Yurtsev
e188d4ecb0
Add dangerous parameter to requests tool (#18697)
The tools are already documented as dangerous. Not clear whether adding
an opt-in parameter is necessary or not
2024-03-07 15:10:56 -05:00
Dounx
ad48f55357
community[minor]: add Yuque document loader (#17924)
This pull request adds support for loading documents from Yuque with LangChain.

Yuque is a professional cloud-based knowledge base for team
collaboration in documentation.

Website: https://www.yuque.com
OpenAPI: https://www.yuque.com/yuque/developer/openapi
2024-03-05 15:54:07 -08:00
Hech
6a08134661
community[patch], langchain[minor]: Add retriever self_query and score_threshold in DingoDB (#18106) 2024-03-05 15:47:29 -08:00
Bagatur
5fc67ca2c7
langchain[patch]: Release 0.1.11 (#18558) 2024-03-04 23:58:34 -08:00
William FH
ca1d42785d
Evals wording (#18542) 2024-03-04 16:32:33 -08:00
William FH
30ccc009e6
[Evals] Support list examples by dataset version tag (#18534)
previously only supported by timestamp
2024-03-04 14:23:32 -08:00
William FH
1eec67e8fe
Evaluate on Version (#18471) 2024-03-03 17:47:35 -08:00
Harrison Chase
73d653324f
[Evals] Session-level feedback (#18463)
Co-authored-by: William Fu-Hinthorn <13333726+hinthornw@users.noreply.github.com>
2024-03-03 17:18:29 -08:00
mackong
b89d9fc177
langchain[patch]: add tools renderer for various non-openai agents (#18307)
- **Description:** add tools_renderer for various non-openai agents, so
tools can be rendered in different ways for your LLM.
  - **Issue:** N/A
  - **Dependencies:** N/A

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-03-03 14:25:12 -08:00
Aayush Kataria
7c2f3f6f95
community[minor]: Adding Azure Cosmos Mongo vCore Vector DB Cache (#16856)
Description:

This pull request introduces several enhancements for Azure Cosmos
Vector DB, primarily focused on improving caching and search
capabilities using Azure Cosmos MongoDB vCore Vector DB. Here's a
summary of the changes:

- **AzureCosmosDBSemanticCache**: Added a new cache implementation
called AzureCosmosDBSemanticCache, which utilizes Azure Cosmos MongoDB
vCore Vector DB for efficient caching of semantic data. Added
comprehensive test cases for AzureCosmosDBSemanticCache to ensure its
correctness and robustness. These tests cover various scenarios and edge
cases to validate the cache's behavior.
- **HNSW Vector Search**: Added HNSW vector search functionality in the
CosmosDB Vector Search module. This enhancement enables more efficient
and accurate vector searches by utilizing the HNSW (Hierarchical
Navigable Small World) algorithm. Added corresponding test cases to
validate the HNSW vector search functionality in both
AzureCosmosDBSemanticCache and AzureCosmosDBVectorSearch. These tests
ensure the correctness and performance of the HNSW search algorithm.
- **LLM Caching Notebook** - The notebook now includes a comprehensive
example showcasing the usage of the AzureCosmosDBSemanticCache. This
example highlights how the cache can be employed to efficiently store
and retrieve semantic data. Additionally, the example provides default
values for all parameters used within the AzureCosmosDBSemanticCache,
ensuring clarity and ease of understanding for users who are new to the
cache implementation.
 
 @hwchase17,@baskaryan, @eyurtsev,
2024-03-03 14:04:15 -08:00
Erick Friis
f96dd57501
langchain[patch]: release 0.1.10 (#18410) 2024-03-02 01:48:57 +00:00
Petteri Johansson
6c1989d292
community[minor], langchain[minor], docs: Gremlin Graph Store and QA Chain (#17683)
- **Description:** 
New feature: Gremlin graph-store and QA chain (including docs).
Compatible with Azure CosmosDB.
  - **Dependencies:** 
  no changes
2024-03-01 12:21:14 -08:00
Christophe Bornet
177f51c7bd
community: Use default load() implementation in doc loaders (#18385)
Following https://github.com/langchain-ai/langchain/pull/18289
2024-03-01 14:46:52 -05:00
Leonid Ganeline
a89f007947
docs: runnable module description (#17966)
Added a module description. Added `batch` description.
2024-03-01 10:01:32 -08:00
William FH
1deb8cadd5
Add dataset version info (#18299) 2024-02-29 22:00:44 -08:00
Bagatur
0d7fb5f60a
langchain[patch]: langchain-text-splitters dep (#18357) 2024-02-29 18:48:55 -08:00
Bagatur
5efb5c099f
text-splitters[minor], langchain[minor], community[patch], templates, docs: langchain-text-splitters 0.0.1 (#18346) 2024-02-29 18:33:21 -08:00
Hasan
15d1b73a00
Add optional output_parser param in create_react_agent (#18320)
**Description:** Add facility to pass the optional output parser to
customize the parsing logic

---------

Co-authored-by: hasan <hasan@m2sys.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-29 12:35:43 -08:00
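A hedged sketch of how the new optional parameter could be passed; `llm`, `tools`, and `prompt` are assumed to be defined elsewhere, and the parser shown is only one possible choice:

```python
from langchain.agents import AgentExecutor, create_react_agent
from langchain.agents.output_parsers import ReActSingleInputOutputParser

# llm, tools, and prompt are assumed to exist; any custom AgentOutputParser could be used here.
agent = create_react_agent(
    llm=llm,
    tools=tools,
    prompt=prompt,
    output_parser=ReActSingleInputOutputParser(),
)
agent_executor = AgentExecutor(agent=agent, tools=tools)
```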
Bagatur
a6f0506aaf
docs: query analysis use case (#17766)
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-29 12:33:49 -08:00
William FH
8af4425abd
[Evaluation] Config Fix (#18231) 2024-02-29 00:06:46 -08:00
William De Vena
23722e3653
langchain[patch]: Invoke callback prior to yielding token (#18282)
## PR title
langchain[patch]: Invoke callback prior to yielding

## PR message
Description: Invoke on_llm_new_token callback prior to yielding token in
_stream and _astream methods in langchain/tests/fake_chat_model.
Issue: https://github.com/langchain-ai/langchain/issues/16913
Dependencies: None
Twitter handle: None
2024-02-28 16:15:02 -05:00
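A self-contained sketch of the pattern this change enforces, with the callback invoked before each token is yielded (the fake chat model internals themselves are not reproduced here):

```python
from typing import Callable, Iterator

def stream_tokens(tokens: list[str], on_llm_new_token: Callable[[str], None]) -> Iterator[str]:
    for token in tokens:
        on_llm_new_token(token)  # the callback fires first...
        yield token              # ...and only then is the token yielded

for tok in stream_tokens(["Hello", ", ", "world"], on_llm_new_token=print):
    pass
```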
Harrison Chase
d7c607ca00
core[minor]: move document compressor base (#17910) 2024-02-26 17:20:50 -08:00
Bagatur
1e8ab83d7b
langchain[patch], core[patch], openai[patch], fireworks[minor]: ChatFireworks.with_structured_output (#18078)
<img width="1192" alt="Screenshot 2024-02-24 at 3 39 39 PM"
src="https://github.com/langchain-ai/langchain/assets/22008038/1cf74774-a23f-4b06-9b9b-85dfa2f75b63">
2024-02-26 12:46:39 -08:00
Luan Fernandes
e867557936
[docs] Update doc-string for buffer_as_messages method in ConversationBufferWindowMemory (#18136)
minor fix stated in #18080
2024-02-26 11:46:43 -08:00
Bagatur
767523f364
core[patch], langchain[patch], templates: move openai functions parsers to core (#18060)
![Screenshot 2024-02-23 at 7 48 03
PM](https://github.com/langchain-ai/langchain/assets/22008038/e5540c4d-0020-4ece-869f-ae19db2a1f3f)
2024-02-26 11:12:53 -08:00
Nuno Campos
b1d9ce541d
Add BaseMessage.id (#17835)
2024-02-26 09:27:47 -08:00
Harrison Chase
935aefa8db
add run name for query constructor (#18101)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-26 08:17:05 -08:00
Mohammad Mohtashim
719a1cde75
langchain[patch]: Update doc-string for a method in ConversationBufferWindowMemory (#18090)
A minor doc fix stated in #18080
2024-02-26 10:15:02 -05:00
Simon Schmidt
2716d58603
langchain: Import from langchain_core in langchain.smith to avoid deprecation warning (#18129)
Avoids deprecation warning that triggered at import time, e.g. with
`python -c 'import langchain.smith'`


/opt/venv/lib/python3.12/site-packages/langchain/callbacks/__init__.py:37:
LangChainDeprecationWarning: Importing this callback from langchain is
deprecated. Importing it from langchain will no longer be supported as
of langchain==0.2.0. Please import from langchain-community instead:

    `from langchain_community.callbacks import base`.

To install langchain-community run `pip install -U langchain-community`.
2024-02-26 10:14:10 -05:00
Neli Hateva
a01e8473f8
community[patch]: Fix GraphSparqlQAChain so that it works with Ontotext GraphDB (#15009)
- **Description:** Introduce a new parameter `graph_kwargs` to
`RdfGraph` - parameters used to initialize the `rdflib.Graph` if
`query_endpoint` is set. Also, do not set
`rdflib.graph.DATASET_DEFAULT_GRAPH_ID` as default value for the
`rdflib.Graph` `identifier` if `query_endpoint` is set.
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** N/A
2024-02-25 19:05:21 -08:00
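A hedged sketch of the new parameter; the endpoint URL and identifier are placeholders, and `graph_kwargs` is assumed to be forwarded to `rdflib.Graph` as described above:

```python
from langchain_community.graphs import RdfGraph

graph = RdfGraph(
    query_endpoint="http://localhost:7200/repositories/my-repo",  # placeholder endpoint
    graph_kwargs={"identifier": "http://example.org/my-graph"},   # passed through to rdflib.Graph
)
```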
dokato
5afb242161
langchain[patch]: Make BooleanOutputParser more robust to non-binary responses (#17810)
- **Description:** I encountered this error when I tried to use
LLMChainFilter. Even if the message differs only slightly, like `Not relevant
(NO)`, it results in an error. It has already been reported here:
https://github.com/langchain-ai/langchain/issues/. This change hopefully
makes the parser more robust.
- **Issue:**  #11408 
- **Dependencies:** No
- **Twitter handle:** dokatox
2024-02-25 18:48:33 -08:00
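A hedged illustration of the intended behavior; the exact parsing rules live in `BooleanOutputParser`, so the second call below is only the expected outcome after this change:

```python
from langchain.output_parsers.boolean import BooleanOutputParser

parser = BooleanOutputParser()  # defaults to true_val="YES", false_val="NO"
print(parser.parse("NO"))                 # False
print(parser.parse("Not relevant (NO)"))  # expected to parse as False instead of raising
```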
Bagatur
d9e6ca2279
langchain[patch]: Release 0.1.9 (#17999) 2024-02-22 21:45:30 -08:00
Reid Falconer
0534ba5a7d
langchain[patch]: return formatted SPARQL query on demand (#11263)
- **Description:** Added the `return_sparql_query` feature to the
`GraphSparqlQAChain` class, allowing users to get the formatted SPARQL
query along with the chain's result.
  - **Issue:** NA
  - **Dependencies:** None

Note: I've ensured that the PR passes linting and testing by running
make format, make lint, and make test locally.

I have added a test for the integration (which relies on network access)
and I have added an example to the notebook showing its use.
2024-02-22 17:03:26 -08:00
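A hedged usage sketch; `llm` and `graph` are assumed to be defined elsewhere, and the input key and the key under which the formatted query is returned are assumptions:

```python
from langchain.chains import GraphSparqlQAChain

# llm and graph (an RdfGraph) are assumed to exist.
chain = GraphSparqlQAChain.from_llm(llm, graph=graph, return_sparql_query=True)
result = chain.invoke({"query": "What is the home page of Tim Berners-Lee?"})
# The formatted SPARQL query is expected alongside the answer in the result dict.
```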
Bagatur
b0cfb86c48
langchain[minor]: openai tools structured_output_chain (#17296)
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-22 15:42:47 -08:00
Bagatur
b5f8cf9509
core[minor], openai[minor], langchain[patch]: BaseLanguageModel.with_structured_output #17302)
```python
class Foo(BaseModel):
  bar: str

structured_llm = ChatOpenAI().with_structured_output(Foo)
```

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-22 15:33:34 -08:00
Christophe Bornet
4f88a5130e
langchain[patch]: Support langchain-astradb AstraDBVectorStore in self-query retriever (#17728)
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-22 13:19:27 -08:00
Leonid Ganeline
ed0b7c3b72
docs: added community modules descriptions (#17827)
API Reference: Several `community` modules (like the
[adapter](https://api.python.langchain.com/en/latest/community_api_reference.html#module-langchain_community.adapters)
module) are missing descriptions. This happened when langchain was split into
the core, langchain, and community packages.
- Copied module descriptions from other packages
- Fixed several descriptions to use a consistent format.
2024-02-21 16:18:36 -08:00
Zachary Toliver
29ee0496b6
community[patch]: Allow override of 'fetch_schema_from_transport' in the GraphQL tool (#17649)
- **Description:** In order to override the bool value of
"fetch_schema_from_transport" in the GraphQLAPIWrapper, a
"fetch_schema_from_transport" value needed to be added to the
"_EXTRA_OPTIONAL_TOOLS" dictionary in load_tools in the "graphql" key.
The parameter "fetch_schema_from_transport" must also be passed in to
the GraphQLAPIWrapper to allow reading of the value when creating the
client. Passing as an optional parameter is probably best to avoid
breaking changes. This change is necessary to support GraphQL instances
that do not support fetching schema, such as TigerGraph. More info here:
[TigerGraph GraphQL Schema
Docs](https://docs.tigergraph.com/graphql/current/schema)
  - **Threads handle:** @zacharytoliver

---------

Co-authored-by: Zachary Toliver <zt10191991@hotmail.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-21 15:32:43 -08:00
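A hedged sketch of the new option; the endpoint URL is a placeholder and the `gql` dependency for the GraphQL tool must be installed:

```python
from langchain.agents import load_tools

tools = load_tools(
    ["graphql"],
    graphql_endpoint="https://your-graphql-endpoint.com/graphql",  # placeholder
    fetch_schema_from_transport=False,  # for servers (e.g. TigerGraph) that can't serve the schema
)
```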
Nuno Campos
223e5eff14
Add JSON representation of runnable graph to serialized representation (#17745)
Sent to LangSmith

2024-02-20 14:51:09 -08:00
Nuno Campos
07ee41d284
Cache calls to create_model for get_input_schema and get_output_schema (#17755)
2024-02-19 13:26:42 -08:00
Bagatur
da7bca2178
langchain[patch]: bump community to 0.0.21 (#17754) 2024-02-19 12:58:32 -08:00
Bagatur
441448372d
langchain[patch]: Release 0.1.8 (#17751) 2024-02-19 11:27:37 -08:00
William FH
64743dea14
core[patch], community[patch], langchain[patch], experimental[patch], robocorp[patch]: bump LangSmith 0.1.* (#17567) 2024-02-15 23:17:59 -07:00
Christophe Bornet
387cacb881
community[minor]: Add async methods to AstraDBChatMessageHistory (#17572) 2024-02-15 09:48:42 -05:00
Mo Latif
f3e4a0e27f
langchain[patch]: Update Chain prep_inputs docstring (#17575)
**Description**: @eyurtsev Following up on #16644 to fix the docstring,
because `prep_inputs` is no longer doing any validation.
2024-02-15 09:44:35 -05:00
William FH
fc1617c44f
Update contact link (#17563) 2024-02-14 22:37:32 -08:00
Christophe Bornet
ca2d4078f3
community: Add async methods to AstraDBCache (#17415)
Adds async methods to AstraDBCache
2024-02-14 23:10:08 -05:00
Rakib Hosen
5ce1827d31
community[patch]: fix import in language parser (#17538)
- **Description:** Resolves an import error in language_parser.py during
"from langchain.langchain.text_splitter import Language".
- **Issue:** #17536
- **Dependencies:** NO
- **Twitter handle:** @iRakibHosen

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-14 11:11:23 -08:00
shibuiwilliam
c502736841
infra: add test for ensemble retriever to ensure multiple retrievers (#8401)
Add tests to ensemble retriever to ensure it works with combination of
multiple retrievers

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-13 21:22:03 -08:00
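A hedged sketch of combining multiple retrievers, which is what the new test exercises; `bm25_retriever` and `vector_retriever` are assumed to be defined elsewhere:

```python
from langchain.retrievers import EnsembleRetriever

ensemble = EnsembleRetriever(
    retrievers=[bm25_retriever, vector_retriever],  # assumed to exist
    weights=[0.5, 0.5],                             # relative weight of each retriever's results
)
docs = ensemble.invoke("what did the author say about X?")
```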
Mo Latif
50b48a8e6a
langchain[patch]: Invoke chain prep_inputs and prep_outputs inside try block to catch validation errors (#16644)
- **Description:** The callback manager can't catch chain input or output
validation errors because `prepare_input` and `prepare_output` are not
part of the try/raise logic; this PR fixes that logic.
 
  - **Issue:** #15954
2024-02-13 22:23:11 -05:00
Christophe Bornet
a8f530bc4d
Add async methods to CacheBackedEmbeddings (#16873)
Adds async methods to CacheBackedEmbeddings
2024-02-13 22:16:27 -05:00
Bagatur
50de7a31f0
langchain[patch]: structured output chain nits (#17291) 2024-02-13 16:45:29 -08:00
JongRok BAEK
8d6cc90fc5
langchain.core : Use shallow copy for schema manipulation in JsonOutputParser.get_format_instructions (#17162)
- **Description :**  

Fix: Use shallow copy for schema manipulation in get_format_instructions

Prevents side effects on the original schema object by using a
dictionary comprehension for a safer and more controlled manipulation of
schema key-value pairs, enhancing code reliability.

  - **Issue:**  #17161 
  - **Dependencies:** None
  -  **Twitter handle:** None
2024-02-13 13:30:53 -08:00
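A minimal, self-contained sketch of the idea (names are illustrative, not the parser's internals): build a reduced shallow copy with a dictionary comprehension instead of mutating the caller's schema dict:

```python
schema = {"title": "Joke", "type": "object", "properties": {"setup": {"type": "string"}}}

# The comprehension produces a new dict, so the original schema is untouched.
reduced = {k: v for k, v in schema.items() if k not in ("title", "type")}

assert "title" in schema
assert reduced == {"properties": {"setup": {"type": "string"}}}
```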
Bagatur
de7c4b277c
langchain[patch]: Release 0.1.7 (#17482) 2024-02-13 13:13:04 -08:00
Wendy H. Chun
2df7387c91
langchain[patch]: Fix to avoid infinite loop during collapse chain in map reduce (#16253)
- **Description:** Depending on the `token_max` used in
`load_summarize_chain`, the collapse step could loop forever when documents
cannot be collapsed under `token_max`. This change does not affect the
existing behavior, but it gives users an option to avoid the
situation.
  - **Issue:** https://github.com/langchain-ai/langchain/issues/16251
  - **Dependencies:** None
  - **Twitter handle:** None

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-13 10:55:32 -08:00
Taha Khabouss
15baffc484
langchain[patch]: Ensure that the Elasticsearch Query Translator functions accurately w… (#17044)
Description:
Addresses a problem where the Date type within an Elasticsearch
SelfQueryRetriever would encounter difficulties in generating a valid
query.

Issue: #17042

---------

Co-authored-by: Max Jakob <max.jakob@elastic.co>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-13 10:54:24 -08:00
Bagatur
3925071dd6
langchain[patch], templates[patch]: fix multi query retriever, web research retriever (#17434)

Fixes #17352
2024-02-12 22:52:07 -08:00
mhavey
1bbb64d956
community[minor], langchain[minor]: Add Neptune Rdf graph and chain (#16650)
**Description**: This PR adds a chain for Amazon Neptune graph database
RDF format. It complements the existing Neptune Cypher chain. The PR
also includes a Neptune RDF graph class to connect to, introspect, and
query a Neptune RDF graph database from the chain. A sample notebook is
provided under docs that demonstrates the overall effect: invoking the
chain to make natural language queries against Neptune using an LLM.

**Issue**: This is a new feature
 
**Dependencies**: The RDF graph class depends on the AWS boto3 library
if using IAM authentication to connect to the Neptune database.

---------

Co-authored-by: Piyush Jain <piyushjain@duck.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-12 21:30:20 -08:00
Massimiliano Pronesti
df7cbd6fbb
community[minor]: add FlashRank ranker (#16785)
**Description:** This PR adds support for
[flashrank](https://github.com/PrithivirajDamodaran/FlashRank) for
reranking as alternative to Cohere.

I'm not sure `libs/langchain` is the right place for this change. At
first, I wanted to put it under `libs/community`. All the compressors
were under `libs/langchain/retrievers/document_compressors` though. Hope
this makes sense!
2024-02-12 20:00:52 -08:00
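A hedged usage sketch for the new compressor; `base_retriever` is assumed to exist, and the `flashrank` package must be installed:

```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import FlashrankRerank

compressor = FlashrankRerank()  # requires the flashrank package
retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,  # assumed to exist
)
docs = retriever.invoke("What did the speaker say about safety?")
```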
Theo / Taeyoon Kang
1987f905ed
core[patch]: Support .yml extension for YAML (#16783)
- **Description:**

[AS-IS] When dealing with a YAML file, the extension must be .yaml.

[TO-BE] The .yml extension should also be handled: operating systems no longer
constrain extension length, and .yml remains a common alternative extension
for YAML files.

Rejecting it is like treating a .jpg extension as an error even though JPEG is supported.

  - **Issue:** - 

  - **Dependencies:**
no dependencies required for this change,
2024-02-12 19:57:20 -08:00
Bagatur
22638e5927
community[patch]: give reranker default client val (#17289) 2024-02-12 17:21:53 -08:00
Tomaz Bratanic
19a1c9183d
Improve graph cypher qa prompt (#17380)
Unlike vector results, the LLM has to completely trust the context of a
graph database result, even if it doesn't provide the whole context. We
tried instructions alone, but it seems that adding a single example is
the way to solve this issue.
2024-02-11 21:15:46 -08:00
Erick Friis
3a2eb6e12b
infra: add print rule to ruff (#16221)
Added noqa for existing prints. We can slowly remove them; the rule will
prevent more from being introduced.
2024-02-09 16:13:30 -08:00
Charlie Marsh
24c0bab57b
infra, multiple: Upgrade configuration for Ruff v0.2.0 (#16905)
## Summary

This PR upgrades LangChain's Ruff configuration in preparation for
Ruff's v0.2.0 release. (The changes are compatible with Ruff v0.1.5,
which LangChain uses today.) Specifically, we're now warning when
linter-only options are specified under `[tool.ruff]` instead of
`[tool.ruff.lint]`.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-09 14:28:02 -08:00
Bagatur
65e97c9b53
infra: mv SQLDatabase tests to community (#17276) 2024-02-08 17:05:43 -08:00
Bagatur
72c7af0bc0
langchain[patch]: undo redis cache import (#17275) 2024-02-08 16:39:55 -08:00
Bagatur
8bad4157ad
langchain[patch]: Release 0.1.6 (#17133) 2024-02-08 16:25:06 -08:00
Bagatur
02ef9164b5
langchain[patch]: expose cohere rerank score, add parent doc param (#16887) 2024-02-08 16:07:18 -08:00
joelsprunger
3984f6604f
langchain: adds recursive json splitter (#17144)
- **Description:** This adds a recursive JSON splitter class to the
existing text_splitters, as well as unit tests.
- **Issue:** Splitting text from structured data can cause issues: if you
have a large nested JSON object and split it as regular text, you may
end up losing the structure of the JSON. To mitigate this you can split
the nested JSON into large chunks and overlap them, but this causes
unnecessary text processing, and there will still be times when the
nested JSON is so big that the chunks get separated from their parent
keys.

As an example you wouldn't want the following to be split in half:
```shell
{'val0': 'DFWeNdWhapbR',
 'val1': {'val10': 'QdJo',
          'val11': 'FWSDVFHClW',
          'val12': 'bkVnXMMlTiQh',
          'val13': 'tdDMKRrOY',
          'val14': 'zybPALvL',
          'val15': 'JMzGMNH',
          'val16': {'val160': 'qLuLKusFw',
                    'val161': 'DGuotLh',
                    'val162': 'KztlcSBropT',
-----------------------------------------------------------------------split-----
                    'val163': 'YlHHDrN',
                    'val164': 'CtzsxlGBZKf',
                    'val165': 'bXzhcrWLmBFp',
                    'val166': 'zZAqC',
                    'val167': 'ZtyWno',
                    'val168': 'nQQZRsLnaBhb',
                    'val169': 'gSpMbJwA'},
          'val17': 'JhgiyF',
          'val18': 'aJaqjUSFFrI',
          'val19': 'glqNSvoyxdg'}}
```
Any llm processing the second chunk of text may not have the context of
val1, and val16 reducing accuracy. Embeddings will also lack this
context and this makes retrieval less accurate.

Instead you want it to be split into chunks that retain the json
structure.
```shell
{'val0': 'DFWeNdWhapbR',
 'val1': {'val10': 'QdJo',
          'val11': 'FWSDVFHClW',
          'val12': 'bkVnXMMlTiQh',
          'val13': 'tdDMKRrOY',
          'val14': 'zybPALvL',
          'val15': 'JMzGMNH',
          'val16': {'val160': 'qLuLKusFw',
                    'val161': 'DGuotLh',
                    'val162': 'KztlcSBropT',
                    'val163': 'YlHHDrN',
                    'val164': 'CtzsxlGBZKf'}}}
```
and
```shell
{'val1':{'val16':{
                    'val165': 'bXzhcrWLmBFp',
                    'val166': 'zZAqC',
                    'val167': 'ZtyWno',
                    'val168': 'nQQZRsLnaBhb',
                    'val169': 'gSpMbJwA'},
          'val17': 'JhgiyF',
          'val18': 'aJaqjUSFFrI',
          'val19': 'glqNSvoyxdg'}}
```
This recursive JSON text splitter does this. Values that contain a list
can be converted to dicts first by using split(..., convert_lists=True);
otherwise long lists will not be split, and you may end up with chunks
larger than the max chunk size.

In my testing, large JSON objects could be split into small chunks, with:
- increased question answering accuracy
- retrieval queries that use fewer tokens, since chunks are smaller


- **Dependencies:** json import added to text_splitter.py, and random
added to the unit test
  - **Twitter handle:** @joelsprunger

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-08 13:45:34 -08:00
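A hedged usage sketch; the splitter was added to the existing text splitters (later packaged as langchain-text-splitters), and the chunk-size value below is arbitrary:

```python
from langchain_text_splitters import RecursiveJsonSplitter  # originally in langchain.text_splitter

splitter = RecursiveJsonSplitter(max_chunk_size=300)
json_data = {"val0": "DFWeNdWhapbR", "val1": {"val10": "QdJo", "val16": {"val160": "qLuLKusFw"}}}

# convert_lists=True turns lists into dicts first so long lists can also be split.
chunks = splitter.split_json(json_data=json_data, convert_lists=True)
print(chunks)
```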
Cailin Wang
a210a8bc53
langchain[patch]: Fix create_retriever_tool missing on_retriever_end Document content (#16933)
- **Description:** Fix create_retriever_tool's missing Document content in
on_retriever_end, caused by a missing callbacks parameter.
  - **Twitter handle:** @CailinWang_

---------

Co-authored-by: root <root@Bluedot-AI>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-08 13:18:43 -08:00
Neli Hateva
9bb5157a3d
langchain[patch], community[patch]: Fixes in the Ontotext GraphDB Graph and QA Chain (#17239)
- **Description:** Fixes in the Ontotext GraphDB Graph and QA Chain
related to the error handling in case of invalid SPARQL queries, for
which `prepareQuery` doesn't throw an exception, but the server returns
400 and the query is indeed invalid
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @OntotextGraphDB
2024-02-08 12:05:43 -08:00
Bagatur
852973d616
langchain[minor], core[minor]: update json, pydantic parser. add openai-json structured output runnable (#16914) 2024-02-08 11:59:06 -08:00
Eugene Yurtsev
780e84ae79
community[minor]: SQLDatabase Add fetch mode cursor, query parameters, query by selectable, expose execution options, and documentation (#17191)
- **Description:** Improve `SQLDatabase` adapter component to promote
code re-use, see
[suggestion](https://github.com/langchain-ai/langchain/pull/16246#pullrequestreview-1846590962).
  - **Needed by:** GH-16246
  - **Addressed to:** @baskaryan, @cbornet 

## Details
- Add `cursor` fetch mode
- Accept SQL query parameters
- Accept both `str` and SQLAlchemy selectables as query expression
- Expose `execution_options`
- Documentation page (notebook) about `SQLDatabase` [^1]
See [About
SQLDatabase](https://github.com/langchain-ai/langchain/blob/c1c7b763/docs/docs/integrations/tools/sql_database.ipynb).

[^1]: Apparently there hasn't been any yet?

---------

Co-authored-by: Andreas Motl <andreas.motl@crate.io>
2024-02-07 22:23:43 -05:00
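A hedged sketch of the new fetch mode and bound query parameters; the SQLite URL, table, and column are placeholders:

```python
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
result = db.run(
    "SELECT * FROM users WHERE name = :name",  # placeholder table/column
    parameters={"name": "alice"},              # bound query parameters
    fetch="cursor",                            # new fetch mode, alongside "all" and "one"
)
```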
Dmitry Kankalovich
f92738a6f6
langchain[minor], community[minor], core[minor]: Async Cache support and AsyncRedisCache (#15817)
* This PR adds async methods to the LLM cache. 
* Adds an implementation using Redis called AsyncRedisCache.
* Adds a docker compose file at /docker to help spin up Docker
* Updates redis tests to use a context manager so flushing always happens by default
2024-02-07 22:06:09 -05:00
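A hedged wiring sketch; the Redis URL is a placeholder, and the async client is passed positionally since the exact parameter name isn't shown above:

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import AsyncRedisCache
from redis.asyncio import Redis  # requires the redis package

set_llm_cache(AsyncRedisCache(Redis.from_url("redis://localhost:6379")))  # placeholder URL
```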
Henry
2281f00198
langchain: Standardize output_parser.py across all agent types for custom FORMAT_INSTRUCTIONS (#17168)
- **Description:** 
This PR standardizes the `output_parser.py` file across all agent types
to ensure a uniform parsing mechanism is implemented. It introduces a
cohesive structure and common interface for output parsing, facilitating
easier modifications and extensions by users. The standardized approach
enhances maintainability and scalability of the codebase by providing a
consistent pattern for output parsing, which can be easily understood
and utilized across different agent types.

This PR builds upon the foundation set by a previously merged PR, which
focused exclusively on standardizing the `output_parser.py` for the
`conversational_agent` ([PR
#16945](https://github.com/langchain-ai/langchain/pull/16945)). With
this new update, I extend the standardization efforts to encompass
`output_parser.py` files across all agent types. This enhancement not
only unifies the parsing mechanism across the board but also introduces
the flexibility for users to incorporate custom `FORMAT_INSTRUCTIONS`.

  - **Issue:** 
https://github.com/langchain-ai/langchain/issues/10721
https://github.com/langchain-ai/langchain/issues/4044

  - **Dependencies:**
No new dependencies required for this change

  - **Twitter handle:**
With my github user is enough. Thanks

I hope you accept my PR.
2024-02-07 13:46:17 -08:00
Nuno Campos
65798289a4
core[minor]: Use batched tracing in sdk (#16305)
Remove threadpool executor usage in langchain tracer, this is now
handled by sdk
2024-02-07 12:10:58 -08:00
Tomaz Bratanic
302989a2b1
allow optional newline in the action responses of JSON Agent parser (#17186)
Based on my experiments, the newline isn't always there, so we can make
the regex slightly more robust by allowing an optional newline after the
backticks.
2024-02-07 10:26:14 -08:00
Erick Friis
22b6a03a28
infra: read min versions (#17135) 2024-02-06 16:05:11 -08:00
Junyoung Park
1ed73f1992
community[minor]: Add SelfQueryRetriever support to PGVector (#16991)
- **Description:** Add SelfQueryRetriever support to PGVector
  - **Issue:** -
  - **Dependencies:** -
  - **Twitter handle:** -

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-06 10:50:50 -08:00
Henry
eaeb8a5f71
langchain[patch]: output_parser.py in conversation_chat is customizable (#16945)
**Description:**
With this modification, users can customize the `FORMAT_INSTRUCTIONS`
template, allowing them to create their own prompts

As it is happening in
[this](https://github.com/langchain-ai/langchain/issues/10721) issue,
the `FORMAT_INSTRUCTIONS` is not customizable for the output parser
unless you create your own `ConvoOutputParser` class. To avoid this, a
modification was made, creating a `format_instructions` variable that
users can easily customize after initializing the agent.

For example:
```
agent = initialize_agent(
    agent = AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    tools = tools,
    llm = llm_agent,
    verbose = True,
    max_iterations = 3,
    early_stopping_method = 'generate',
    memory = b_w_memory,
    handle_parsing_errors = True,
    agent_kwargs={
        'system_message':PREFIX,
        'human_message':SUFFIX,
        'template_tool_response':TEMPLATE_TOOL_RESPONSE,
        }
)
agent.agent.output_parser.format_instructions = "MY CUSTOM FORMAT INSTRUCTIONS"
print(agent.agent.output_parser.get_format_instructions())
MY CUSTOM FORMAT INSTRUCTIONS
```

Other parameters like `system_message`, `human_message`, or
`template_tool_response` are already customizable and with this PR, the
last parameter `FORMAT_INSTRUCTIONS` in
`langchain.agents.conversational_chat.prompt` can be modified.


**Issue:**
https://github.com/langchain-ai/langchain/issues/10721

**Dependencies:**
No new dependencies required for this change

**Twitter handle:**
With my github user is enough. Thanks

I hope you accept my PR.

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-02-06 09:41:53 -08:00
Jimmy Moore
912210ac19
core[patch]: fix _sql_record_manager mypy for #17048 (#17073)
- **Description:** Add relevant type annotations for relevant session
and query objects to resolve mypy errors when `# type: ignore` comments
are removed.
  - **Issue:** #17048
  - **Dependencies:** None,
  - **Twitter handle:** [clesiemo3](https://twitter.com/clesiemo3)
 
I attempted to solve the `UpsertionRecord` ignore, but from my understanding
it would require adding a deprecated plugin or moving completely to
SQLAlchemy 2.0+. I'm assuming this is not something desired at this
point in time.
2024-02-05 16:18:40 -08:00
T Cramer
e022bfaa7d
langchain: add partial parsing support to JsonOutputToolsParser (#17035)
- **Description:** Add partial parsing support to JsonOutputToolsParser
- **Issue:**
[16736](https://github.com/langchain-ai/langchain/issues/16736)

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-05 14:18:30 -08:00
calvinweb
dcf973c22c
Langchain: json_chat don't need stop sequenes (#16335)
This is a PR about #16334.
The stop sequences aren't meaningful in `json_chat` because it relies on JSON
to work, not completions.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-05 14:18:16 -08:00
Harrison Chase
4eda647fdd
infra: add -p to mkdir in lint steps (#17013)
Previously, if this did not find a mypy cache then it wouldn't run; this
makes it always run.

Adds mypy ignore comments for existing uncaught issues to unblock other PRs.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
2024-02-05 11:22:06 -08:00
Christophe Bornet
2ef69fe11b
Add async methods to BaseChatMessageHistory and BaseMemory (#16728)
Adds:
   * async methods to BaseChatMessageHistory
   * async methods to ChatMessageHistory
   * async methods to BaseMemory
   * async methods to BaseChatMemory
   * async methods to ConversationBufferMemory
   * tests of ConversationBufferMemory's async methods

  **Twitter handle:** cbornet_
2024-02-05 13:20:28 -05:00
Bagatur
2a510c71a0
core[patch]: doc init positional args (#16854) 2024-02-02 10:24:16 -08:00
Bagatur
c29e9b6412
core[patch]: fix chat prompt partial messages placeholder var (#16918) 2024-02-02 10:23:37 -08:00
William FH
e02efd513f
core[patch]: Hide aliases when serializing (#16888)
Currently, if you dump an object initialized with an alias, we'll still
dump the secret values since they're retained in the kwargs
2024-02-01 17:55:37 -08:00
William FH
131c043864
Fix loading of ImagePromptTemplate (#16868)
We didn't override the namespace of the ImagePromptTemplate, so it is
listed as being in langchain.schema

This updates the mapping to let the loader deserialize.

Alternatively, we could make a slight breaking change and update the
namespace of the ImagePromptTemplate, since we haven't broadly
publicized/documented it yet.
2024-02-01 17:54:04 -08:00
Leonid Ganeline
c2ca6612fe
refactor langchain.prompts.example_selector (#15369)
The `langchain.prompts.example_selector` namespace [still holds several
artifacts](https://api.python.langchain.com/en/latest/langchain_api_reference.html#module-langchain.prompts)
that belong to `community`. If they moved to
`langchain_community.example_selectors`, the `langchain.prompts`
namespace would be effectively removed, which is great.
- Moved a class and a function to `langchain_community`.

Note:
- Previously, the `langchain.prompts.example_selector` artifacts were
moved into `langchain_core.example_selectors`. See the flattened
namespace (`.prompts` was removed)!
Similar flattening was implemented for `langchain_core` as
`langchain_core.example_selectors`.

---------

Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-01 12:05:57 -08:00
Qihui Xie
c5b01ac621
community[patch]: support LIKE comparator (full text match) in Qdrant (#12769)
**Description:** 
Support [Qdrant full text match
filtering](https://qdrant.tech/documentation/concepts/filtering/#full-text-match)
by adding Comparator.LIKE to QdrantTranslator.
2024-02-01 11:03:25 -08:00
Christophe Bornet
78a1af4848
langchain[patch]: Add async methods to MultiVectorRetriever (#16878)
Adds async support to multi vector retriever
2024-02-01 10:33:06 -08:00
Bagatur
9e7d9f9390
infra: bump langchain min test reqs (#16882) 2024-02-01 08:16:30 -08:00
Bagatur
db442c635b
langchain[patch]: Release 0.1.5 (#16881) 2024-02-01 08:10:29 -08:00
Christophe Bornet
a0ec045495
Add async methods to BaseStore (#16669)
- **Description:**

The BaseStore methods are currently blocking. Some implementations
(AstraDBStore, RedisStore) would benefit from having async methods.
Also once we have async methods for BaseStore, we can implement the
async `aembed_documents` in CacheBackedEmbeddings to cache the
embeddings asynchronously.

* adds async methods amget, amset, amdelete and ayield_keys to
BaseStore
  * implements the async methods for InMemoryStore
  * adds tests for InMemoryStore async methods

- **Twitter handle:** cbornet_
2024-01-31 17:10:47 -08:00
Christophe Bornet
af8c5c185b
langchain[minor],community[minor]: Add async methods in BaseLoader (#16634)
Adds:
* methods `aload()` and `alazy_load()` to interface `BaseLoader`
* implementation for class `MergedDataLoader `
* support for class `BaseLoader` in async function `aindex()` with unit
tests

Note: this is compatible with existing `aload()` methods that some
loaders already had.

**Twitter handle:** @cbornet_

---------

Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
2024-01-31 11:08:11 -08:00
Bagatur
b0347f3e2b
docs: add csv use case (#16756) 2024-01-30 09:39:46 -08:00
Neli Hateva
c95facc293
langchain[minor], community[minor]: Implement Ontotext GraphDB QA Chain (#16019)
- **Description:** Implement Ontotext GraphDB QA Chain
  - **Issue:** N/A
  - **Dependencies:** N/A
  - **Twitter handle:** @OntotextGraphDB
2024-01-29 12:25:53 -08:00
chyroc
a08f9a7ff9
langchain[patch]: support OpenAIAssistantRunnable async (#15302)
fix https://github.com/langchain-ai/langchain/issues/15299

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-29 12:19:47 -08:00
Antonio Lanza
08d3fd7f2e
langchain[patch]: inconsistent results with RecursiveCharacterTextSplitter's add_start_index=True (#16583)
This PR fixes issue #16579
2024-01-25 15:50:06 -08:00
Bagatur
5df8ab574e
infra: move indexing documentation test (#16595) 2024-01-25 14:46:50 -08:00
Bagatur
f3d61a6e47
langchain[patch]: Release 0.1.4 (#16592) 2024-01-25 14:19:18 -08:00
Bagatur
ef42d9d559
core[patch], community[patch], openai[patch]: consolidate openai tool… (#16485)
… converters

One way to convert anything to an OAI function:
convert_to_openai_function
One way to convert anything to an OAI tool: convert_to_openai_tool
Corresponding bind functions on OAI models: bind_functions, bind_tools
2024-01-25 13:18:46 -08:00
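A hedged sketch of the consolidated converters; the Pydantic model is illustrative:

```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.utils.function_calling import (
    convert_to_openai_function,
    convert_to_openai_tool,
)

class GetWeather(BaseModel):
    """Get the current weather for a location."""
    location: str = Field(description="City name")

oai_function = convert_to_openai_function(GetWeather)  # {"name": "GetWeather", ...}
oai_tool = convert_to_openai_tool(GetWeather)          # {"type": "function", "function": {...}}
```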
Anders Åhsman
355ef2a4a6
langchain[patch]: Fix doc-string grammar (#16543)
- **Description:** Small grammar fix in docstring for class
`BaseCombineDocumentsChain`.
2024-01-25 10:00:06 -05:00
Bagatur
c173a69908
langchain[patch]: oai tools output parser nit (#16540)
allow positional init args
2024-01-24 16:57:16 -08:00
Martin Kolb
04651f0248
community[minor]: VectorStore integration for SAP HANA Cloud Vector Engine (#16514)
- **Description:**
This PR adds a VectorStore integration for SAP HANA Cloud Vector Engine,
which is an upcoming feature in the SAP HANA Cloud database
(https://blogs.sap.com/2023/11/02/sap-hana-clouds-vector-engine-announcement/).

  - **Issue:** N/A
- **Dependencies:** [SAP HANA Python
Client](https://pypi.org/project/hdbcli/)
  - **Twitter handle:** @sapopensource

Implementation of the integration:
`libs/community/langchain_community/vectorstores/hanavector.py`

Unit tests:
`libs/community/tests/unit_tests/vectorstores/test_hanavector.py`

Integration tests:
`libs/community/tests/integration_tests/vectorstores/test_hanavector.py`

Example notebook:
`docs/docs/integrations/vectorstores/hanavector.ipynb`

Access credentials for execution of the integration tests can be
provided to the maintainers.

---------

Co-authored-by: sascha <sascha.stoll@sap.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-01-24 14:05:07 -08:00
Ali Zendegani
80fcc50c65
langchain[patch]: Minor Fix: Enable Passing custom_headers for Authentication in GraphQL Agent/Tool (#16413)
- **Description:** 

This PR aims to enhance the `langchain` library by enabling the support
for passing `custom_headers` in the `GraphQLAPIWrapper` usage within
`langchain/agents/load_tools.py`.

While the `GraphQLAPIWrapper` from the `langchain_community` module is
inherently capable of handling `custom_headers`, its current invocation
in `load_tools.py` does not facilitate this functionality.
This limitation restricts the use of the `graphql` tool with databases
or APIs that require token-based authentication.

The absence of support for `custom_headers` in this context also leads
to a lack of error messages when attempting to interact with secured
GraphQL endpoints, making debugging and troubleshooting more
challenging.

This update modifies the `load_tools` function to correctly handle
`custom_headers`, thereby allowing secure and authenticated access to
GraphQL services requiring tokens.

Example usage after the proposed change:
```python
tools = load_tools(
    ["graphql"],
    graphql_endpoint="https://your-graphql-endpoint.com/graphql",
    custom_headers={"Authorization": f"Token {api_token}"},
)
```
  - **Issue:** None,
  - **Dependencies:** None,
  - **Twitter handle:** None
2024-01-23 19:19:53 -08:00
Krista Pratico
0e2e7d8b83
langchain[patch]: allow passing client with OpenAIAssistantRunnable (#16486)
- **Description:** This addresses the issue tagged below where if you
try to pass your own client when creating an OpenAI assistant, a
pydantic error is raised:

Example code:

```python
import openai
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

client = openai.OpenAI()
interpreter_assistant = OpenAIAssistantRunnable.create_assistant(
    name="langchain assistant",
    instructions="You are a personal math tutor. Write and run code to answer math questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
    client=client
)

```

Error:
`pydantic.v1.errors.ConfigError: field "client" not yet prepared, so the
type is still a ForwardRef. You might need to call
OpenAIAssistantRunnable.update_forward_refs()`

It additionally updates type hints and docstrings to indicate that an
AzureOpenAI client is permissible as well.

  - **Issue:** https://github.com/langchain-ai/langchain/issues/15948
  - **Dependencies:** N/A
2024-01-23 18:48:29 -08:00
Gianfranco Demarco
c69f599594
langchain[patch]: Extract _aperform_agent_action from _aiter_next_step from AgentExecutor (#15707)
- **Description:** extract _aperform_agent_action from _aiter_next_step in the
AgentExecutor class to allow for easier overriding. Also extracted logic from
_iter_next_step into a new method _perform_agent_action for consistency.
- **Issue:** #15706

Closes #15706
2024-01-23 18:22:09 -08:00
i-w-a
95ee69a301
langchain[patch]: In HTMLHeaderTextSplitter set default encoding to utf-8 (#16372)
- **Description:** The HTMLHeaderTextSplitter Class now explicitly
specifies utf-8 encoding in the part of the split_text_from_file method
that calls the HTMLParser.
- **Issue:** Prevents garbled characters caused by encoding differences in
HTML files (the problem mainly affects non-English text; I noticed it
with Japanese).
  - **Dependencies:** No dependencies,
  - **Twitter handle:**  @i_w__a
2024-01-23 18:20:29 -08:00
Bagatur
ba326b98d0
langchain[patch]: Release 0.1.3 (#16475) 2024-01-23 11:50:25 -08:00
Erick Friis
cfe95ab085
multiple: update langsmith dep (#16407) 2024-01-22 14:23:11 -07:00