Commit Graph

1634 Commits

Author SHA1 Message Date
Leonid Kuligin
04d134df17 marked MatchingEngine as deprecated (#18585)
- **PR title**: "community: deprecate vectorstores.MatchingEngine"
- **Description:** announces a deprecation, since this integration has been
moved to langchain_google_vertexai
2024-03-05 09:34:53 -08:00
Erick Friis
343438e872 community[patch]: deprecate community fireworks (#18544) 2024-03-05 01:04:26 +00:00
Scott Nath
b051bba1a9 community: Add you.com tool, add async to retriever, add async testing, add You tool doc (#18032)
- **Description:** finishes adding the you.com functionality including:
    - add async functions to utility and retriever
    - add the You.com Tool
    - add async testing for utility, retriever, and tool
    - add a tool integration notebook page
- **Twitter handle:** @scottnath
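A minimal usage sketch (assuming the `YouSearchTool`/`YouSearchAPIWrapper` names
from this PR and a `YDC_API_KEY` in the environment; the query is illustrative):

```python
from langchain_community.tools.you import YouSearchTool
from langchain_community.utilities.you import YouSearchAPIWrapper

# The wrapper reads YDC_API_KEY from the environment by default.
tool = YouSearchTool(api_wrapper=YouSearchAPIWrapper(num_web_results=1))
docs = tool.invoke("What is the weather in NY today?")
```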
2024-03-03 14:30:05 -08:00
William De Vena
275877980e community[patch]: Invoke callback prior to yielding token (#18447)
Description: Invoke the on_llm_new_token callback prior to yielding the token
in the _stream method in llms/vertexai.
Issue: https://github.com/langchain-ai/langchain/issues/16913
Dependencies: None
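A sketch of the pattern this series of PRs applies (the `client.stream` call is
a hypothetical stand-in for each provider's streaming API):

```python
from langchain_core.outputs import GenerationChunk

def _stream(self, prompt, stop=None, run_manager=None, **kwargs):
    for stream_resp in self.client.stream(prompt, **kwargs):  # hypothetical provider call
        chunk = GenerationChunk(text=stream_resp.text)
        if run_manager:
            # Fire the callback *before* yielding, so handlers see the
            # token even if the consumer stops iterating early.
            run_manager.on_llm_new_token(chunk.text, chunk=chunk)
        yield chunk
```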
2024-03-03 14:14:40 -08:00
William De Vena
67375e96e0 community[patch]: Invoke callback prior to yielding token (#18448)
- Description: Invoke the callback prior to yielding the token in the _stream
method in llms/tongyi.
- Issue: https://github.com/langchain-ai/langchain/issues/16913
- Dependencies: None
2024-03-03 14:14:22 -08:00
William De Vena
2087cbae64 community[patch]: Invoke callback prior to yielding token (#18449)
- Description: Invoke the callback prior to yielding the token in the _stream
method in chat_models/perplexity.
- Issue: https://github.com/langchain-ai/langchain/issues/16913
- Dependencies: None
2024-03-03 14:14:00 -08:00
William De Vena
eb04d0d3e2 community[patch]: Invoke callback prior to yielding token (#18452)
- Description: Invoke the callback prior to yielding the token in the _stream
and _astream methods in llms/anthropic.
- Issue: https://github.com/langchain-ai/langchain/issues/16913
- Dependencies: None
2024-03-03 14:13:41 -08:00
William De Vena
371bec79bc community[patch]: Invoke callback prior to yielding token (#18454)
- Description: Invoke the callback prior to yielding the token in the _stream
and _astream methods in llms/baidu_qianfan_endpoint.
- Issue: https://github.com/langchain-ai/langchain/issues/16913
- Dependencies: None
2024-03-03 14:13:22 -08:00
Aayush Kataria
7c2f3f6f95 community[minor]: Adding Azure Cosmos Mongo vCore Vector DB Cache (#16856)
Description:

This pull request introduces several enhancements for Azure Cosmos
Vector DB, primarily focused on improving caching and search
capabilities using Azure Cosmos MongoDB vCore Vector DB. Here's a
summary of the changes:

- **AzureCosmosDBSemanticCache**: Added a new cache implementation
called AzureCosmosDBSemanticCache, which utilizes Azure Cosmos MongoDB
vCore Vector DB for efficient caching of semantic data. Added
comprehensive test cases for AzureCosmosDBSemanticCache to ensure its
correctness and robustness. These tests cover various scenarios and edge
cases to validate the cache's behavior.
- **HNSW Vector Search**: Added HNSW vector search functionality in the
CosmosDB Vector Search module. This enhancement enables more efficient
and accurate vector searches by utilizing the HNSW (Hierarchical
Navigable Small World) algorithm. Added corresponding test cases to
validate the HNSW vector search functionality in both
AzureCosmosDBSemanticCache and AzureCosmosDBVectorSearch. These tests
ensure the correctness and performance of the HNSW search algorithm.
- **LLM Caching Notebook** - The notebook now includes a comprehensive
example showcasing the usage of the AzureCosmosDBSemanticCache. This
example highlights how the cache can be employed to efficiently store
and retrieve semantic data. Additionally, the example provides default
values for all parameters used within the AzureCosmosDBSemanticCache,
ensuring clarity and ease of understanding for users who are new to the
cache implementation.
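A hedged sketch of wiring up the new cache (constructor parameters are
best-effort; see the LLM-caching notebook for the full set of defaults):

```python
from langchain.globals import set_llm_cache
from langchain_community.cache import AzureCosmosDBSemanticCache
from langchain_openai import OpenAIEmbeddings

set_llm_cache(
    AzureCosmosDBSemanticCache(
        # Assumption: a MongoDB vCore connection string for your Cosmos account.
        cosmosdb_connection_string="<mongo-vcore-connection-string>",
        cosmosdb_client=None,
        embedding=OpenAIEmbeddings(),
        database_name="langchain_db",
        collection_name="llm_cache",
    )
)
```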
2024-03-03 14:04:15 -08:00
Sourav Pradhan
50abeb7ed9 community[patch]: fix Chroma add_images (#17964)
### Description

Fixed a small bug in chroma.py add_images(). Previously, when no metadata was
passed, the documents contained the base64 of the uris passed; but when
metadata was passed, the documents contained the plain string uris, which
should not be the case.

### Issue

In the add_images() method, the call to upsert() has to use the
base64-encoded "b64_texts" instead of the plain string "uris".

### Twitter handle

https://twitter.com/whitepegasus01
2024-03-01 21:55:58 +00:00
Kate Silverstein
b7c71e2e07 community[minor]: llamafile embeddings support (#17976)
* **Description:** adds `LlamafileEmbeddings` class implementation for
generating embeddings using
[llamafile](https://github.com/Mozilla-Ocho/llamafile)-based models.
Includes related unit tests and a notebook showing example usage.
* **Issue:** N/A
* **Dependencies:** N/A
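Minimal usage sketch, assuming a llamafile server is already running locally
with embeddings enabled:

```python
from langchain_community.embeddings import LlamafileEmbeddings

embedder = LlamafileEmbeddings()  # defaults to the llamafile server at http://localhost:8080
vector = embedder.embed_query("What is a llamafile?")
```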
2024-03-01 13:49:18 -08:00
Tomaz Bratanic
f6bfb969ba community[patch]: Add an option for indexed generic label when import neo4j graph documents (#18122)
The current implementation doesn't have an indexed property that would
optimize the import. I have added a `baseEntityLabel` parameter that
allows you to add a secondary node label, which has an indexed `id`
property. By default, the behaviour is identical to the previous version.

Since multi-labeled nodes are terrible for text2cypher, I removed the
secondary label from the schema representation object and string, which is
used in text2cypher.
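A hedged usage sketch (connection details are illustrative; `graph_documents`
is a previously constructed list of GraphDocument objects):

```python
from langchain_community.graphs import Neo4jGraph

graph = Neo4jGraph(url="bolt://localhost:7687", username="neo4j", password="password")
# With baseEntityLabel=True, every imported node also gets a generic
# secondary label carrying an indexed id property.
graph.add_graph_documents(graph_documents, baseEntityLabel=True)
```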
2024-03-01 12:33:52 -08:00
Arun Sathiya
4adac20d7b community[patch]: Make cohere_api_key a SecretStr (#12188)
This PR makes `cohere_api_key` in `llms/cohere` a SecretStr, so that the
API Key is not leaked when `Cohere.cohere_api_key` is represented as a
string.
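For illustration, this is the pydantic SecretStr behavior the change relies on:

```python
from pydantic import SecretStr

key = SecretStr("my-cohere-key")
print(key)                     # **********
print(key.get_secret_value())  # my-cohere-key -- reading the value is an explicit opt-in
```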

---------

Signed-off-by: Arun <arun@arun.blog>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-03-01 20:27:53 +00:00
Petteri Johansson
6c1989d292 community[minor], langchain[minor], docs: Gremlin Graph Store and QA Chain (#17683)
- **Description:** New feature: Gremlin graph store and QA chain, including
docs. Compatible with Azure CosmosDB.
- **Dependencies:** no changes
2024-03-01 12:21:14 -08:00
Ather Fawaz
a5ccf5d33c community[minor]: Add support for Perplexity chat model (#17024)
- **Description:** This PR adds support for [Perplexity AI
APIs](https://blog.perplexity.ai/blog/introducing-pplx-api).
  - **Issues:** None
  - **Dependencies:** None
  - **Twitter handle:** [@atherfawaz](https://twitter.com/AtherFawaz)
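A hedged usage sketch (the model name is illustrative; the key can also be
passed explicitly as `pplx_api_key`):

```python
from langchain_community.chat_models import ChatPerplexity

chat = ChatPerplexity(model="pplx-70b-online", temperature=0.7)  # reads PPLX_API_KEY from the environment
print(chat.invoke("How many stars are there in our galaxy?").content)
```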

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-03-01 12:19:23 -08:00
Rodrigo Nogueira
3438d2cbcc community[minor]: add maritalk chat (#17675)
**Description:** Adds the MariTalk chat model, which is based on an LLM
specially trained for Portuguese.

**Twitter handle:** @MaritacaAI
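A hedged usage sketch (parameter names are best-effort for this integration):

```python
from langchain_community.chat_models import ChatMaritalk
from langchain_core.messages import HumanMessage

chat = ChatMaritalk(api_key="<MARITALK_API_KEY>", temperature=0.7, max_tokens=100)
response = chat.invoke([HumanMessage(content="Olá! Pode me explicar o que é LangChain?")])
```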
2024-03-01 12:18:23 -08:00
sarahberenji
08fa38d56d community[patch]: fix the syntax error for the Redis generated query (#17717)
To fix the reported error:
https://github.com/langchain-ai/langchain/discussions/17397
2024-03-01 12:18:10 -08:00
certified-dodo
43e3244573 community[patch]: Fix MongoDBAtlasVectorSearch max_marginal_relevance_search (#17971)
Description:
* `self._embedding_key` is accessed after deletion, breaking
`max_marginal_relevance_search`
* Introduced in:
e135e5257c
* Updated but still persists in:
ce22e10c4b

Issue: https://github.com/langchain-ai/langchain/issues/17963

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-01 12:17:42 -08:00
Nikita Titov
9f2ab37162 community[patch]: don't try to parse json in case of errored response (#18317)
Related issue: #13896.

When Ollama is behind a proxy, proxy error responses cannot be viewed; you
aren't even able to check the response code.

For example, if your Ollama instance sits behind basic access authentication
and credentials are not passed, a `JSONDecodeError` will mask the true
response error.

<details>
<summary><b>Log now:</b></summary>

```
{
	"name": "JSONDecodeError",
	"message": "Expecting value: line 1 column 1 (char 0)",
	"stack": "---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/requests/models.py:971, in Response.json(self, **kwargs)
    970 try:
--> 971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
    333 \"\"\"Return the Python representation of ``s`` (a ``str`` instance
    334 containing a JSON document).
    335 
    336 \"\"\"
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338 end = _w(s, end).end()

File /opt/miniforge3/envs/.gpt/lib/python3.10/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
    354 except StopIteration as err:
--> 355     raise JSONDecodeError(\"Expecting value\", s, err.value) from None
    356 return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

JSONDecodeError                           Traceback (most recent call last)
Cell In[3], line 1
----> 1 print(translate_func().invoke('text'))

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
   2051 try:
   2052     for i, step in enumerate(self.steps):
-> 2053         input = step.invoke(
   2054             input,
   2055             # mark each step as a child run
   2056             patch_config(
   2057                 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
   2058             ),
   2059         )
   2060 # finish the root run
   2061 except BaseException as e:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:165, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    154 def invoke(
    155     self,
    156     input: LanguageModelInput,
   (...)
    160     **kwargs: Any,
    161 ) -> BaseMessage:
    162     config = ensure_config(config)
    163     return cast(
    164         ChatGeneration,
--> 165         self.generate_prompt(
    166             [self._convert_input(input)],
    167             stop=stop,
    168             callbacks=config.get(\"callbacks\"),
    169             tags=config.get(\"tags\"),
    170             metadata=config.get(\"metadata\"),
    171             run_name=config.get(\"run_name\"),
    172             **kwargs,
    173         ).generations[0][0],
    174     ).message

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:543, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    535 def generate_prompt(
    536     self,
    537     prompts: List[PromptValue],
   (...)
    540     **kwargs: Any,
    541 ) -> LLMResult:
    542     prompt_messages = [p.to_messages() for p in prompts]
--> 543     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:407, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    405         if run_managers:
    406             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 407         raise e
    408 flattened_outputs = [
    409     LLMResult(generations=[res.generations], llm_output=res.llm_output)
    410     for res in results
    411 ]
    412 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:397, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    394 for i, m in enumerate(messages):
    395     try:
    396         results.append(
--> 397             self._generate_with_cache(
    398                 m,
    399                 stop=stop,
    400                 run_manager=run_managers[i] if run_managers else None,
    401                 **kwargs,
    402             )
    403         )
    404     except BaseException as e:
    405         if run_managers:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:576, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    572     raise ValueError(
    573         \"Asked to cache, but no cache found at `langchain.cache`.\"
    574     )
    575 if new_arg_supported:
--> 576     return self._generate(
    577         messages, stop=stop, run_manager=run_manager, **kwargs
    578     )
    579 else:
    580     return self._generate(messages, stop=stop, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:250, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
    226 def _generate(
    227     self,
    228     messages: List[BaseMessage],
   (...)
    231     **kwargs: Any,
    232 ) -> ChatResult:
    233     \"\"\"Call out to Ollama's generate endpoint.
    234 
    235     Args:
   (...)
    247             ])
    248     \"\"\"
--> 250     final_chunk = self._chat_stream_with_aggregation(
    251         messages,
    252         stop=stop,
    253         run_manager=run_manager,
    254         verbose=self.verbose,
    255         **kwargs,
    256     )
    257     chat_generation = ChatGeneration(
    258         message=AIMessage(content=final_chunk.text),
    259         generation_info=final_chunk.generation_info,
    260     )
    261     return ChatResult(generations=[chat_generation])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:183, in ChatOllama._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
    174 def _chat_stream_with_aggregation(
    175     self,
    176     messages: List[BaseMessage],
   (...)
    180     **kwargs: Any,
    181 ) -> ChatGenerationChunk:
    182     final_chunk: Optional[ChatGenerationChunk] = None
--> 183     for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
    184         if stream_resp:
    185             chunk = _chat_stream_response_to_chat_generation_chunk(stream_resp)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:156, in ChatOllama._create_chat_stream(self, messages, stop, **kwargs)
    147 def _create_chat_stream(
    148     self,
    149     messages: List[BaseMessage],
    150     stop: Optional[List[str]] = None,
    151     **kwargs: Any,
    152 ) -> Iterator[str]:
    153     payload = {
    154         \"messages\": self._convert_messages_to_ollama_messages(messages),
    155     }
--> 156     yield from self._create_stream(
    157         payload=payload, stop=stop, api_url=f\"{self.base_url}/api/chat/\", **kwargs
    158     )

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/llms/ollama.py:234, in _OllamaCommon._create_stream(self, api_url, payload, stop, **kwargs)
    228         raise OllamaEndpointNotFoundError(
    229             \"Ollama call failed with status code 404. \"
    230             \"Maybe your model is not found \"
    231             f\"and you should pull the model with `ollama pull {self.model}`.\"
    232         )
    233     else:
--> 234         optional_detail = response.json().get(\"error\")
    235         raise ValueError(
    236             f\"Ollama call failed with status code {response.status_code}.\"
    237             f\" Details: {optional_detail}\"
    238         )
    239 return response.iter_lines(decode_unicode=True)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/requests/models.py:975, in Response.json(self, **kwargs)
    971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975     raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)

JSONDecodeError: Expecting value: line 1 column 1 (char 0)"
}
```

</details>


<details>

<summary><b>Log after a fix:</b></summary>

```
{
	"name": "ValueError",
	"message": "Ollama call failed with status code 401. Details: <html>\r
<head><title>401 Authorization Required</title></head>\r
<body>\r
<center><h1>401 Authorization Required</h1></center>\r
<hr><center>nginx/1.18.0 (Ubuntu)</center>\r
</body>\r
</html>\r
",
	"stack": "---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[2], line 1
----> 1 print(translate_func().invoke('text'))

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
   2051 try:
   2052     for i, step in enumerate(self.steps):
-> 2053         input = step.invoke(
   2054             input,
   2055             # mark each step as a child run
   2056             patch_config(
   2057                 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
   2058             ),
   2059         )
   2060 # finish the root run
   2061 except BaseException as e:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:165, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    154 def invoke(
    155     self,
    156     input: LanguageModelInput,
   (...)
    160     **kwargs: Any,
    161 ) -> BaseMessage:
    162     config = ensure_config(config)
    163     return cast(
    164         ChatGeneration,
--> 165         self.generate_prompt(
    166             [self._convert_input(input)],
    167             stop=stop,
    168             callbacks=config.get(\"callbacks\"),
    169             tags=config.get(\"tags\"),
    170             metadata=config.get(\"metadata\"),
    171             run_name=config.get(\"run_name\"),
    172             **kwargs,
    173         ).generations[0][0],
    174     ).message

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:543, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    535 def generate_prompt(
    536     self,
    537     prompts: List[PromptValue],
   (...)
    540     **kwargs: Any,
    541 ) -> LLMResult:
    542     prompt_messages = [p.to_messages() for p in prompts]
--> 543     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:407, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    405         if run_managers:
    406             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 407         raise e
    408 flattened_outputs = [
    409     LLMResult(generations=[res.generations], llm_output=res.llm_output)
    410     for res in results
    411 ]
    412 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:397, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
    394 for i, m in enumerate(messages):
    395     try:
    396         results.append(
--> 397             self._generate_with_cache(
    398                 m,
    399                 stop=stop,
    400                 run_manager=run_managers[i] if run_managers else None,
    401                 **kwargs,
    402             )
    403         )
    404     except BaseException as e:
    405         if run_managers:

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:576, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    572     raise ValueError(
    573         \"Asked to cache, but no cache found at `langchain.cache`.\"
    574     )
    575 if new_arg_supported:
--> 576     return self._generate(
    577         messages, stop=stop, run_manager=run_manager, **kwargs
    578     )
    579 else:
    580     return self._generate(messages, stop=stop, **kwargs)

File /opt/miniforge3/envs/.gpt/lib/python3.10/site-packages/langchain_community/chat_models/ollama.py:250, in ChatOllama._generate(self, messages, stop, run_manager, **kwargs)
    226 def _generate(
    227     self,
    228     messages: List[BaseMessage],
   (...)
    231     **kwargs: Any,
    232 ) -> ChatResult:
    233     \"\"\"Call out to Ollama's generate endpoint.
    234 
    235     Args:
   (...)
    247             ])
    248     \"\"\"
--> 250     final_chunk = self._chat_stream_with_aggregation(
    251         messages,
    252         stop=stop,
    253         run_manager=run_manager,
    254         verbose=self.verbose,
    255         **kwargs,
    256     )
    257     chat_generation = ChatGeneration(
    258         message=AIMessage(content=final_chunk.text),
    259         generation_info=final_chunk.generation_info,
    260     )
    261     return ChatResult(generations=[chat_generation])

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:328, in ChatOllamaCustom._chat_stream_with_aggregation(self, messages, stop, run_manager, verbose, **kwargs)
    319 def _chat_stream_with_aggregation(
    320     self,
    321     messages: List[BaseMessage],
   (...)
    325     **kwargs: Any,
    326 ) -> ChatGenerationChunk:
    327     final_chunk: Optional[ChatGenerationChunk] = None
--> 328     for stream_resp in self._create_chat_stream(messages, stop, **kwargs):
    329         if stream_resp:
    330             chunk = _chat_stream_response_to_chat_generation_chunk(stream_resp)

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:301, in ChatOllamaCustom._create_chat_stream(self, messages, stop, **kwargs)
    292 def _create_chat_stream(
    293     self,
    294     messages: List[BaseMessage],
    295     stop: Optional[List[str]] = None,
    296     **kwargs: Any,
    297 ) -> Iterator[str]:
    298     payload = {
    299         \"messages\": self._convert_messages_to_ollama_messages(messages),
    300     }
--> 301     yield from self._create_stream(
    302         payload=payload, stop=stop, api_url=f\"{self.base_url}/api/chat\", **kwargs
    303     )

File /storage/gpt-project/Repos/repo_nikita/gpt_lib/langchain/ollama.py:134, in _OllamaCommonCustom._create_stream(self, api_url, payload, stop, **kwargs)
    132     else:
    133         optional_detail = response.text
--> 134         raise ValueError(
    135             f\"Ollama call failed with status code {response.status_code}.\"
    136             f\" Details: {optional_detail}\"
    137         )
    138 return response.iter_lines(decode_unicode=True)

ValueError: Ollama call failed with status code 401. Details: <html>\r
<head><title>401 Authorization Required</title></head>\r
<body>\r
<center><h1>401 Authorization Required</h1></center>\r
<hr><center>nginx/1.18.0 (Ubuntu)</center>\r
</body>\r
</html>\r
"
}
```

</details>

The same is true for timeout errors, or when you simply mistype the
`base_url` arg and get a response from some other service, for instance.

Real Ollama errors are still clearly readable:

```
ValueError: Ollama call failed with status code 400. Details: {"error":"invalid options: unknown_option"}
```
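The gist of the fix is a small fallback in `_create_stream` (sketch;
`response` is the requests.Response already in scope there):

```python
import requests

try:
    # Works when Ollama itself answers with a JSON error body.
    optional_detail = response.json().get("error")
except requests.JSONDecodeError:
    # Proxy/auth errors (HTML, plain text, ...) no longer mask the status code.
    optional_detail = response.text
raise ValueError(
    f"Ollama call failed with status code {response.status_code}."
    f" Details: {optional_detail}"
)
```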

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-01 12:17:29 -08:00
Yudhajit Sinha
e2b901c35b community[patch]: chat message history mypy fix (#18250)
Description: Fixed `type: ignore`s for mypy in
chat_message_histories (streamlit).
Addresses #17048

Planning to add more based on reviews
2024-03-01 12:17:18 -08:00
Hemslo Wang
58a2abf089 community[patch]: fix RecursiveUrlLoader metadata_extractor return type (#18193)
**Description:** Fix `metadata_extractor` type for `RecursiveUrlLoader`,
the default `_metadata_extractor` returns `dict` instead of `str`.
**Issue:** N/A
**Dependencies:** N/A
**Twitter handle:** N/A

Signed-off-by: Hemslo Wang <hemslo.wang@gmail.com>
2024-03-01 12:08:20 -08:00
Maxime Perrin
98380cff9b community[patch]: removing "response_mode" parameter in llama_index retriever (#18180)
- **Description:** Changing this line
```python
response = index.query(query, response_mode="no_text", **self.query_kwargs)
```
to
```python
response = index.query(query, **self.query_kwargs)
```
since the llama_index query does not support `response_mode` anymore:
`TypeError: BaseQueryEngine.query() got an unexpected keyword argument 'response_mode'`
  - **Twitter handle:** @maximeperrin_

---------

Co-authored-by: Maxime Perrin <mperrin@doing.fr>
2024-03-01 12:05:09 -08:00
Christophe Bornet
177f51c7bd community: Use default load() implementation in doc loaders (#18385)
Following https://github.com/langchain-ai/langchain/pull/18289
2024-03-01 14:46:52 -05:00
mwmajewsk
e192f6b6eb community[patch]: fix, better error message in deeplake vectoriser (#18397)
If the document loader receives a Pathlib path instead of a str, it reads
the file correctly, but the problem begins when the document is added to
Deeplake, because the path is never cast to str in the metadata.

```python
deeplake = True
fname = Path('./lorem_ipsum.txt')
loader = TextLoader(fname, encoding="utf-8")
docs = loader.load_and_split()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks= text_splitter.split_documents(docs)
if deeplake:
    db = DeepLake(dataset_path=ds_path, embedding=embeddings, token=activeloop_token)
    db.add_documents(chunks)
else:
    db = Chroma.from_documents(docs, embeddings)
```

Using this snippet of code, the error message for deeplake looks like
this:

```
[part of error message omitted]

Traceback (most recent call last):
  File "/home/mwm/repositories/sources/fixing_langchain/main.py", line 53, in <module>
    db.add_documents(chunks)
  File "/home/mwm/repositories/sources/langchain/libs/core/langchain_core/vectorstores.py", line 139, in add_documents
    return self.add_texts(texts, metadatas, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/deeplake.py", line 258, in add_texts
    return self.vectorstore.add(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/deeplake_vectorstore.py", line 226, in add
    return self.dataset_handler.add(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/dataset_handlers/client_side_dataset_handler.py", line 139, in add
    dataset_utils.extend_or_ingest_dataset(
  File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/vector_search/dataset/dataset.py", line 544, in extend_or_ingest_dataset
    extend(
  File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/vectorstore/vector_search/dataset/dataset.py", line 505, in extend
    dataset.extend(batched_processed_tensors, progressbar=False)
  File "/home/mwm/anaconda3/envs/langchain/lib/python3.11/site-packages/deeplake/core/dataset/dataset.py", line 3247, in extend
    raise SampleExtendError(str(e)) from e.__cause__
deeplake.util.exceptions.SampleExtendError: Failed to append a sample to the tensor 'metadata'. See more details in the traceback. If you wish to skip the samples that cause errors, please specify `ignore_errors=True`.
```

This does not explain the error well enough.
The same error for chroma looks like this:

```
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/mwm/repositories/sources/fixing_langchain/main.py", line 56, in <module>
    db = Chroma.from_documents(docs, embeddings)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/chroma.py", line 778, in from_documents
    return cls.from_texts(
           ^^^^^^^^^^^^^^^
  File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/chroma.py", line 736, in from_texts
    chroma_collection.add_texts(
  File "/home/mwm/repositories/sources/langchain/libs/community/langchain_community/vectorstores/chroma.py", line 309, in add_texts
    raise ValueError(e.args[0] + "\n\n" + msg)
ValueError: Expected metadata value to be a str, int, float or bool, got lorem_ipsum.txt which is a <class 'pathlib.PosixPath'>

Try filtering complex metadata from the document using langchain_community.vectorstores.utils.filter_complex_metadata.
```

which is much more user friendly. So I added information about the possible
type mismatch to the error message, the same way it is covered in chroma:
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/chroma.py#L224
2024-03-01 11:21:21 -08:00
Daniel Chico
7d962278f6 community[patch]: type ignore fixes (#18395)
Related to #17048
2024-03-01 11:21:02 -08:00
Christophe Bornet
69be82c86d community[patch]: Implement lazy_load() for CSVLoader (#18391)
Covered by `test_csv_loader.py`
2024-03-01 11:17:08 -08:00
Guangdong Liu
760a16ff32 community[patch]: Fix ChatModel for sparkllm Bug. (#18375)
- **Description:** fix sparkllm parameter error
- **Issue:** close #18370
- **Dependencies:** change `IFLYTEK_SPARK_APP_URL` to
`IFLYTEK_SPARK_API_URL`
2024-03-01 10:49:30 -08:00
Yujie Qian
cbb65741a7 community[patch]: Voyage AI updates default model and batch size (#17655)
- **Description:** update the default model and batch size in
VoyageEmbeddings
    - **Issue:** N/A
    - **Dependencies:** N/A
    - **Twitter handle:** N/A

---------

Co-authored-by: fodizoltan <zoltan@conway.expert>
2024-03-01 10:22:24 -08:00
Shengsheng Huang
ae471a7dcb community[minor]: add BigDL-LLM integrations (#17953)
- **Description**:
[`bigdl-llm`](https://github.com/intel-analytics/BigDL) is a library for
running LLMs on Intel XPU (from laptop to GPU to cloud) using
INT4/FP4/INT8/FP8 with very low latency (for any PyTorch model). This PR
adds bigdl-llm integrations to langchain.
- **Issue**: NA
- **Dependencies**: `bigdl-llm` library
- **Contribution maintainer**: @shane-huang 
 
Examples added:
- docs/docs/integrations/llms/bigdl.ipynb
2024-03-01 10:04:53 -08:00
Ethan Yang
f61cb8d407 community[minor]: Add openvino backend support (#11591)
- **Description:** add OpenVINO backend support via Hugging Face Optimum
Intel
- **Dependencies:** `optimum[openvino]`
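A hedged usage sketch of the new backend switch:

```python
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline

ov_llm = HuggingFacePipeline.from_model_id(
    model_id="gpt2",
    task="text-generation",
    backend="openvino",  # requires: pip install "optimum[openvino]"
    pipeline_kwargs={"max_new_tokens": 64},
)
print(ov_llm.invoke("OpenVINO is"))
```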

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-03-01 10:04:24 -08:00
RadhikaBansal97
8bafd2df5e community[patch]: Change github endpoint in GithubLoader (#17622)
Description-
- Changed the GitHub endpoint, as the existing one was not working and was
giving a 404 not found error.
- The existing function was also failing if file_filter was not passed,
since the tree API returns all paths including directories; when
get_file_content iterated over these paths, it failed for directories
because the API returns the list of files inside the directory. Added a
condition to ignore a path if it is a directory.
- Fixes this issue:
https://github.com/langchain-ai/langchain/issues/17453

Co-authored-by: Radhika Bansal <Radhika.Bansal@veritas.com>
2024-03-01 09:36:31 -08:00
Anush
9d663f31fa community[patch]: FastEmbed to latest (#18040)
## Description

Updates the `langchain_community.embeddings.fastembed` provider as per
the recent updates to [`FastEmbed`](https://github.com/qdrant/fastembed)
library.
2024-02-29 21:15:51 -08:00
Eugene Yurtsev
51b661cfe8 community[patch]: BaseLoader load method should just delegate to lazy_load (#18289)
load() should just reference lazy_load()
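The default implementation is essentially this one-liner (sketch):

```python
from typing import List
from langchain_core.documents import Document

def load(self) -> List[Document]:
    """Materialize the lazy iterator into a list."""
    return list(self.lazy_load())
```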
2024-02-29 21:45:28 -05:00
Bagatur
5efb5c099f text-splitters[minor], langchain[minor], community[patch], templates, docs: langchain-text-splitters 0.0.1 (#18346) 2024-02-29 18:33:21 -08:00
Erick Friis
eefb49680f multiple[patch]: fix deprecation versions (#18349) 2024-02-29 16:58:33 -08:00
Jib
72bfc1d3db mongodb[minor]: MongoDB Partner Package -- Porting MongoDBAtlasVectorSearch (#17652)
This PR migrates the existing MongoDBAtlasVectorSearch abstraction from
the `langchain_community` section to the partners package section of the
codebase.
- [x] Run the partner package script as advised in the partner-packages
documentation.
- [x] Add Unit Tests
- [x] Migrate Integration Tests
- [x] Refactor `MongoDBAtlasVectorStore` (autogenerated) to
`MongoDBAtlasVectorSearch`
- [x] ~Remove~ deprecate the old `langchain_community` VectorStore
references.

## Additional Callouts
- Implemented the `delete` method
- Included any missing async function implementations
  - `amax_marginal_relevance_search_by_vector`
  - `adelete` 
- Added new Unit Tests that test for functionality of
`MongoDBVectorSearch` methods
- Removed [`del
res[self._embedding_key]`](e0c81e1cb0/libs/community/langchain_community/vectorstores/mongodb_atlas.py (L218))
in `_similarity_search_with_score` function as it would make the
`maximal_marginal_relevance` function fail otherwise. The `Document`
needs to store the embedding key in metadata to work.
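After the port, imports move from the community package to the partner package
(sketch; `embedding_model` stands in for any Embeddings instance):

```python
from pymongo import MongoClient
# old: from langchain_community.vectorstores import MongoDBAtlasVectorSearch (now deprecated)
from langchain_mongodb import MongoDBAtlasVectorSearch

client = MongoClient("<ATLAS_CONNECTION_STRING>")
store = MongoDBAtlasVectorSearch(
    collection=client["db_name"]["collection_name"],
    embedding=embedding_model,
    index_name="default",
)
```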

Docs: existing tests supplied in docs/docs do not change; docstrings were
updated for new functions like `delete`, and the example notebook already
lives in the `docs/docs/integrations` directory.

---------

Co-authored-by: Steven Silvester <steven.silvester@ieee.org>
Co-authored-by: Erick Friis <erick@langchain.dev>
2024-02-29 23:09:48 +00:00
Kai Kugler
df234fb171 community[patch]: Fixing embedchain document mapping (#18255)
- **Description:** The current embedchain implementation seems to handle
document metadata differently than the current langchain implementation
does, and a KeyError is thrown. I would love for someone else to test
this...

---------

Co-authored-by: KKUGLER <kai.kugler@mercedes-benz.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Co-authored-by: Deshraj Yadav <deshraj@gatech.edu>
2024-02-29 14:54:37 -08:00
Tomaz Bratanic
5999c4a240 Add support for parameters in neo4j retrieval query (#18310)
Sometimes you want to use various parameters in the retrieval query of
Neo4j Vector to personalize/customize results. Before, when there were
only predefined chains, it didn't really make sense. Now that it's all
about custom chains and LCEL, it is worth adding, since users can inject
any params they wish at query time. This isn't prone to SQL-injection-type
attacks, since we use parameters rather than string concatenation.
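A hedged sketch of what this enables (`embedding` is any Embeddings instance;
index and parameter names are illustrative):

```python
from langchain_community.vectorstores import Neo4jVector

retrieval_query = """
RETURN node.text AS text, score,
       {purchased: node.id IN $purchased} AS metadata
"""
store = Neo4jVector.from_existing_index(
    embedding, index_name="products", retrieval_query=retrieval_query
)
# Values for $purchased are injected safely as query parameters at call time.
docs = store.similarity_search("winter jacket", params={"purchased": ["p10", "p42"]})
```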
2024-02-29 13:00:54 -08:00
Virat Singh
cd926ac3dd community: Add PolygonFinancials Tool (#18324)
**Description:**
In this PR, I am adding a `PolygonFinancials` tool, which can be used to
get financials data for a given ticker. The financials data is the
fundamental data that is found in income statements, balance sheets, and
cash flow statements of public US companies.

**Twitter**: 
[@virattt](https://twitter.com/virattt)
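A hedged usage sketch (requires a POLYGON_API_KEY in the environment):

```python
from langchain_community.tools.polygon.financials import PolygonFinancials
from langchain_community.utilities.polygon import PolygonAPIWrapper

tool = PolygonFinancials(api_wrapper=PolygonAPIWrapper())
print(tool.run("AAPL"))  # income statement, balance sheet, and cash flow data
```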
2024-02-29 10:56:05 -08:00
Christophe Bornet
8a81fcd5d3 community: Fix deprecation version of AstraDB VectorStore (#17991) 2024-02-28 17:15:09 -05:00
mackong
2c42f3a955 ollama[patch]: delete suffix slash to avoid redirect (#18260)
- **Description:** see
[ollama](https://github.com/ollama/ollama/blob/main/server/routes.go#L949)'s
route definitions
- **Issue:** N/A
- **Dependencies:** N/A
2024-02-28 16:44:48 -05:00
William De Vena
6b58943917 community[patch]: Invoke callback prior to yielding token (#18288)
Description: Invoke the on_llm_new_token callback prior to yielding the token
in the _stream and _astream methods.
Issue: https://github.com/langchain-ai/langchain/issues/16913
Dependencies: None
Twitter handle: None
2024-02-28 21:40:53 +00:00
Eugene Yurtsev
cd52433ba0 community[minor]: Add SQLDatabaseLoader document loader (#18281)
- **Description:** A generic document loader adapter for SQLAlchemy on
top of LangChain's `SQLDatabaseLoader`.
  - **Needed by:** https://github.com/crate-workbench/langchain/pull/1
  - **Depends on:** GH-16655
  - **Addressed to:** @baskaryan, @cbornet, @eyurtsev

Hi from CrateDB again,

in the same spirit as GH-16243 and GH-16244, this patch breaks out
another commit from https://github.com/crate-workbench/langchain/pull/1,
in order to reduce the size of this patch before submitting it, and to
separate concerns.

To accompany the SQLAlchemy adapter implementation, the patch includes
integration tests for both SQLite and PostgreSQL. Let me know if
corresponding utility resources should be added at different spots.

With kind regards,
Andreas.
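A hedged usage sketch against a local SQLite database (parameter order is
best-effort):

```python
from langchain_community.document_loaders.sql_database import SQLDatabaseLoader
from langchain_community.utilities.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///example.db")
loader = SQLDatabaseLoader(query="SELECT id, title FROM posts;", db=db)
docs = loader.load()  # one Document per result row
```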


### Software Tests

```console
docker compose --file libs/community/tests/integration_tests/document_loaders/docker-compose/postgresql.yml up
```

```console
cd libs/community
pip install psycopg2-binary
pytest -vvv tests/integration_tests -k sqldatabase
```

```
14 passed
```



![image](https://github.com/langchain-ai/langchain/assets/453543/42be233c-eb37-4c76-a830-474276e01436)

---------

Co-authored-by: Andreas Motl <andreas.motl@crate.io>
2024-02-28 21:02:28 +00:00
David Ruan
af35e2525a community[minor]: add hugging_face_model document loader (#17323)
- **Description:** add hugging_face_model document loader
- **Issue:** NA
- **Dependencies:** NA

---------

Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
2024-02-28 20:05:35 +00:00
Sanjaypranav V M
b9a495e56e community[patch]: added latin-1 decoder to gmail search tool (#18116)
Some mails from Flipkart and Amazon are encoded in another plain-text
format, so to handle the UnicodeDecodeError an exception handler and a
latin-1 decoder were added.
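A sketch of the fallback pattern (`raw_bytes` stands in for the fetched
message payload):

```python
try:
    text = raw_bytes.decode("utf-8")
except UnicodeDecodeError:
    # latin-1 maps every possible byte, so this decode cannot fail.
    text = raw_bytes.decode("latin-1")
```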

2024-02-28 19:28:29 +00:00
Ashley Xu
e3211c2b3d community[patch]: BigQueryVectorSearch JSON type unsupported for metadatas (#18234) 2024-02-28 08:19:53 -08:00
Ayo Ayibiowu
ac1d7d9de8 community[feat]: Adds LLMLingua as a document compressor (#17711)
**Description**: This PR adds support for using the [LLMLingua
project](https://github.com/microsoft/LLMLingua), especially LongLLMLingua
(Enhancing Large Language Model Inference via Prompt Compression), as a
document compressor / transformer.

The LLMLingua project is an interesting project that can greatly improve
RAG system by compressing prompts and contexts while keeping their
semantic relevance.

**Issue**: https://github.com/microsoft/LLMLingua/issues/31
**Dependencies**: [llmlingua](https://pypi.org/project/llmlingua/)
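A hedged usage sketch (the small gpt2 model and `retriever` are illustrative;
the import path is best-effort):

```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain_community.document_compressors import LLMLinguaCompressor

compressor = LLMLinguaCompressor(model_name="openai-community/gpt2", device_map="cpu")
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)
```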


---------

Co-authored-by: Ayodeji Ayibiowu <ayodeji.ayibiowu@getinge.com>
Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-27 19:23:56 -08:00
Jaskirat Singh
ce682f5a09 community: vectorstores.kdbai - Added support for when no docs are present (#18103)
- **Description:** By default it expects a list, but that's not the case in
corner scenarios when no document has been ingested (use case: bootstrapping
an application). Hence added a check: if the instance is a pandas DataFrame
instead of a list, it returns immediately.

- **Issue:** NA
- **Dependencies:** NA
- **Twitter handle:**  jaskiratsingh1
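A sketch of the guard described above (`matches` stands in for the raw kdbai
query result):

```python
import pandas as pd

# With nothing ingested yet, kdbai hands back an empty DataFrame rather
# than the expected list of match batches -- return early in that case.
if isinstance(matches, pd.DataFrame):
    return []
```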

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
2024-02-26 12:47:06 -08:00
am-kinetica
9b8f6455b1 Langchain vectorstore integration with Kinetica (#18102)
- **Description:** New vectorstore integration with the Kinetica database
- **Dependencies:** the Kinetica Python API: `pip install gpudb==7.2.0.1`
- **Tag maintainer:** @baskaryan, @hwchase17

---------

Co-authored-by: Chad Juliano <cjuliano@kinetica.com>
2024-02-26 12:46:48 -08:00
GoodBai
3589a135ef community: make SET allow_experimental_[engine]_index configurable in vectorstores.clickhouse (#18107)
## Description & Issue
While following the official doc to use ClickHouse as a vectorstore, I
found that only the default `annoy` index is properly supported. But I want
to try the `usearch` engine, since `annoy` is not properly supported on ARM
platforms. Here are the settings I prefer:

``` python
settings = ClickhouseSettings(
    table="wiki_Ethereum",
    index_type="usearch",  # annoy by default
    index_param=[],
)
```
The above settings do not work, because the command `SET
allow_experimental_annoy_index=1` is hard-coded.
This PR makes sure the experimental-feature flag follows the `index_type`,
which is also consistent with ClickHouse's naming conventions.
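A sketch of the behavior change (statement shape is best-effort; `settings` is
the ClickhouseSettings instance from the snippet above):

```python
# Previously hard-coded: "SET allow_experimental_annoy_index=1"
set_stmt = f"SET allow_experimental_{settings.index_type}_index=1"
# e.g. index_type="usearch" -> "SET allow_experimental_usearch_index=1"
```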
2024-02-26 12:39:17 -08:00