Description: The multimodal (Tongyi) response format `"message": {"role":
"assistant", "content": [{"text": "图像"}]}` is not compatible with
LangChain.
Dependencies: No
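For illustration, a minimal sketch of flattening such a content list into
the plain string LangChain expects (the `normalize_content` helper is
hypothetical, not part of this PR):

```python
def normalize_content(message: dict) -> str:
    """Flatten a Tongyi-style content list into a plain string.

    Tongyi may return "content" as a list of parts, e.g.
    [{"text": "图像"}], instead of a single string.
    """
    content = message.get("content", "")
    if isinstance(content, list):
        # Join the "text" field of each part; skip non-text parts.
        return "".join(part.get("text", "") for part in content)
    return content
```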
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
**Description**:
This PR modifies the doc_intelligence.py parser in the community package
to include all metadata returned by the Azure Doc Intelligence API in
the Document object. Previously, only the parsed content (markdown) was
retained, while other important metadata such as bounding boxes (bboxes)
for images and tables was discarded. These image bboxes are crucial for
supporting use cases like multi-modal RAG workflows when using Azure Doc
Intelligence.
The change ensures that all information returned by the Azure Doc
Intelligence API is preserved by setting the metadata attribute of the
Document object to the entire result returned by the API, rather than an
empty dictionary. This extends the parser's utility for complex use
cases without breaking existing functionality.
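A simplified sketch of the idea (assuming the API result exposes
`as_dict()`; the actual parser code may differ):

```python
from langchain_core.documents import Document

# `result` is the object returned by the Azure Doc Intelligence API.
# Before: metadata was discarded.
#   doc = Document(page_content=result.content, metadata={})
# After: the full result, including image/table bounding boxes, is kept.
doc = Document(page_content=result.content, metadata=result.as_dict())
```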
**Issue**:
This change does not address a specific issue number, but it resolves a
critical limitation in supporting multimodal workflows when using the
LangChain wrapper for the Azure API.
**Dependencies**:
No additional dependencies are required for this change.
---------
Co-authored-by: jmohren <johannes.mohren@aol.de>
**Description:**
- **Memgraph** no longer relies on `Neo4jGraphStore` but **implements
`GraphStore`**, just like other graph databases.
- **Memgraph** no longer relies on `GraphQAChain`, but implements
`MemgraphQAChain`, just like other graph databases.
- The schema refresh procedure has been updated to try `SHOW SCHEMA
INFO` first. The fallback uses a combination of schema procedures and
Cypher queries → **the LangChain integration no longer relies on the
MAGE library**.
- The **schema structure** has been reformatted: regardless of which
procedure is used to retrieve the schema, the resulting structure is the
same.
- The `add_graph_documents()` method has been implemented. It transforms
`GraphDocument`s into Cypher queries and creates a graph in Memgraph. It
supports `baseEntityLabel` to improve import speed (the
`baseEntityLabel` label carries an index on the `id` property) and can
include sources by creating a `MENTIONS` relationship to the source
document (see the usage sketch below).
- Jupyter Notebook for Memgraph has been updated.
- **Issue:** /
- **Dependencies:** /
- **Twitter handle:** supe_katarina (DX Engineer @ Memgraph)
Closes #25606
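A rough usage sketch of the new method (constructor arguments are
illustrative):

```python
from langchain_community.graphs import MemgraphGraph

graph = MemgraphGraph(url="bolt://localhost:7687", username="", password="")

# `graph_documents` come from e.g. LLMGraphTransformer.
# baseEntityLabel adds a base label with an index on the `id` property;
# include_source creates a MENTIONS relationship to the source document.
graph.add_graph_documents(
    graph_documents,
    baseEntityLabel=True,
    include_source=True,
)
```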
**Description**
This PR updates the `as_retriever` method in `AzureSearch` to ensure
that the `search_type` parameter defaults to 'similarity' when not
explicitly provided. Previously, when `search_type` was omitted, no
default was applied at this level, so the value was inherited from
`AzureSearchVectorStoreRetriever`, which defaults to 'hybrid'. This
change aligns the default behavior with the expected usage.
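Illustrative sketch of the resulting behavior (`vector_store` is an
`AzureSearch` instance):

```python
# Omitting search_type now yields a similarity retriever.
retriever = vector_store.as_retriever()

# Hybrid search remains available by explicit opt-in.
retriever = vector_store.as_retriever(search_type="hybrid")
```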
**Issue**
No specific issue was found related to this change.
**Dependencies**
No new dependencies are introduced with this change.
---------
Co-authored-by: prrao87 <prrao87@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
- [x] **PR title**: "community: Kuzu - Add graph documents via
LLMGraphTransformer"
- This PR adds a new method `add_graph_documents` that takes the
`GraphDocument`s extracted by `LLMGraphTransformer` and stores them in a
Kùzu graph backend (see the usage sketch after this checklist).
- This allows users to transform unstructured text into a graph that
uses Kùzu as the graph store.
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
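A rough usage sketch, under the assumption that the method mirrors the
other graph stores (exact keyword arguments may differ):

```python
import kuzu
from langchain_community.graphs import KuzuGraph
from langchain_experimental.graph_transformers import LLMGraphTransformer

db = kuzu.Database("test_db")
graph = KuzuGraph(db)

# `llm` is any chat model; `documents` are LangChain Documents.
transformer = LLMGraphTransformer(llm=llm)
graph_documents = transformer.convert_to_graph_documents(documents)

graph.add_graph_documents(graph_documents, include_source=True)
```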
---------
Co-authored-by: pookam90 <pookam@microsoft.com>
Co-authored-by: Pooja Kamath <60406274+Pookam90@users.noreply.github.com>
Co-authored-by: hsm207 <hsm207@users.noreply.github.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
- **Description:** I realized the invocation parameters were not being
passed into `_generate`, so I added them in, but then noticed the
parameters contained some old fields designed for an older OpenAI
client, which I removed. Parameters work fine now.
- **Issue:** Fixes #28229
- **Dependencies:** No new dependencies.
- **Twitter handle:** @arch_plane
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
Co-authored-by: Erick Friis <erick@langchain.dev>
Set open_browser to false to resolve the "could not locate runnable
browser" error when the default browser is None.
Co-authored-by: Erick Friis <erick@langchain.dev>
# What problem are we fixing?
Currently, documents loaded using `O365BaseLoader` take their source from
`file.web_url` (where `file` is `<class 'O365.drive.File'>`). This works
well for `.pdf` documents. Unfortunately, Office documents (`.xlsx`,
`.docx`, ...) report their `web_url` in the following format:
`https://sharepoint_address/sites/path/to/library/root/Doc.aspx?sourcedoc=%XXXXXXXX-1111-1111-XXXX-XXXXXXXXXX%7D&file=filename.xlsx&action=default&mobileredirect=true`
This obfuscates the path to the file. This PR uses the parent folder's
path and the file name to reconstruct the actual location of the file.
Knowing the file's location can be crucial for some RAG applications
(the path to the file can carry information we don't want to lose).
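A simplified sketch of the reconstruction (illustrative, not the exact
diff; `parent_folder` stands for the O365 folder containing `file`):

```python
# Instead of file.web_url (an opaque Doc.aspx link for Office files),
# rebuild a readable source from the parent folder's URL and the file name.
source = f"{parent_folder.web_url}/{file.name}"
```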
@vbarda Could you please look at this one? I'm @-mentioning you since
we've already closed some PRs together :-)
Co-authored-by: Erick Friis <erick@langchain.dev>
## **Description:**
Enable `ConfluenceLoader` to include labels via the `include_labels`
option (`False` by default for backward compatibility). The labels are
stored in the `Document`'s `metadata`, e.g. `{"labels": ["l1", "l2"]}`.
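Illustrative usage (arguments other than `include_labels` are
placeholders):

```python
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",
    space_key="SPACE",
    include_labels=True,  # False by default for backward compatibility
)
docs = loader.load()
# docs[0].metadata -> {..., "labels": ["l1", "l2"]}
```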
## Notes
The Confluence API supports fetching labels by passing `metadata.labels`
in the `expand` query parameter.
All of the following functions support `expand` in the same way:
- confluence.get_page_by_id
- confluence.get_all_pages_by_label
- confluence.get_all_pages_from_space
- cql (internally using
[/api/content/search](https://developer.atlassian.com/cloud/confluence/rest/v1/api-group-content/#api-wiki-rest-api-content-search-get))
## **Issue:**
No issue related to this PR.
## **Dependencies:**
No changes.
## **Twitter handle:**
[@gymnstcs](https://x.com/gymnstcs)
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
- **Description:** Support for new Pinecone class PineconeVectorStore in
PebbloRetrievalQA.
- **Issue:** NA
- **Dependencies:** NA
- **Tests:** -
Co-authored-by: Erick Friis <erick@langchain.dev>
- **Description:** update MODEL_COST_PER_1K_TOKENS for new gpt-4o-11-20.
- **Issue:** with the latest gpt-4o-11-20, the OpenAI callback returns
token_cost=0.0
- **Dependencies:** None (just a simple dict fix)
- **Twitter handle:** I Don't Use Twitter.
- (However..., I have a YouTube channel. Could you upload this there, by
any chance?
https://www.youtube.com/@%EA%B2%9C%EC%B0%BD%EB%B6%80%EA%B3%A0%EB%AC%B8AI%EC%9E%90%EB%AC%B8%EC%84%BC%EC%84%B8)
- **Description:** An invalid `tool_choice` was passed to
`ChatLiteLLM`'s `bind_tools` because its parent class's default value
was passed through by `with_structured_output`.
- **Issue:** #28176
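A minimal sketch of the affected path (schema and model name are
illustrative):

```python
from pydantic import BaseModel

from langchain_community.chat_models import ChatLiteLLM


class Answer(BaseModel):
    text: str


llm = ChatLiteLLM(model="gpt-3.5-turbo")
# Before the fix, with_structured_output could forward a tool_choice
# default from the parent class that LiteLLM rejects; it now passes a
# valid value to bind_tools.
structured_llm = llm.with_structured_output(Answer)
```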
**Description:** Added support for creating indexes in the SAP HANA
Vector engine.
**Changes**:
1. Introduced a new function `create_hnsw_index` in `hanavector.py` that
enables the creation of indexes for SAP HANA Vector.
2. Added integration tests for the index creation function to ensure
functionality.
3. Updated the documentation to reflect the new index creation feature,
including examples and output from the notebook.
4. Fixed an operator issue in the `_process_filter_object` function and
changed the array argument to a placeholder in the similarity-search SQL
statement.
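Illustrative usage of the new function (parameter names follow common
HNSW conventions and may differ from the final signature):

```python
from langchain_community.vectorstores import HanaDB

# `connection` is an open dbapi connection; `embeddings` an Embeddings impl.
db = HanaDB(connection=connection, embedding=embeddings, table_name="VECTORS")

# Create an HNSW index on the vector column to speed up similarity search.
db.create_hnsw_index(m=64, ef_construction=128, ef_search=200)
```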
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
**Description**:
> Without an API key, any site (IP address) posting more than 3 requests
per second to the E-utilities will receive an error message. By
including an API key, a site can post up to 10 requests per second by
default.
— quoted from *A General Introduction to the E-utilities*, NCBI:
https://www.ncbi.nlm.nih.gov/books/NBK25497/
I have simply added an `api_key` parameter to the `PubMedAPIWrapper`
that can be used to raise the rate limit from 3 to 10 requests per
second.
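Illustrative usage:

```python
from langchain_community.utilities.pubmed import PubMedAPIWrapper

# With an NCBI API key, the E-utilities allow up to 10 requests/second
# instead of 3.
pubmed = PubMedAPIWrapper(api_key="YOUR_NCBI_API_KEY")
results = pubmed.run("CRISPR gene editing")
```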
**Twitter handle**: @KORmaori
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
- **Description:** Updated the structured-query kwargs from `filters` to
`filter` due to the deprecation of `filters` for Databricks Vector
Search (see the sketch below). Also changed the error messages, as the
allowed operators and comparators differ, which can cause issues with
functions such as `get_query_constructor_prompt()`.
- **Issue:** Fixes the KeyError for `filters` due to its deprecation in
favor of `filter`:
LangChainDeprecationWarning: DatabricksVectorSearch received a key
`filters` in search_kwargs. `filters` was deprecated since
langchain-community 0.2.11 and will be removed in 0.3. Please use
`filter` instead.
- **Dependencies:** N/A
- **Twitter handle:** N/A
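Sketch of the before/after (`vector_store` is a
`DatabricksVectorSearch` instance):

```python
# Deprecated key, previously raised a KeyError / deprecation warning:
retriever = vector_store.as_retriever(search_kwargs={"filters": {"year": 2024}})

# Current key:
retriever = vector_store.as_retriever(search_kwargs={"filter": {"year": 2024}})
```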
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
- [x] **PR title**: "community: add Needle retriever and document loader
integration"
- Where "package" is whichever of langchain, community, core, etc. is
being modified. Use "docs: ..." for purely docs changes, "infra: ..."
for CI changes.
- Example: "community: add foobar LLM"
- [x] **PR message**: ***Delete this entire checklist*** and replace
with
- **Description:** This PR adds a new integration for Needle, which
includes:
- **NeedleRetriever**: A retriever for fetching documents from Needle
collections.
- **NeedleLoader**: A document loader for managing and loading documents
into Needle collections.
- Example notebooks demonstrating usage have been added in:
- `docs/docs/integrations/retrievers/needle.ipynb`
- `docs/docs/integrations/document_loaders/needle.ipynb`.
- **Dependencies:** The `needle-python` package is required as an
external dependency for accessing Needle's API. It has been added to the
extended testing dependencies list.
- **Twitter handle:** Feel free to mention me if this PR gets announced:
[needlexai](https://x.com/NeedlexAI).
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. Unit tests have been added for both `NeedleRetriever` and
`NeedleLoader` in `libs/community/tests/unit_tests`. These tests mock
API calls to avoid relying on network access.
2. Example notebooks have been added to `docs/docs/integrations/`,
showcasing both retriever and loader functionality.
- [x] **Lint and test**: Run `make format`, `make lint`, and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
- `make format`: Passed
- `make lint`: Passed
- `make test`: Passed (requires `needle-python` to be installed locally;
this package is not added to LangChain dependencies).
Additional guidelines:
- [x] Optional dependencies are imported only within functions.
- [x] No dependencies have been added to pyproject.toml files except for
those required for unit tests.
- [x] The PR does not touch more than one package.
- [x] Changes are fully backwards compatible.
- [x] Community additions are not re-imported into LangChain core.
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
This PR updates the Pinecone client to `5.4.0`, as well as its
dependencies (`pinecone-plugin-inference` and
`pinecone-plugin-interface`).
Note: `pinecone-client` is now simply called `pinecone`.
**Question for reviewer(s):** should this PR also update the `pinecone`
dep in [the root dir's `poetry.lock`
file](https://github.com/langchain-ai/langchain/blob/master/poetry.lock#L6729)?
I was unsure; I don't believe so, because it seems pinned to a lower
version, likely due to third-party deps (e.g. Unstructured).
--
TW: @audrey_sage_
---
Follows on from #27991; updates the langchain-community package to
support numpy 2.
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
**Description:** When an OpenAI assistant is invoked, it creates a run
by default, allowing users to set only a few request fields. The
truncation strategy is set to auto, which includes previous messages in
the thread along with the current question until the context length is
reached. This causes token usage to grow incrementally:
consumed_tokens = previous_consumed_tokens + current_consumed_tokens.
This PR adds support for user-defined truncation strategies, giving
better control over token consumption.
**Issue:** High token consumption.
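A sketch of what a user-defined strategy could look like (the
`truncation_strategy` object follows the OpenAI Runs API; how it is
passed through the wrapper here is illustrative):

```python
from langchain_community.agents.openai_assistant import OpenAIAssistantV2Runnable

assistant = OpenAIAssistantV2Runnable(assistant_id="asst_...", as_agent=True)

# Keep only the last 3 thread messages in context instead of "auto",
# capping token growth across turns.
response = assistant.invoke(
    {
        "content": "Summarize our discussion so far.",
        "truncation_strategy": {"type": "last_messages", "last_messages": 3},
    }
)
```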
- **Description:** `add_texts` was using `get_setting` for the marqo
client according to the 1.5.x API version. This PR updates `add_texts`
to account for the updated response payload in 2.x and later, while
maintaining backward compatibility. I have also verified this was the
only place where the marqo client did not account for the updated API
version.
- **Issue:** #28323
---------
Co-authored-by: ccurme <chester.curme@gmail.com>
Adds deprecation notices for Neo4j components moving to the
`langchain_neo4j` partner package.
- Adds deprecation warnings to all Neo4j-related classes and functions
that have been migrated to the new `langchain_neo4j` partner package
- Updates documentation to reference the new `langchain_neo4j` package
instead of `langchain_community`
**Description:**
Currently, the docstring for `LanceDB.__init__()` provides the default
value for `mode`, but not the list of valid values. This PR adds that
list to the docstring.
**Issue:**
N/A
**Dependencies:**
N/A
**Twitter handle:**
`@metadaddy`
Fixed a compatibility issue in the `load_messages_from_context()`
function for the Kinetica chat model integration. The issue was caused
by stricter validation introduced in Pydantic 2.
In collaboration with @rlouf I built an
[outlines](https://dottxt-ai.github.io/outlines/latest/) integration for
langchain!
I think this is really useful for doing any type of structured output
locally.
[Dottxt](https://dottxt.co) has put a lot of work into optimizing this
process at a lower level
([outlines-core](https://pypi.org/project/outlines-core/0.1.14/), written
in Rust), so I think this is a better alternative to all current
approaches in langchain for structured output.
It also implements the `.with_structured_output` method, so it should be
a drop-in replacement for a lot of applications.
The integration includes:
- **Outlines LLM class**
- **ChatOutlines class**
- **Tutorial Cookbooks**
- **Documentation Page**
- **Validation and error messages**
- **Exposes Outlines Structured output features**
- **Support for multiple backends**
- **Integration and Unit Tests**
Dependencies: `outlines` + additional packages depending on the backend
used.
I am not sure whether the unit tests comply with all requirements; if
not, I suggest simply removing them, since I don't see a useful way to
do it differently.
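A minimal sketch of the chat model with structured output (model name is
illustrative):

```python
from pydantic import BaseModel

from langchain_community.chat_models import ChatOutlines


class Movie(BaseModel):
    title: str
    year: int


chat = ChatOutlines(model="microsoft/Phi-3-mini-4k-instruct")
structured = chat.with_structured_output(Movie)
result = structured.invoke("Name a classic sci-fi movie and its release year.")
```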
### Quick overview:
Chat Models:
<img width="698" alt="image"
src="https://github.com/user-attachments/assets/05a499b9-858c-4397-a9ff-165c2b3e7acc">
Structured Output:
<img width="955" alt="image"
src="https://github.com/user-attachments/assets/b9fcac11-d3e5-4698-b1ae-8c4cb3d54c45">
---------
Co-authored-by: Vadym Barda <vadym@langchain.dev>
Thank you for reading my first PR!
**Description:**
Deduplicate content in the AzureSearch vectorstore.
Currently, by default, the retrieved content is placed both in the
metadata and the page_content of a Document.
This PR removes the content from metadata and leaves it in
page_content.
**Issue:**
Previously, the content was popped from the result before the metadata
was populated.
In #25828 the order was changed, which leads to a response with
duplicated content.
This was not the intention of that PR and seems undesirable.
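The fix amounts to something like this (illustrative sketch, not the
exact diff; `result` is the raw search hit):

```python
from langchain_core.documents import Document

# Pop the content out of the raw result *before* building the metadata,
# so the text lives only in page_content.
content = result.pop("content")
doc = Document(page_content=content, metadata=result)
```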
Looking forward to seeing my contribution in the next version!
Cheers,
Renzo
**Description:** Add tool calling and structured output support for
SambaNovaCloud chat models, docs included
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
# Description
- Adds stopReason to response_metadata in stream and astream calls.
- Excludes NCP_APIGW_API_KEY from required-input validation.
- Removes the warning: Field "model_name" has conflict with protected
namespace "model_".
cc. @vbarda
Description:
* Updated the OpenSearchVectorStore to use the `engine` parameter
captured at `init()` time as the default when adding documents to the
store.
Formatted, Linted, and Tested.
Last week Anthropic released version 0.39.0 of its python sdk, which
enabled support for Python 3.13. This release deleted a legacy
`client.count_tokens` method, which we currently access during init of
the `Anthropic` LLM. Anthropic has replaced this functionality with the
[client.beta.messages.count_tokens()
API](https://github.com/anthropics/anthropic-sdk-python/pull/726).
To enable support for `anthropic >= 0.39.0` and Python 3.13, here we
drop support for the legacy token counting method, and add support for
the new method via `ChatAnthropic.get_num_tokens_from_messages`.
To fully support the token counting API, we update the signature of
`get_num_tokens_from_messages` to accept tools everywhere.
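Example of the updated usage (a sketch; the tool is illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool


@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return "sunny"


llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
n_tokens = llm.get_num_tokens_from_messages(
    [HumanMessage("What is the weather in Paris?")],
    tools=[get_weather],
)
```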
---------
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Description:
* When working with OpenSearchVectorSearch to make
OpenSearchGraphVectorStore (coming soon), I noticed that there wasn't
type hinting for the underlying OpenSearch clients. This fixes that
issue.
* Confirmed tests are still passing with code changes.
Note that there is some additional code duplication now, but I think
this approach is cleaner overall.