- [x] **PR title**: "docs: add langchain-pull-md Markdown loader"
- [x] **PR message**:
- **Description:** This PR introduces the `langchain-pull-md` package to
the LangChain community. It adds a document loader that uses the
pull.md service to convert URLs into Markdown, which is particularly
useful for web pages rendered with JavaScript frameworks such as React,
Angular, or Vue.js. The loader converts URLs to Markdown directly,
without local rendering, keeping conversion reliable and reducing
server load. A usage sketch follows the links below.
- **Issue:** NA
- **Dependencies:** requests >=2.25.1
- **Twitter handle:** https://x.com/eugeneevstafev?s=21
- [x] **Add tests and docs**:
1. Added unit tests to verify URL checking and conversion
functionalities.
2. Created a comprehensive example notebook detailing the usage of the
new loader.
- [x] **Lint and test**:
- Completed local testing using `make format`, `make lint`, and `make
test` commands as per the LangChain contribution guidelines.
**Related Links:**
- [Package Repository](https://github.com/chigwell/langchain-pull-md)
- [PyPI Package](https://pypi.org/project/langchain-pull-md/)
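A minimal usage sketch, assuming the loader is exposed as `PullMdLoader` with a `url` parameter (check the package repository for the exact API):
```python
# Hypothetical usage; the class name and parameters are assumptions,
# see the langchain-pull-md repository for the real API.
from langchain_pull_md import PullMdLoader

# Convert a JavaScript-rendered page to Markdown via the pull.md service.
loader = PullMdLoader(url="https://example.com")
documents = loader.load()
print(documents[0].page_content)  # Markdown extracted from the rendered page
```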
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
Add an option to return content and artifacts, so that the full
information of the retrieved documents can also be accessed.
The documents are returned as a list of dicts in the `artifacts`
property when the `response_format` parameter is set to
`"content_and_artifact"`.
It defaults to `"content"` to keep the current behavior.
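A sketch of how the option would be used, assuming it is exposed on `create_retriever_tool` (the entry point and surrounding details are assumptions; see the diff for where the parameter actually lands):
```python
# Sketch; entry point assumed, not confirmed by this message.
from langchain.tools.retriever import create_retriever_tool

tool = create_retriever_tool(
    retriever,  # any BaseRetriever
    name="search_docs",
    description="Search the document store.",
    response_format="content_and_artifact",  # default "content" keeps current behavior
)
# When the tool runs from a tool call, the resulting ToolMessage carries the
# retrieved documents (serialized as a list of dicts) alongside the rendered
# string content.
```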
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
(Inspired by https://github.com/langchain-ai/langchain/issues/26918)
We rely on some deprecated public functions in the hot path for tool
binding (`convert_pydantic_to_openai_function`,
`convert_python_function_to_openai_function`, and
`format_tool_to_openai_function`). My understanding is that what is
deprecated is not the functionality they implement, but use of them in
the public API -- we expect to continue to rely on them.
Here we update these functions to be private and not deprecated. We keep
the public, deprecated functions as simple wrappers that can be safely
deleted.
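Schematically, the pattern looks like this (signatures abbreviated; version strings illustrative):
```python
from langchain_core._api import deprecated


def _convert_pydantic_to_openai_function(model, *, name=None, description=None):
    """Private implementation used directly on the hot path (no wrapper overhead)."""
    ...  # the actual conversion logic moves here unchanged


@deprecated(since="0.1.16", removal="1.0", alternative="convert_to_openai_function")
def convert_pydantic_to_openai_function(model, *, name=None, description=None):
    """Public, deprecated wrapper kept so existing imports do not break."""
    return _convert_pydantic_to_openai_function(model, name=name, description=description)
```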
The `@deprecated` wrapper adds considerable latency due to its use of
the `inspect` module. This update speeds up `bind_tools` by a factor of
~100x:
(Before/after timing screenshots not reproduced here.)
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
- [x] **PR title**: "docs: fix typo"
- [x] **PR message**:
- **Description:** a minor typo fix
- **Issue:** NA
- **Dependencies:** NA
- **Twitter handle:** NA
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. ~~a test for the integration, preferably unit tests that do not rely
on network access,~~
2. ~~an example notebook showing its use. It lives in
`docs/docs/integrations` directory.~~
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
Add a retriever to interact with the Dappier APIs, along with an example
notebook. The retriever can be invoked with:
```python
from langchain_dappier import DappierRetriever

retriever = DappierRetriever(
    data_model_id="dm_01jagy9nqaeer9hxx8z1sk1jx6",
    k=5,
)

retriever.invoke("latest tech news")
```
This retrieves 5 documents related to the latest news in the tech
sector. The included notebook goes into deeper detail on controlling the
retrieval, such as selecting a data model, the number of documents to
return, a site-domain reference, the minimum number of articles from the
reference domain, and the search algorithm, as well as including the
retriever in a chain.
The integration package can be found here:
https://github.com/DappierAI/langchain-dappier
Problem:
`Optional` is used in one example without being imported, which raises a
`NameError` when the example is copied into an IDE or Jupyter Lab.
(Error screenshot omitted.)
Solution:
Import `Optional` from the `typing_extensions` module; that solves the
problem.
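For reference, the fix is just the missing import at the top of the example:
```python
from typing_extensions import Optional
```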
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
## Description
This pull request updates the documentation for FAISS regarding filter
construction, following the changes made in commit `df5008f`.
## Issue
None. This is a follow-up PR for documentation of
[#28207](https://github.com/langchain-ai/langchain/pull/28207)
## Dependencies:
None.
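For context on the filter construction the updated docs cover, a runnable sketch of the two simple filter styles (the advanced operator support comes from #28207; this example uses fake embeddings so it runs offline, and requires `faiss-cpu`):
```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document

docs = [
    Document(page_content="FAISS filter docs updated", metadata={"source": "news", "year": 2024}),
    Document(page_content="Vector search basics", metadata={"source": "blog", "year": 2021}),
]
vector_store = FAISS.from_documents(docs, FakeEmbeddings(size=16))

# Dict filter: equality match on a metadata field.
print(vector_store.similarity_search("filters", k=2, filter={"source": "news"}))

# Callable filter: an arbitrary predicate over the metadata dict.
print(vector_store.similarity_search("filters", k=2, filter=lambda md: md["year"] >= 2024))
```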
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
This commit updates the documentation and package registry for the
FalkorDB Chat Message History integration.
**Changes:**
- Added a comprehensive example notebook
`falkordb_chat_message_history.ipynb` demonstrating how to use FalkorDB
for session-based chat message storage (a usage sketch follows below).
- Added a provider notebook for FalkorDB
- Updated libs/packages.yml to register FalkorDB as an integration
package, following LangChain's new guidelines for community
integrations.
**Notes:**
- This update aligns with LangChain's process for registering new
integrations via documentation updates and package registry
modifications.
- No functional or core package changes were made in this commit.
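A minimal sketch of what the notebook demonstrates; the import path and constructor parameters below are assumptions based on similar message-history integrations, so defer to the notebook for the exact API:
```python
# Hypothetical API sketch; see falkordb_chat_message_history.ipynb for the real interface.
from langchain_falkordb import FalkorDBChatMessageHistory  # assumed import path

history = FalkorDBChatMessageHistory(
    host="localhost",        # assumed connection parameters
    port=6379,
    session_id="session-1",  # messages are stored and retrieved per session
)
history.add_user_message("Hi!")
history.add_ai_message("Hello! How can I help?")
print(history.messages)
```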
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
- **Description:** Updated the AzureML endpoint to the latest API route:
changed the existing `/chat/completions` path to
`/models/chat/completions` in
`libs/community/langchain_community/llms/azureml_endpoint.py`.
- **Issue:** #25702
---------
Co-authored-by: = <=>
This pull request updates the documentation in
`docs/docs/how_to/custom_tools.ipynb` to reflect the recommended
approach for generating JSON schemas in Pydantic. Specifically, it
replaces instances of the deprecated `schema()` method with the newer
and more versatile `model_json_schema()`.
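For illustration, the swap amounts to the following (the schema class here is illustrative):
```python
from pydantic import BaseModel, Field


class CalculatorInput(BaseModel):
    a: int = Field(description="first number")
    b: int = Field(description="second number")


# Deprecated in Pydantic v2:
# CalculatorInput.schema()

# Recommended replacement:
print(CalculatorInput.model_json_schema())
```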
**Description:**
This PR updates the codebase to reflect the deprecation of the AgentType
feature. It includes the following changes:
Documentation Update:
- Added a deprecation notice to the `AgentType` class comment.
- Provided a reference to the official LangChain migration guide for
transitioning to LangGraph agents:
https://python.langchain.com/docs/how_to/migrate_agent/
**Twitter handle:** @hrrrriiiishhhhh
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
## Description
To integrate ModelScope inference API endpoints for Embeddings, LLMs,
and ChatModels, install the `langchain-modelscope-integration` package
(as discussed in issue #28928). This is necessary because the package
name `langchain-modelscope` was already registered by another party.
ModelScope is a premier platform designed to connect model checkpoints
with model applications. It provides the necessary infrastructure to
share open models and promote model-centric development. For more
information, visit the GitHub page:
[ModelScope](https://github.com/modelscope).
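A minimal usage sketch, assuming the package exposes a chat model along these lines (the import path, class name, and model id are assumptions; check the package docs):
```python
# Hypothetical usage; import path, class, and model id are assumptions.
from langchain_modelscope import ModelScopeChatEndpoint

llm = ModelScopeChatEndpoint(model="Qwen/Qwen2.5-Coder-32B-Instruct")
print(llm.invoke("Hello!").content)
```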
- **Description:** Update docs to add `BoxBlobLoader` and `extra_fields`
to all Box connectors (a usage sketch follows this checklist).
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** @BoxPlatform
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
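A sketch of the `BoxBlobLoader` usage the new docs describe; the import path is assumed and the ids and field names below are placeholders:
```python
# Hypothetical values; auth and ids are placeholders, see the langchain-box docs.
from langchain_box.blob_loaders import BoxBlobLoader  # assumed import path

loader = BoxBlobLoader(
    box_developer_token="<token>",    # or JWT/CCG auth, per the docs
    box_file_ids=["12345", "67890"],  # load specific files by id
    extra_fields=["shared_link"],     # new: extra Box fields surfaced in metadata
)
for blob in loader.yield_blobs():
    print(blob.metadata)
```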
- **Description:**
This PR addresses an issue with the `stop_sequences` field in the
`ChatGroq` class. Currently, the field is defined as:
```python
stop: Optional[Union[List[str], str]] = Field(None, alias="stop_sequences")
```
This causes the language server (LSP) to raise an error indicating that
the `stop_sequences` parameter must be implemented. The issue occurs
because `Field(None, alias="stop_sequences")` is treated differently
from `Field(default=None, alias="stop_sequences")` by static analysis.

To resolve the issue, the field is updated to:
```python
stop: Optional[Union[List[str], str]] = Field(default=None, alias="stop_sequences")
```
While the issue does not affect runtime behavior, the fix ensures
compatibility with LSPs and improves the development experience.
- **Issue:** N/A
- **Dependencies:** None
- Title: Fix typo to correct "embedding" to "embeddings" in the PGVector
initialization example
- Problem: There is a typo in the example code for initializing the
`PGVector` class. The parameter name "embedding" is incorrect; the class
expects "embeddings".
- Correction: The corrected code snippet is:
```python
vector_store = PGVector(
    embeddings=embeddings,
    collection_name="my_docs",
    connection="postgresql+psycopg://...",
)
```
In the previous commit, the cached-model key for this model was omitted
from the cost table. As a result, when using the `gpt-4o-2024-11-20`
model, the token count in the callback appeared as 0 and the cost was
recorded as $0.
We add the model and cost information so that the token count and cost
can be displayed for the respective model.
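For reference, the summaries below come from the standard OpenAI callback pattern; a minimal sketch (the model name is the one this PR adds; the prompt is illustrative):
```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-2024-11-20")
with get_openai_callback() as cb:
    llm.invoke("Hello!")
print(cb)  # prints the token-usage and cost summary shown below
```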
- The message before modification is as follows.
```
Tokens Used: 0
Prompt Tokens: 0
Prompt Tokens Cached: 0
Completion Tokens: 0
Reasoning Tokens: 0
Successful Requests: 0
Total Cost (USD): $0.0
```
- The message after modification is as follows.
```
Tokens Used: 3783
Prompt Tokens: 3625
Prompt Tokens Cached: 2560
Completion Tokens: 158
Reasoning Tokens: 0
Successful Requests: 1
Total Cost (USD): $0.010642500000000001
```