This pull request updates the documentation in
`docs/docs/how_to/custom_tools.ipynb` to reflect the recommended
approach for generating JSON schemas in Pydantic. Specifically, it
replaces instances of the deprecated `schema()` method with the newer
and more versatile `model_json_schema()`.
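For illustration, a minimal before/after sketch (the model below is a made-up example, not one from the notebook):
```python
from pydantic import BaseModel


class CalculatorInput(BaseModel):
    a: int
    b: int


CalculatorInput.schema()             # deprecated in Pydantic v2
CalculatorInput.model_json_schema()  # recommended replacement
```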
**Description:**
This PR updates the codebase to reflect the deprecation of the AgentType
feature. It includes the following changes:
Documentation Update:
- Added a deprecation notice to the `AgentType` class comment.
- Provided a reference to the official LangChain migration guide for transitioning to LangGraph agents.

Reference link: https://python.langchain.com/docs/how_to/migrate_agent/
**Twitter handle:** @hrrrriiiishhhhh
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
## Description
To integrate ModelScope inference API endpoints for Embeddings, LLMs,
and ChatModels, install the package
`langchain-modelscope-integration` (as discussed in issue #28928). This
is necessary because the package name `langchain-modelscope` was already
registered by another party.
ModelScope is a premier platform designed to connect model checkpoints
with model applications. It provides the necessary infrastructure to
share open models and promote model-centric development. For more
information, visit the GitHub page:
[ModelScope](https://github.com/modelscope).
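As a rough orientation, a usage sketch; the import path, class name, and model name are assumptions based on the package description, not verified against the release:
```python
# Assumed entry point; check the package docs for the exact names.
from langchain_modelscope import ModelScopeChatEndpoint

llm = ModelScopeChatEndpoint(model="Qwen/Qwen2.5-Coder-32B-Instruct")
llm.invoke("Hello, who are you?")
```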
Thank you for contributing to LangChain!
- [x] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is
being modified. Use "docs: ..." for purely docs changes, "infra: ..."
for CI changes.
- Example: "community: add foobar LLM"
- **Description:** Update docs to add BoxBlobLoader and extra_fields to
all Box connectors.
- **Issue:** N/A
- **Dependencies:** N/A
- **Twitter handle:** @BoxPlatform
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
Additional guidelines:
- Make sure optional dependencies are imported within a function.
- Please do not add dependencies to pyproject.toml files (even optional
ones) unless they are required for unit tests.
- Most PRs should not touch more than one package.
- Changes should be backwards compatible.
- If you are adding something to community, do not re-import it in
langchain.
If no one reviews your PR within a few days, please @-mention one of
baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.
- **Description:**
This PR addresses an issue with the `stop_sequences` field in the
`ChatGroq` class. Currently, the field is defined as:
```python
stop: Optional[Union[List[str], str]] = Field(None, alias="stop_sequences")
```
This causes language servers (LSPs) to raise an error indicating that
the `stop_sequences` parameter must be implemented. The issue occurs
because some LSPs treat the positional `Field(None, alias="stop_sequences")`
differently from the explicit `Field(default=None, alias="stop_sequences")`.

To resolve the issue, the field is updated to:
```python
stop: Optional[Union[List[str], str]] = Field(default=None, alias="stop_sequences")
```
While the issue does not affect runtime behavior, the change ensures
compatibility with LSPs and improves the development experience.
- **Issue:** N/A
- **Dependencies:** None
- Title: Fix typo to correct "embedding" to "embeddings" in PGVector
initialization example
- Problem: There is a typo in the example code for initializing the
PGVector class. The current parameter `embedding` is incorrect, as the
class expects `embeddings`.
- Correction: The corrected code snippet is:
```python
vector_store = PGVector(
    embeddings=embeddings,
    collection_name="my_docs",
    connection="postgresql+psycopg://...",
)
```
In a previous commit, the cached model key for this model was omitted,
so when using the "gpt-4o-2024-11-20" model, the token count in the
callback appeared as 0 and the cost was recorded as $0.
This PR adds the model and cost information so that the token count and
cost are reported correctly for this model.
- The message before modification is as follows.
```
Tokens Used: 0
Prompt Tokens: 0
Prompt Tokens Cached: 0
Completion Tokens: 0
Reasoning Tokens: 0
Successful Requests: 0
Total Cost (USD): $0.0
```
- The message after modification is as follows.
```
Tokens Used: 3783
Prompt Tokens: 3625
Prompt Tokens Cached: 2560
Completion Tokens: 158
Reasoning Tokens: 0
Successful Requests: 1
Total Cost (USD): $0.010642500000000001
```
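For reference, the counts above come from the standard OpenAI callback; a minimal sketch of reproducing them, assuming `langchain-openai` is installed:
```python
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-2024-11-20")
with get_openai_callback() as cb:
    llm.invoke("Hello!")
# With the cost entry in place, cb now reports non-zero tokens and cost.
print(cb)
```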
Hi Erick. Following up on a previous attempt, we have now created a separate
package for the CrateDB adapter, called `langchain-cratedb`, as advised.
Other than registering the package within `libs/packages.yml`, this
patch includes a minimal amount of documentation to accompany the advent
of this new package. Let us know about any mistakes we made, or changes
you would like to see. Thanks, Andreas.
## About
- **Description:** Register a new database adapter package,
`langchain-cratedb`, providing traditional vector store, document
loader, and chat message history features for a start.
- **Addressed to:** @efriis, @eyurtsev
- **References:** GH-27710
- **Preview:** [Providers » More »
CrateDB](https://langchain-git-fork-crate-workbench-register-la-4bf945-langchain.vercel.app/docs/integrations/providers/cratedb/)
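For orientation, a minimal vector store sketch; the class name, parameters, and connection string format are assumptions drawn from the package description, not verified against the release:
```python
# Assumed API shape; see the package docs for the exact interface.
from langchain_core.embeddings import FakeEmbeddings
from langchain_cratedb import CrateDBVectorStore

store = CrateDBVectorStore.from_texts(
    texts=["CrateDB is a distributed SQL database."],
    embedding=FakeEmbeddings(size=1536),
    connection="crate://crate@localhost:4200/",
)
```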
## Status
- **PyPI:** https://pypi.org/project/langchain-cratedb/
- **GitHub:** https://github.com/crate/langchain-cratedb
- **Documentation (CrateDB):**
https://cratedb.com/docs/guide/integrate/langchain/
- **Documentation (LangChain):** _This PR._
## Backlog?
Does the following checklist item apply to this kind of patch?
> - [ ] **Add tests and docs**: If you're adding a new integration,
please include
> 1. a test for the integration, preferably unit tests that do not rely
on network access,
> 2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
## Q&A
1. Notebooks that use the LangChain CrateDB adapter are currently at
[CrateDB LangChain
Examples](https://github.com/crate/cratedb-examples/tree/main/topic/machine-learning/llm-langchain),
and the documentation refers to them. Because they are derived from very
old blueprints from the LangChain 0.0.x era, we think they need a
refresh before being added to `docs/docs/integrations`. Would it be
acceptable to merge this minimal package registration + documentation
patch, which already includes valid code snippets in `cratedb.mdx`, and
add the corresponding notebooks in a subsequent patch?
2. How can the package be added to the tabular list of _Integration
Packages_ enumerated on the [documentation entrypoint page about
Providers](https://python.langchain.com/docs/integrations/providers/)?
/cc Please also review, @ckurze, @wierdvanderhaar, @kneth,
@simonprickett, if you can find the time. Thanks!
- **Description:** `embed_documents` and `embed_query` were raising
the error stated in the issue. The `Llama` client returns the embeddings
as a nested list, which the current implementation did not account for,
hence the stated error.
- **Issue:** #28813
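For context, a sketch of the kind of unnesting involved; this illustrates the failure mode rather than quoting the actual patch:
```python
from typing import Any, List


def _flatten_embedding(emb: Any) -> List[float]:
    # The Llama client may return a nested list ([[0.1, ...]]) for a single
    # input; unwrap it so callers always receive a flat list of floats.
    if emb and isinstance(emb[0], (list, tuple)):
        return [float(v) for sub in emb for v in sub]
    return [float(v) for v in emb]
```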
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
community: optimize DataFrame document loader
**Description:**
Simplify the `lazy_load` method in the DataFrame document loader by
combining text extraction and metadata cleanup into a single operation.
This makes the code more concise while maintaining the same
functionality.
**Issue:** N/A
**Dependencies:** None
**Twitter handle:** N/A
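For illustration, a sketch of the combined operation in the DataFrame loader's `lazy_load` described above (the internals are paraphrased, not quoted from the diff):
```python
from typing import Iterator

import pandas as pd
from langchain_core.documents import Document


def lazy_load(df: pd.DataFrame, page_content_column: str) -> Iterator[Document]:
    for _, row in df.iterrows():
        data = row.to_dict()
        # Popping the text column yields the page content and leaves the
        # remaining fields behind as metadata in a single operation.
        yield Document(page_content=data.pop(page_content_column), metadata=data)
```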
- **Description:** The `aload` function, contrary to its name, was not an
asynchronous function, so it could not run concurrently with other
coroutines.
- **Issue:** #28336
- **Test:** Done
- **Docs:**
[here](e0a95e5646/docs/docs/integrations/document_loaders/web_base.ipynb (L201))
- **Lint:** All checks passed
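With the change, `aload` is awaitable and can be gathered with other coroutines; a minimal sketch (URLs are placeholders):
```python
import asyncio

from langchain_community.document_loaders import WebBaseLoader


async def main() -> None:
    loaders = [WebBaseLoader(url) for url in ("https://example.com", "https://example.org")]
    # Each aload() is now a coroutine, so the loads run concurrently.
    results = await asyncio.gather(*(loader.aload() for loader in loaders))
    print([len(docs) for docs in results])


asyncio.run(main())
```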
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
**Issue**:
This tutorial depends on langgraph; however, langgraph is not mentioned
in the installation section of the tutorial, which raises an error when
copying and pasting the code snippets as-is.

**Solution**:
Add the langgraph package to the installation section, in both the pip
and Conda tabs, since the tutorial requires it.
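For reference, the pip form of the added line (the Conda tab gets the equivalent):
```
pip install langgraph
```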
**Description:**
Added the ability to set the `prefix` attribute, preventing this error:
```
httpx.HTTPStatusError: Error response 400 while fetching https://api.mistral.ai/v1/chat/completions: {"object":"error","message":"Expected last role User or Tool (or Assistant with prefix True) for serving but got assistant","type":"invalid_request_error","param":null,"code":null}
```
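A hedged sketch of how the attribute might be set; the exact mechanism is assumed from the PR description and not verified against `langchain-mistralai`:
```python
# Assumed usage: Mistral requires the last message to be user/tool, or an
# assistant message explicitly marked as a prefix; additional_kwargs is one
# plausible place to carry that flag.
from langchain_core.messages import AIMessage

msg = AIMessage(content="The answer is", additional_kwargs={"prefix": True})
```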
Co-authored-by: Sylvain DEPARTE <sylvain.departe@wizbii.com>
- [x] **PR title**: "community: adding langchain-predictionguard
partner package documentation"
- [x] **PR message**:
- **Description:** This PR adds documentation for the
langchain-predictionguard package to the main langchain repo and
deprecates the current Prediction Guard LLMs package. The LLMs package
was previously broken, so I also updated it one final time so that it
keeps working from this point onward. This enables users to chat with
LLMs through the Prediction Guard ecosystem.
- **Package Links**:
- [PyPI](https://pypi.org/project/langchain-predictionguard/)
- [Github
Repo](https://www.github.com/predictionguard/langchain-predictionguard)
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** [@predictionguard](https://x.com/predictionguard)
- [x] **Add tests and docs**: All docs have been added for the partner
package, and the current LLMs package test was updated to reflect the
changes.
- [x] **Lint and test**: Linting tests are all passing.
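For reference, a minimal usage sketch of the partner package (the model name is illustrative; see the package docs):
```python
from langchain_predictionguard import ChatPredictionGuard

llm = ChatPredictionGuard(model="Hermes-3-Llama-3.1-8B")
llm.invoke("Tell me a joke about computers.")
```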
---------
Co-authored-by: ccurme <chester.curme@gmail.com>
Issue: several Google integrations are implemented in repos under the
[github.com/googleapis](https://github.com/googleapis) organization, and
these integrations are easy to overlook there, even though they are
essential.
Change: added a list of all packages that provide Google integrations,
along with a description of this situation.
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: ccurme <chester.curme@gmail.com>
When using tools with optional parameters, the parameter `type` is no
longer available since the LangChain 0.3 update (because of the Pydantic
v2 upgrade?); there is now an `anyOf` field instead. This results in the
`type` being `None` in the chat request for the tool parameter, and the
LLM call fails with the error:
```
oci.exceptions.ServiceError: {'target_service': 'generative_ai_inference',
'status': 400, 'code': '400',
'opc-request-id': '...',
'message': 'Parameter definition must have a type.',
'operation_name': 'chat'
...
}
```
Example code that fails:
```python
from typing import Optional

from langchain_community.chat_models.oci_generative_ai import ChatOCIGenAI
from langchain_core.tools import tool

llm = ChatOCIGenAI(
    model_id="cohere.command-r-plus",
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
    compartment_id="ocid1.compartment.oc1...",
    auth_profile="your_profile",
    auth_type="API_KEY",
    model_kwargs={"temperature": 0, "max_tokens": 3000},
)


@tool
def test(example: Optional[str] = None):
    """This is the tool to use to test things

    Args:
        example: example variable, defaults to None
    """
    return "this is a test"


llm_with_tools = llm.bind_tools([test])
result = llm_with_tools.invoke("can you make a test for g")
```
This PR sets the param type to `any` in that case, and fixes the
problem.
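A sketch of the shape of the fix; the helper below is illustrative, not the exact code in the diff:
```python
def _resolve_param_type(prop: dict) -> str:
    # Pydantic v2 renders Optional[...] fields as an "anyOf" union with no
    # top-level "type"; fall back to "any" so OCI accepts the definition.
    if "type" in prop:
        return prop["type"]
    return "any"
```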
Co-authored-by: Erick Friis <erick@langchain.dev>
## Description
(This PR has contributions from @khushiDesai, @ashvini8, and
@ssumaiyaahmed).
This PR addresses **Issue #11229**, which calls for SQL support in
document parsing. The support is integrated into the generic
TreeSitter-based `LanguageParser`, allowing LangChain users to easily
load SQL codebases as smaller, manageable "documents."
This pull request adds a new `SQLSegmenter` class, which provides the
SQL integration.
## Issue
**Issue #11229**: Add support for a variety of languages to
LanguageParser
## Testing
We created a file `test_sql.py` with several tests to ensure the
`SQLSegmenter` is functional. Below are the tests we added:
- `test_is_valid`: checks SQL validity.
- `test_extract_functions_classes`: extracts individual SQL statements.
- `test_simplify_code`: simplifies SQL code with comments.
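For reference, a usage sketch wiring the new segmenter through `LanguageParser`, assuming it registers like the existing language segmenters:
```python
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import LanguageParser

# Load every .sql file under ./db and split it into per-statement documents.
loader = GenericLoader.from_filesystem(
    "./db",
    glob="**/*.sql",
    parser=LanguageParser(language="sql"),
)
docs = loader.load()
```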
---------
Co-authored-by: Syeda Sumaiya Ahmed <114104419+ssumaiyaahmed@users.noreply.github.com>
Co-authored-by: ashvini hunagund <97271381+ashvini8@users.noreply.github.com>
Co-authored-by: Khushi Desai <khushi.desai@advantawitty.com>
Co-authored-by: Khushi Desai <59741309+khushiDesai@users.noreply.github.com>
Co-authored-by: ccurme <chester.curme@gmail.com>