Populate API reference for test class properties and test methods for
chat models.
Also:
- Make `standard_chat_model_params` private.
- `pytest.skip` some tests that previously passed when features are not supported (see the sketch below).
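For reference, a minimal sketch of the skip pattern, assuming a capability flag on the test class; the class and property names here are illustrative, not the actual standard-test-suite API:

```python
import pytest


class TestChatModelFeatures:
    """Illustrative stand-in for a standard chat model test class."""

    # Hypothetical capability flag; the real suite exposes similar properties.
    has_tool_calling: bool = False

    def test_tool_calling(self) -> None:
        if not self.has_tool_calling:
            # Previously such tests passed without exercising anything;
            # now they are reported as skipped.
            pytest.skip("Tool calling not supported by this model.")
        # ... exercise tool calling against the chat model ...
```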
This PR updates the Pinecone client to `5.4.0`, as well as its
dependencies (`pinecone-plugin-inference` and
`pinecone-plugin-interface`).
Note: `pinecone-client` is now simply called `pinecone`.
**Question for reviewer(s):** should this PR also update the `pinecone`
dep in [the root dir's `poetry.lock`
file](https://github.com/langchain-ai/langchain/blob/master/poetry.lock#L6729)?
I was unsure. (I don't believe so, because it appears to be pinned to a lower version, likely due to third-party deps such as Unstructured.)
--
TW: @audrey_sage_
---
- To see the specific tasks where the Asana app for GitHub is being
used, see below:
- https://app.asana.com/0/0/1208693659122374
This PR adds an additional method to `Chroma` for retrieving the embedding vectors alongside the most relevant Documents. This is useful when you need to run a post-processing algorithm on the retrieved results based on their vectors, which has been the case for me lately.
Example issue (discussion) requesting this change:
https://github.com/langchain-ai/langchain/discussions/20383
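A hypothetical usage sketch of the kind of post-processing this enables; the method name `similarity_search_with_vectors` and its return shape are assumptions for illustration, not the exact API added here:

```python
import numpy as np
from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

store = Chroma(collection_name="docs", embedding_function=OpenAIEmbeddings())

# Hypothetical call: retrieve documents together with their stored embeddings.
docs, vectors = store.similarity_search_with_vectors("query", k=10)

# Example post-processing: drop near-duplicate results by cosine similarity.
kept_docs, kept_vecs = [], []
for doc, vec in zip(docs, vectors):
    v = np.asarray(vec, dtype=float)
    v /= np.linalg.norm(v)
    if all(float(np.dot(v, w)) < 0.95 for w in kept_vecs):
        kept_docs.append(doc)
        kept_vecs.append(v)
```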
---------
Co-authored-by: ccurme <chester.curme@gmail.com>
Follows on from #27991; updates the langchain-community package to support numpy 2.x versions.
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
**Description:** When an OpenAI assistant is invoked, it creates a run
by default, allowing users to set only a few request fields. The
truncation strategy defaults to `auto`, which includes previous messages in the thread along with the current question until the context length is reached. This causes token usage to grow cumulatively: `consumed_tokens = previous_consumed_tokens + current_consumed_tokens`.
This PR adds support for user-defined truncation strategies, giving
better control over token consumption.
**Issue:** High token consumption.
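A hedged sketch of what a user-defined truncation strategy might look like; how the strategy is threaded through `OpenAIAssistantRunnable` may differ from what is shown here, though the payload itself follows the OpenAI Runs API:

```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

assistant = OpenAIAssistantRunnable(assistant_id="asst_...", as_agent=True)

response = assistant.invoke(
    {
        "content": "Summarize the latest report.",
        # Hypothetical input key: only the 3 most recent thread messages are
        # included in the run, instead of the default "auto" behavior.
        "truncation_strategy": {"type": "last_messages", "last_messages": 3},
    }
)
```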
## Description
This PR addresses the following:
**Fixes Issue #25343:**
- Adds additional logic to parse shallowly nested JSON-encoded strings in tool call arguments, allowing proper parsing of responses like those from Llama 3.1 and 3.2 with nested schemas (see the sketch below).
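A minimal sketch of the parsing idea; the helper names here are illustrative, not the actual implementation:

```python
import json
from typing import Any


def _maybe_parse_json(value: Any) -> Any:
    """Decode a value if it looks like a JSON-encoded object or array."""
    if isinstance(value, str) and value.strip().startswith(("{", "[")):
        try:
            return json.loads(value)
        except json.JSONDecodeError:
            return value
    return value


def _parse_tool_call_args(arguments: dict) -> dict:
    """Handle arguments such as {"person": '{"name": "Jane", "age": 30}'}."""
    return {key: _maybe_parse_json(value) for key, value in arguments.items()}
```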
**Adds Integration Test for Fix:**
- Adds an Ollama-specific integration test to ensure the issue is resolved and to prevent future regressions.
**Fixes Failing Integration Tests:**
- Fixes integration tests that were failing (even prior to these changes) because of the `llama3-groq-tool-use` model. Previously, the tests `test_structured_output_async` and `test_structured_output_optional_param` failed because the model did not issue a tool call in the response. Resolved by switching to `llama3.1`.
## Issue
Fixes #25343.
## Dependencies
No dependencies.
____
Done in collaboration with @ishaan-upadhyay @mirajismail @ZackSteine.
- **Description:** `add_texts` was calling the Marqo client's `get_setting` and handling its response according to the 1.5.x API. This PR updates `add_texts` to account for the updated response payload in 2.x and later while maintaining backward compatibility (see the sketch below). I have also verified this was the only place where the Marqo client was not accounting for the updated API version.
- **Issue:** #28323
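For illustration, a rough sketch of the dual-shape fallback pattern; the payload keys below are hypothetical stand-ins, not the actual Marqo settings schema:

```python
def _uses_image_pointers(settings: dict) -> bool:
    # Assume older (1.5.x-style) payloads nest values under "index_defaults"
    # while newer (2.x-style) payloads are flat; fall back accordingly.
    defaults = settings.get("index_defaults", settings)
    return bool(
        defaults.get("treatUrlsAndPointersAsImages")
        or defaults.get("treat_urls_and_pointers_as_images")
    )
```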
---------
Co-authored-by: ccurme <chester.curme@gmail.com>
_This should only be merged once neo4j is included under libs/partners._
# **Description:**
Neo4jVector from langchain-community is being moved to langchain-neo4j:
[see
link](https://github.com/langchain-ai/langchain-neo4j/blob/main/libs/neo4j/langchain_neo4j/vectorstores/neo4j_vector.py#L436).
To solve the issue below, this PR adds an attempt to import
`Neo4jVector` from the partner package `langchain-neo4j`, similarly to
the other partner packages.
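A minimal sketch of the import pattern, mirroring how other partner packages are handled (not the exact diff):

```python
try:
    # Prefer the partner package if it is installed.
    from langchain_neo4j import Neo4jVector
except ImportError:
    Neo4jVector = None


def _is_neo4j_vectorstore(vectorstore: object) -> bool:
    return Neo4jVector is not None and isinstance(vectorstore, Neo4jVector)
```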
# **Issue:**
When initializing `SelfQueryRetriever`, the following error is raised:
```
ValueError: Self query retriever with Vector Store type <class 'langchain_neo4j.vectorstores.neo4j_vector.Neo4jVector'> not supported.
```
[See related
issue](https://github.com/langchain-ai/langchain/issues/19748).
# **Dependencies:**
- langchain-neo4j
v0.4 of the Ollama Python SDK is already installed via the lock file in CI, but our current implementation is not compatible with it.
This also addresses an issue introduced in
https://github.com/langchain-ai/langchain/pull/28299. @RyanMagnuson
would you mind explaining the motivation for that change? From what I
can tell the Ollama SDK [does not support
kwargs](6c44bb2729/ollama/_client.py (L286)).
Previously, unsupported kwargs were ignored; they now raise a `TypeError`.
Some of LangChain's standard tests expect `tool_choice` to be supported, so here we catch it in `bind_tools` so that it is ignored rather than passed through to the client.
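A rough sketch of that behavior, not the exact implementation:

```python
from typing import Any, Optional, Sequence

from langchain_core.utils.function_calling import convert_to_openai_tool


class ChatOllamaSketch:
    """Illustrative stand-in; the real change lives in ChatOllama.bind_tools."""

    def bind_tools(
        self,
        tools: Sequence[Any],
        *,
        tool_choice: Optional[Any] = None,  # accepted, then dropped
        **kwargs: Any,
    ) -> dict:
        # `tool_choice` is intentionally not forwarded: the Ollama client
        # raises TypeError on kwargs it does not recognize.
        formatted_tools = [convert_to_openai_tool(tool) for tool in tools]
        return {"tools": formatted_tools, **kwargs}
```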
Adds deprecation notices for Neo4j components moving to the
`langchain_neo4j` partner package.
- Adds deprecation warnings to all Neo4j-related classes and functions that have been migrated to the new `langchain_neo4j` partner package (see the sketch below)
- Updates documentation to reference the new `langchain_neo4j` package
instead of `langchain_community`
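For example, a hedged sketch of the deprecation pattern (the version numbers and decorator arguments are placeholders, not the actual notices):

```python
from langchain_core._api import deprecated
from langchain_core.vectorstores import VectorStore


@deprecated(
    since="0.3.8",  # placeholder version
    removal="1.0",  # placeholder version
    alternative_import="langchain_neo4j.Neo4jVector",
)
class Neo4jVector(VectorStore):
    """Deprecated community implementation; use langchain_neo4j instead."""

    ...
```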