This PR adds an additional method to `Chroma` for retrieving the embedding
vectors alongside the most relevant Documents. This is useful when you need
to run a post-processing algorithm on the retrieved results that operates on
the vectors, which has been the case for me lately.
Example issue (discussion) requesting this change:
https://github.com/langchain-ai/langchain/discussions/20383
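A minimal sketch of the intended usage, assuming the new method is exposed as `similarity_search_with_vectors` (the actual name and signature added by this PR may differ):

```python
from langchain_chroma import Chroma
from langchain_core.embeddings import FakeEmbeddings

# Sketch only; `similarity_search_with_vectors` is an assumed name for the new
# method and may not match what the PR actually adds.
store = Chroma(embedding_function=FakeEmbeddings(size=32))
store.add_texts(["alpha", "beta", "gamma"])

# Retrieve documents together with their embedding vectors so a post-processing
# step (e.g. vector-based re-ranking) can operate on the raw embeddings.
results = store.similarity_search_with_vectors("alpha", k=2)  # assumed API
for doc, vector in results:
    print(doc.page_content, len(vector))
```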
---------
Co-authored-by: ccurme <chester.curme@gmail.com>
Follows on from #27991, updates the langchain-community package to
support numpy 2 versions
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
**Description:** When an OpenAI assistant is invoked, it creates a run
by default, allowing users to set only a few request fields. The
truncation strategy is set to auto, which includes previous messages in
the thread along with the current question until the context length is
reached. This causes token usage to grow incrementally:
consumed_tokens = previous_consumed_tokens + current_consumed_tokens.
This PR adds support for user-defined truncation strategies, giving
better control over token consumption.
**Issue:** High token consumption.
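A hedged sketch of how a user-defined truncation strategy might be passed when invoking the assistant; the exact way the option is threaded through in this PR may differ:

```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

# Sketch only: whether `truncation_strategy` is accepted in the invoke input or
# elsewhere depends on this PR's implementation; the assistant id is a placeholder.
assistant = OpenAIAssistantRunnable(assistant_id="asst_123", as_agent=False)

# Limit the run to the last few thread messages instead of the default "auto",
# capping how many previous tokens are re-sent on every invocation.
response = assistant.invoke(
    {
        "content": "Summarize our discussion so far.",
        "truncation_strategy": {"type": "last_messages", "last_messages": 4},
    }
)
```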
**Description:** Added a cookbook that showcases how to build a RAG agent
pipeline locally using open-source LLM and embedding models on an Intel
Xeon CPU. It uses the Llama 3.1 8B model served by Ollama as the LLM and
nomic-embed-text-v1.5 from NomicEmbeddings for embeddings. The whole
experiment was developed and tested on a 4th Gen Intel Xeon Scalable CPU.
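An illustrative sketch of the local stack the cookbook uses; the cookbook's actual pipeline (agent loop, retriever settings, vector store choice, sample data) may differ:

```python
from langchain_ollama import ChatOllama
from langchain_nomic import NomicEmbeddings
from langchain_chroma import Chroma
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Local LLM via Ollama and local Nomic embeddings, as in the cookbook; the
# wiring below is illustrative only.
llm = ChatOllama(model="llama3.1:8b", temperature=0)
embeddings = NomicEmbeddings(model="nomic-embed-text-v1.5", inference_mode="local")

store = Chroma(embedding_function=embeddings)
store.add_texts(["4th Gen Intel Xeon Scalable CPUs include AMX extensions for AI workloads."])
retriever = store.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
)
chain = (
    {"context": retriever, "question": lambda x: x}
    | prompt
    | llm
    | StrOutputParser()
)
print(chain.invoke("What do 4th Gen Xeon CPUs provide for AI?"))
```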
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
## Description
This PR addresses the following:
**Fixes Issue #25343:**
- Adds additional logic to parse shallowly nested JSON-encoded strings
in tool call arguments, allowing proper parsing of responses like
those of Llama 3.1 and 3.2 with nested schemas (a sketch of the intended
behavior appears at the end of this section).
**Adds Integration Test for Fix:**
- Adds an Ollama-specific integration test to ensure the issue is
resolved and to prevent regressions in the future.
**Fixes Failing Integration Tests:**
- Fixes integration tests that were failing even prior to these changes
because of the `llama3-groq-tool-use` model. Previously, the tests
`test_structured_output_async` and `test_structured_output_optional_param`
failed because the model did not issue a tool call in the response.
Resolved by switching to `llama3.1`.
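The sketch below illustrates the parsing behavior this fix targets; it is not the actual implementation in langchain-ollama, and the example payload is invented:

```python
import json
from typing import Any

# Models such as Llama 3.1/3.2 may return nested objects as JSON-encoded
# strings inside tool-call arguments; decode them one level deep.
def parse_shallow_json_args(arguments: dict[str, Any]) -> dict[str, Any]:
    parsed: dict[str, Any] = {}
    for key, value in arguments.items():
        if isinstance(value, str):
            try:
                decoded = json.loads(value)
            except json.JSONDecodeError:
                parsed[key] = value
                continue
            # Only unwrap values that decode to structured data; leave plain
            # strings (e.g. "Paris") untouched.
            parsed[key] = decoded if isinstance(decoded, (dict, list)) else value
        else:
            parsed[key] = value
    return parsed

raw = {"location": "Paris", "options": '{"units": "metric", "days": 3}'}
print(parse_shallow_json_args(raw))
# {'location': 'Paris', 'options': {'units': 'metric', 'days': 3}}
```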
## Issue
Fixes #25343.
## Dependencies
No dependencies.
____
Done in collaboration with @ishaan-upadhyay @mirajismail @ZackSteine.
**Description:** Fixed a grammatical error in the documentation section
on embedding vectors. Replaced "Embedding vectors can be comparing" with
"Embedding vectors can be compared."
**Issue:** N/A (This is a minor documentation fix with no linked issue.)
**Dependencies:** None.
- **Description:** `add_texts` was calling `get_setting` on the Marqo client
in a way that assumed the 1.5.x API. This PR updates `add_texts` to account
for the updated response payload in 2.x and later while maintaining backward
compatibility (see the hedged sketch after this list). I have also verified
this was the only place where the Marqo client did not account for the
updated API version.
- **Issue:** #28323
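An illustrative compatibility sketch only; the real Marqo payload shapes and the exact check performed in `add_texts` may differ, and the key names below are assumptions:

```python
# Handle both the legacy nested settings payload and a flattened one.
def get_index_defaults(settings: dict) -> dict:
    # Marqo 1.5.x nests index settings under "index_defaults"; 2.x and later
    # are assumed here to return the settings at the top level.
    return settings.get("index_defaults", settings)

legacy_payload = {"index_defaults": {"treat_urls_and_pointers_as_images": True}}
new_payload = {"treat_urls_and_pointers_as_images": True}

assert get_index_defaults(legacy_payload) == get_index_defaults(new_payload)
```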
---------
Co-authored-by: ccurme <chester.curme@gmail.com>
_This should only be merged once neo4j is included under libs/partners._
# **Description:**
Neo4jVector from langchain-community is being moved to langchain-neo4j:
[see
link](https://github.com/langchain-ai/langchain-neo4j/blob/main/libs/neo4j/langchain_neo4j/vectorstores/neo4j_vector.py#L436).
To solve the issue below, this PR adds an attempt to import
`Neo4jVector` from the partner package `langchain-neo4j`, similar to how
the other partner packages are handled.
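A minimal sketch of the import-fallback pattern described above; the actual location and surrounding logic in the self-query retriever code may differ:

```python
# Try the partner package first; fall back gracefully if it is not installed.
try:
    from langchain_neo4j import Neo4jVector
except ImportError:
    Neo4jVector = None  # partner package not installed

def _is_neo4j_store(vectorstore) -> bool:
    # Recognize the partner-package class when selecting the query translator.
    return Neo4jVector is not None and isinstance(vectorstore, Neo4jVector)
```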
# **Issue:**
When initializing `SelfQueryRetriever`, the following error is raised:
```
ValueError: Self query retriever with Vector Store type <class 'langchain_neo4j.vectorstores.neo4j_vector.Neo4jVector'> not supported.
```
[See related
issue](https://github.com/langchain-ai/langchain/issues/19748).
# **Dependencies:**
- langchain-neo4j
v0.4 of the Ollama Python SDK is already installed via the lock file in CI, but
our current implementation is not compatible with it.
This also addresses an issue introduced in
https://github.com/langchain-ai/langchain/pull/28299. @RyanMagnuson
would you mind explaining the motivation for that change? From what I
can tell, the Ollama SDK [does not support
kwargs](6c44bb2729/ollama/_client.py (L286)).
Previously, unsupported kwargs were ignored, but they now raise a
`TypeError`.
Parts of LangChain's standard test suite expect `tool_choice` to be
supported, so here we catch it in `bind_tools` so that it is ignored and not
passed through to the client.
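A sketch of that approach, not the actual langchain-ollama code: accept `tool_choice` for compatibility with the standard tests but drop it before any kwargs reach the Ollama SDK, which would otherwise raise `TypeError` on unknown arguments.

```python
from typing import Any, Optional, Sequence

class ChatOllamaSketch:
    def bind_tools(
        self,
        tools: Sequence[Any],
        *,
        tool_choice: Optional[str] = None,  # accepted but intentionally ignored
        **kwargs: Any,
    ):
        # `tool_choice` is swallowed here rather than forwarded to the client;
        # only the tools and remaining kwargs are bound.
        return self.bind(tools=list(tools), **kwargs)

    def bind(self, **kwargs: Any):
        return (self, kwargs)  # placeholder for Runnable.bind in this sketch
```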
Adds deprecation notices for Neo4j components moving to the
`langchain_neo4j` partner package.
- Adds deprecation warnings to all Neo4j-related classes and functions
that have been migrated to the new `langchain_neo4j` partner package
- Updates documentation to reference the new `langchain_neo4j` package
instead of `langchain_community`
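An example of the kind of deprecation notice being added, using langchain_core's `deprecated` decorator; the exact `since`/`removal` versions and message used in the real notices may differ:

```python
from langchain_core._api import deprecated

# Illustrative values only; the actual versions are set by the PR.
@deprecated(
    since="0.3.8",
    removal="1.0",
    alternative_import="langchain_neo4j.Neo4jVector",
)
class Neo4jVector:
    """Community copy of Neo4jVector, superseded by the langchain_neo4j package."""
```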