- **Description:** Add an empty `metadata` field when metadata is not
present in the data
- **Issue:** This PR fixes a failure when data items do not contain the
`metadata` field. This happens when there is already data in the
container, or when the customer inserts data via the CosmosDB Python SDK.
- **Dependencies:** No dependencies required
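A hypothetical sketch of the described fallback; the item layout and helper name are illustrative, not the actual store internals:
```python
from langchain_core.documents import Document

def item_to_document(item: dict) -> Document:
    """Convert a raw CosmosDB item to a Document, tolerating missing metadata."""
    return Document(
        page_content=item["text"],
        metadata=item.get("metadata", {}),  # fall back to empty metadata
    )
```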
- [x] **PR title**:
"community: OCI Generative AI tool calling bug fix
- [x] **PR message**:
- **Description:** Bug fix for streaming chat responses with tool calls;
an update to PR #24693
- **Issue:** Chat response content was repeated when streaming
- **Dependencies:** NA
- **Twitter handle:** NA
- [x] **Add tests and docs**: NA
- [x] **Lint and test**: `make format`, `make lint` and `make test` were
run successfully
---------
Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
### About
- **Description:** In the GitLab utilities used for the GitLab tool,
there is no check to prevent pushing to the main branch, as is
already done for GitHub (for example here:
5a2cfb49e0/libs/community/langchain_community/utilities/github.py (L587)).
This PR adds the same check for GitLab.
- **Issue:** None
- **Dependencies:** None
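A simplified sketch of the guard, mirroring the GitHub utility's behavior; the attribute and method names are illustrative rather than the exact wrapper API:
```python
class GitLabAPIWrapperSketch:
    """Illustrative stand-in for the community GitLab utility."""

    gitlab_branch: str = "main"       # branch the tool commits to
    gitlab_base_branch: str = "main"  # protected base branch

    def create_file(self, file_query: str) -> str:
        # Refuse to commit directly to the protected base branch,
        # as the GitHub utility already does.
        if self.gitlab_branch == self.gitlab_base_branch:
            return (
                f"You're attempting to commit directly to the "
                f"{self.gitlab_base_branch} branch, which is protected. "
                "Please create a new branch and try again."
            )
        # ... proceed with creating the file via python-gitlab ...
        return "File created."
```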
**Description:** Add support for Writer chat models
**Issue:** N/A
**Dependencies:** Add `writer-sdk` to optional dependencies.
**Twitter handle:** Please tag `@samjulien` and `@Get_Writer`
**Tests and docs**
- [x] Unit test
- [x] Example notebook in `docs/docs/integrations` directory.
**Lint and test**
- [x] Run `make format`
- [x] Run `make lint`
- [x] Run `make test`
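A minimal usage sketch, assuming the integration is exposed as `ChatWriter` (requires the optional `writer-sdk` dependency); the model name is illustrative:
```python
from langchain_community.chat_models.writer import ChatWriter

chat = ChatWriter(model="palmyra-x-004")  # model name is illustrative
response = chat.invoke("Write a one-line greeting.")
print(response.content)
```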
---------
Co-authored-by: Johannes <tolstoy.work@gmail.com>
Co-authored-by: Erick Friis <erick@langchain.dev>
**Description:**
- Fix bug in the Replicate LLM class, where it was looking for parameter
names in a place where they no longer exist in Pydantic 2, resulting in
the "Field required" validation error described in the issue.
- Fix Replicate LLM integration tests to:
- Use active models on Replicate.
- Use the correct model parameter `max_new_tokens` as shown in the
[Replicate
docs](https://replicate.com/docs/guides/language-models/how-to-use#minimum-and-maximum-new-tokens).
- Use callbacks instead of deprecated callback_manager.
**Issue:** #26937
**Dependencies:** n/a
**Twitter handle:** n/a
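A minimal usage sketch consistent with these fixes; the model id is illustrative, and `max_new_tokens` is the parameter named in the Replicate docs linked above:
```python
from langchain_community.llms import Replicate

# Generation parameters go in model_kwargs; the model id is illustrative.
llm = Replicate(
    model="meta/meta-llama-3-8b-instruct",
    model_kwargs={"max_new_tokens": 250},
)
print(llm.invoke("What is the capital of France?"))
```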
---------
Signed-off-by: Fayvor Love <fayvor@gmail.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
**Description:** Added support for passing model parameters to the OpenAI
Assistant, enabled in the `OpenAIAssistantV2Runnable` class.
**Issue:** NA
**Dependencies:** None
**Twitter handle:** luizf0992
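A hypothetical sketch of what this enables; whether parameters such as `temperature` are forwarded this way is an assumption, not confirmed API:
```python
from langchain_community.agents.openai_assistant import OpenAIAssistantV2Runnable

assistant = OpenAIAssistantV2Runnable.create_assistant(
    name="math tutor",
    instructions="You solve math problems.",
    tools=[],
    model="gpt-4o-mini",
    temperature=0.2,  # assumed: now passed through to the Assistants API
)
```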
- **Description:** Add token_usage and model_name metadata to
ChatZhipuAI stream() and astream() response
- **Issue:** None
- **Dependencies:** None
- **Twitter handle:** None
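A minimal sketch of consuming the new metadata, assuming it surfaces on the streamed chunks' `response_metadata`; the model name is illustrative:
```python
from langchain_community.chat_models import ChatZhipuAI

chat = ChatZhipuAI(model="glm-4")
for chunk in chat.stream("Hello"):
    # With this change, token_usage and model_name should appear in the
    # metadata of the final streamed chunk.
    if chunk.response_metadata:
        print(chunk.response_metadata)
```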
Co-authored-by: jianfehuang <jianfehuang@tencent.com>
**Description:**
- Add the `lora_request` parameter to the VLLM class to support LoRA
model configurations. This enhancement allows users to specify LoRA
requests directly when using VLLM, enabling more flexible and efficient
model customization.
**Issue:**
- No existing issue for `lora_adapter` in VLLM. This PR addresses the
need for configuring LoRA requests within the VLLM framework.
- Reference : [Using LoRA Adapters in
vLLM](https://docs.vllm.ai/en/stable/models/lora.html#using-lora-adapters)
**Example Code :**
Before this change, the `lora_request` parameter was not applied
correctly:
```python
from langchain_community.llms import VLLM
from vllm.lora.request import LoRARequest

ADAPTER_PATH = "/path/of/lora_adapter"

llm = VLLM(
    model="Bllossom/llama-3.2-Korean-Bllossom-3B",
    max_new_tokens=512,
    top_k=2,
    top_p=0.90,
    temperature=0.1,
    vllm_kwargs={
        "gpu_memory_utilization": 0.5,
        "enable_lora": True,
        "max_model_len": 1024,
    },
)

print(llm.invoke(
    "...prompt_content...",
    lora_request=LoRARequest("lora_adapter", 1, ADAPTER_PATH),
))
```
**Before Change Output:** the response was generated without the
`lora_request` applied.
So I applied the `lora_request` handling to
`langchain_community.llms.vllm.VLLM`.
**Current Output:** the response is generated with the `lora_request`
applied.
**Dependencies:**
- None
**Lint and test:**
- All tests and lint checks have passed.
---------
Co-authored-by: Um Changyong <changyong.um@sfa.co.kr>
**Description:**
This PR fixes an issue where non-ASCII characters in Pydantic field
descriptions were being escaped to their Unicode representations when
using `JsonOutputParser`. The change allows non-ASCII characters to be
preserved in the output, which is especially important for multilingual
support and when working with non-English languages.
**Issue:** Fixes #27256
**Example Code:**
```python
from pydantic import BaseModel, Field
from langchain_core.output_parsers import JsonOutputParser


class Article(BaseModel):
    title: str = Field(description="科学文章的标题")


output_data_structure = Article
parser = JsonOutputParser(pydantic_object=output_data_structure)
print(parser.get_format_instructions())
```
**Previous Output**:
```... "title": {"description": "\\u79d1\\u5b66\\u6587\\u7ae0\\u7684\\u6807\\u9898", "title": "Title", "type": "string"}} ...```
**Current Output**:
```... "title": {"description": "科学文章的标题", "title": "Title", "type":
"string"}} ...```
**Changes made**:
- Modified `json.dumps()` call in
`langchain_core/output_parsers/json.py` to use `ensure_ascii=False`
- Added a unit test to verify Unicode handling
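For reference, the underlying stdlib behavior that the `ensure_ascii=False` change relies on:
```python
import json

# Default: non-ASCII characters are escaped to \uXXXX sequences.
print(json.dumps({"description": "科学文章的标题"}))
# {"description": "\u79d1\u5b66\u6587\u7ae0\u7684\u6807\u9898"}

# With ensure_ascii=False, the characters are preserved as-is.
print(json.dumps({"description": "科学文章的标题"}, ensure_ascii=False))
# {"description": "科学文章的标题"}
```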
Co-authored-by: Harsimran-19 <harsimran1869@gmail.com>
**Description:**
When annotating a function with the `@tool` decorator, the resulting
symbol should have type `BaseTool`. The previous type annotations did not
convey that to type checkers. This patch creates four overloads for the
`tool` function, one for each of the four use cases:
1. @tool decorator with no arguments
2. @tool decorator with only keyword arguments
3. @tool decorator with a name argument (and possibly keyword arguments)
4. Invoking tool as function with a name and runnable positional
arguments
The main function is updated to match the overloads. The changes are
100% backwards compatible (all existing calls should continue to work,
just with better type annotations).
**Twitter handle:** @nvachhar
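A simplified sketch of the four overload shapes; keyword parameters such as `return_direct` and `infer_schema` illustrate the real signature but are not exhaustive:
```python
from typing import Any, Callable, Optional, Union, overload

from langchain_core.runnables import Runnable
from langchain_core.tools import BaseTool


# Case 1: bare @tool on a function -> BaseTool.
@overload
def tool(name_or_callable: Callable, /) -> BaseTool: ...

# Case 2: @tool(...) with keyword arguments only -> decorator.
@overload
def tool(
    *, return_direct: bool = False, infer_schema: bool = True
) -> Callable[[Callable], BaseTool]: ...

# Case 3: @tool("name", ...) with a name argument -> decorator.
@overload
def tool(
    name_or_callable: str, /, *, return_direct: bool = False, infer_schema: bool = True
) -> Callable[[Callable], BaseTool]: ...

# Case 4: tool("name", runnable) invoked directly -> BaseTool.
@overload
def tool(name_or_callable: str, runnable: Runnable, /) -> BaseTool: ...


def tool(
    name_or_callable: Union[str, Callable, None] = None,
    runnable: Optional[Runnable] = None,
    /,
    **kwargs: Any,
):
    ...  # dispatch on argument types, as in the real implementation
```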
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
**Description:** Fixes `None` addition issues when an empty value is
passed in.
**Description:** We improve the performance of the `InMemoryVectorStore`.
**Issue:** Originally, similarity was computed document by document:
```python
for doc in self.store.values():
    vector = doc["vector"]
    similarity = float(cosine_similarity([embedding], [vector]).item(0))
```
This is inefficient and does not make use of numpy vectorization.
This PR computes the similarity in one vectorized go:
```python
docs = list(self.store.values())
similarity = cosine_similarity([embedding], [doc["vector"] for doc in docs])
```
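For intuition, a self-contained sketch (with a local stand-in for the `cosine_similarity` helper) showing the two approaches agree:
```python
import numpy as np


def cosine_similarity(X, Y):
    """Row-wise cosine similarity matrix, in the spirit of the store's helper."""
    X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
    norms = np.linalg.norm(X, axis=1)[:, None] * np.linalg.norm(Y, axis=1)[None, :]
    return (X @ Y.T) / norms


rng = np.random.default_rng(0)
embedding = rng.random(128)
vectors = [rng.random(128) for _ in range(1_000)]

# Old approach: one 1x1 similarity matrix per document.
loop_sims = [float(cosine_similarity([embedding], [v])[0, 0]) for v in vectors]

# New approach: a single 1xN similarity matrix in one vectorized call.
vec_sims = cosine_similarity([embedding], vectors)[0]

assert np.allclose(loop_sims, vec_sims)
```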
**Dependencies:** None
**Twitter handle:** @b12_consulting, @Vincent_Min
---------
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
**Description:** Returns the document id along with the Vector Search
results
**Issue:** Fixes https://github.com/langchain-ai/langchain/issues/26860
for CouchbaseVectorStore
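A minimal sketch of what this enables, assuming `vector_store` is an already configured `CouchbaseVectorStore`:
```python
def show_result_ids(vector_store, query: str) -> None:
    # With this change, each returned Document carries its Couchbase document id.
    for doc in vector_store.similarity_search(query, k=3):
        print(doc.id, doc.page_content[:50])
```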
- [x] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified.
Co-authored-by: Erick Friis <erick@langchain.dev>
Reopened as a personal repo outside the organization.
## Description
- Naver HyperCLOVA X community package
- Add chat model & embeddings
- Add unit test & integration test
- Add chat model & embeddings docs
- In this PR, I changed the partner
package (https://github.com/langchain-ai/langchain/pull/24252) to a
community package
- Could these
embeddings (https://github.com/langchain-ai/langchain/pull/21890) be
deprecated? We are trying to replace them with the embedding
model (**ClovaXEmbeddings**) in this PR.
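A minimal usage sketch, assuming the chat model is exposed as `ChatClovaX` alongside `ClovaXEmbeddings` in `langchain_community`; model names are illustrative:
```python
from langchain_community.chat_models import ChatClovaX
from langchain_community.embeddings import ClovaXEmbeddings

chat = ChatClovaX(model="HCX-003")  # model name is illustrative
print(chat.invoke("Hello!").content)

embeddings = ClovaXEmbeddings(model="clir-emb-dolphin")  # illustrative
vector = embeddings.embed_query("Naver HyperCLOVA X")
print(len(vector))
```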
Twitter handle: None. (If needed, contact joonha.jeon@navercorp.com.)
---
you can check our previous discussion below:
> one question on namespaces - would it make sense to have these in
.clova namespaces instead of .naver?
I would like to keep it as is, unless it is essential to unify the
package name.
(ClovaX is a branding for the model, and I plan to add other models and
components. They need to be managed as separate classes.)
> also, could you clarify the difference between ClovaEmbeddings and
ClovaXEmbeddings?
There are three embedding models currently in service, and all are
supported in this PR. In addition, all of the CLOVA Studio functionality
around serving actual models, such as distinguishing between test apps
and service apps, is supported. The existing PR does not support this
because it is hard-coded.
---------
Co-authored-by: Erick Friis <erick@langchain.dev>
Co-authored-by: Vadym Barda <vadym@langchain.dev>