# community: Fix AttributeError in RankLLMRerank (`list` object has no
attribute `candidates`)
## **Description**
This PR fixes an issue in `RankLLMRerank` where reranking fails with the
following error:
```
AttributeError: 'list' object has no attribute 'candidates'
```
The issue arises because `rerank_batch()` returns a `List[Result]`
instead of an object containing `.candidates`.
### **Changes Introduced**
- Adjusted `compress_documents()` to support both:
  - Old API format: `rerank_results.candidates`
  - New API format: `rerank_results` as a plain list
- Also fixed incorrect `.txt` location parsing while in the area.
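A minimal sketch of the compatibility handling (variable names mirror this description, not the exact library source):
```python
# Hedged sketch: rerank_batch() may return an object exposing .candidates
# (old API) or a plain List[Result] (new API); normalize to one shape.
rerank_results = reranker.rerank_batch(request)  # `reranker`/`request` assumed
if hasattr(rerank_results, "candidates"):
    candidates = rerank_results.candidates  # old API format
else:
    candidates = rerank_results  # new API format: plain list
```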
---
## **Issue**
Fixes **AttributeError** in `RankLLMRerank` when using
`compression_retriever.invoke()`. The issue is observed when
`rerank_batch()` returns a list instead of an object with `.candidates`.
**Relevant log:**
```
AttributeError: 'list' object has no attribute 'candidates'
```
## **Dependencies**
- No additional dependencies introduced.
---
## **Checklist**
- [x] **Backward compatible** with previous API versions
- [x] **Tested** locally with different RankLLM models
- [x] **No new dependencies introduced**
- [x] **Linted** with `make format && make lint`
- [x] **Ready for review**
---
## **Testing**
- Ran `compression_retriever.invoke(query)` and confirmed documents are reranked without the `AttributeError` (minimal repro sketch below)
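A minimal sketch of the call path that previously raised (assumes `rank_llm` is installed and a base `retriever` is already built; constructor arguments are illustrative):
```python
from langchain.retrievers import ContextualCompressionRetriever
from langchain_community.document_compressors.rankllm_rerank import RankLLMRerank

compressor = RankLLMRerank(top_n=3, model="zephyr")  # illustrative arguments
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever  # `retriever` assumed
)
docs = compression_retriever.invoke(query)  # previously raised AttributeError
```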
## **Reviewers**
If no review within a few days, please **@mention** one of:
- @baskaryan
- @efriis
- @eyurtsev
- @ccurme
- @vbarda
- @hwchase17
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
This PR adds a new cognee integration: knowledge-graph-based retrieval that enables developers to ingest documents into cognee's knowledge graph, process them, and then retrieve context via `CogneeRetriever`.
It includes:
- langchain_cognee package with a CogneeRetriever class
- a test for the integration, demonstrating how to create, process, and
retrieve with cognee
- an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
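A hedged usage sketch of the flow described above; the class name follows this PR, but the constructor and helper method names are assumptions that may differ from the released package:
```python
from langchain_cognee import CogneeRetriever  # package added by this PR

# Assumed parameters; consult the package docs for the real signature.
retriever = CogneeRetriever(llm_api_key="sk-...", dataset_name="my_dataset")
retriever.add_documents([...])  # 1. ingest documents (assumed helper)
retriever.process_data()        # 2. build the knowledge graph (assumed helper)
docs = retriever.invoke("What do the documents say?")  # 3. retrieve context
```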
Followed the additional guidelines:
- Optional dependencies are imported within functions.
- No dependencies were added to pyproject.toml files (even optional ones).
- The PR touches only one package.
- Changes are backwards compatible.
- Nothing added to community is re-imported in langchain.
Thank you for the review!
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
**Description:** Two small changes have been proposed here:
(1)
Previous code assumed that every issue has a priority field; if an issue lacked this field, the code raised a KeyError. The code now checks whether priority exists before accessing it and assigns None when it is missing, preventing runtime errors when processing issues without a priority.
(2)
Similarly, if the "style" field is missing the previous code threw a KeyError; `.get("style", None)` now safely retrieves the value if present.
**Issue:** #29875
**Dependencies:** N/A
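A minimal sketch of the pattern applied in both fixes (the field access path is illustrative, not the exact Jira loader source):
```python
fields = issue.get("fields", {})  # `issue` is an assumed Jira payload dict
# (1) tolerate a missing "priority" field instead of raising KeyError
priority = fields.get("priority")  # None when absent
# (2) same treatment for the "style" field
style = fields.get("style", None)
```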
**PR title**: "community: vectorstores/kinetica"
**Bugfix for empty query results handling**:
- **Description:** check the number of records returned by a query before processing further
- **Issue:** previously resulted in an `AttributeError`, which is now fixed
@efriis
This PR adds documentation for the Azure AI package in LangChain to the main mono-repo.
No linked issue; no dependency updates.
Utilises existing tests and updates the docs.
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
**Description:** Update the docstring for the `reasoning_effort` argument to specify that it applies to reasoning models only (e.g., OpenAI o1 and o3-mini), clarifying which models support it (illustration below).
**Issue:** None
**Dependencies:** None
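For illustration, a sketch of the argument in use with a reasoning model (assumes `langchain-openai` is installed and an OpenAI API key is configured):
```python
from langchain_openai import ChatOpenAI

# Supported by reasoning models such as o1 and o3-mini; other models
# do not accept the parameter.
llm = ChatOpenAI(model="o3-mini", reasoning_effort="low")
llm.invoke("How many r's are in 'strawberry'?")
```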
Adds an `attachment_filter_func` parameter to the `ConfluenceLoader` class, which can be used to determine which attachments are indexed. This is useful if you want to exclude files based on their media type or other metadata; a sketch follows below.
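A hedged sketch of how the new parameter might be used; only `attachment_filter_func` itself comes from this PR, and the attachment payload shape is an assumption:
```python
from langchain_community.document_loaders import ConfluenceLoader

def skip_images(attachment) -> bool:
    # Assumed payload shape: index everything except image attachments.
    media_type = attachment.get("metadata", {}).get("mediaType", "")
    return not media_type.startswith("image/")

loader = ConfluenceLoader(
    url="https://example.atlassian.net/wiki",
    space_key="DOCS",
    include_attachments=True,
    attachment_filter_func=skip_images,  # parameter added by this PR
)
docs = loader.load()
```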
The build in #29867 is currently broken because `langchain-cli` didn't
add download stats to the provider file.
This change gracefully handles sorting packages with missing download
counts. I initially updated the build to fetch download counts on every
run, but pypistats [requests](https://pypistats.org/api/) that users not
fetch stats like this via CI.
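A minimal sketch of the tolerant sort, assuming package entries are mappings that may lack a download count:
```python
# Packages without a recorded count sort last instead of raising KeyError.
packages = [
    {"name": "langchain-foo", "downloads": 1200},
    {"name": "langchain-bar"},  # no download stats recorded yet
]
packages.sort(key=lambda p: p.get("downloads", 0), reverse=True)
```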
Adds structured-output support for xAI: https://docs.x.ai/docs/guides/structured-outputs
The interface appears identical to OpenAI's.
```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel


class Joke(BaseModel):
    setup: str
    punchline: str


llm = init_chat_model("xai:grok-2").with_structured_output(
    Joke, method="json_schema"
)
llm.invoke("Tell me a joke about cats.")
```
# Description
Two changes:
1. Removes `getpass` from the code example, since it reads from stdin and causes a freeze.
2. Updates the example to the latest Gemini model.
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
- **Description:** add deprecation warning when using weaviate from
langchain_community
- **Issue:** NA
- **Dependencies:** NA
- **Twitter handle:** NA
---------
Signed-off-by: hsm207 <hsm207@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Adds a `model` parameter to `OpenAIWhisperParser`, defaulting to `whisper-1` (the previous value).
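A hedged usage sketch; the import path is the community parsers module, and the argument shown is the one this PR adds:
```python
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser

# `model` is now configurable; it defaults to "whisper-1", the previous value.
parser = OpenAIWhisperParser(model="whisper-1")
```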
Please help me update the docs and other related components of this
repo.
**Description:**
This PR adds a Jupyter notebook that explains the features,
installation, and usage of the
[`langchain-salesforce`](https://github.com/colesmcintosh/langchain-salesforce)
package. The notebook includes:
- Setup instructions for configuring Salesforce credentials
- Example code demonstrating common operations such as querying,
describing objects, creating, updating, and deleting records
**Issue:**
N/A
**Dependencies:**
No new dependencies are required.
**Tests and Docs:**
- Added an example notebook demonstrating the usage of the
`langchain-salesforce` package, located in `docs/docs/integrations`.
**Lint and Test:**
- Ran `make format`, `make lint`, and `make test` successfully.
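For orientation, a hypothetical sketch of the kind of usage the notebook demonstrates (the class name, constructor parameters, and input format are assumptions about `langchain-salesforce`, not its confirmed API):
```python
from langchain_salesforce import SalesforceTool  # assumed entry point

tool = SalesforceTool(
    username="user@example.com",  # assumed credential parameters
    password="...",
    security_token="...",
)
# Assumed dict-style input covering the operations the notebook lists.
result = tool.run(
    {"operation": "query", "query": "SELECT Id, Name FROM Account LIMIT 5"}
)
```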
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
- [X] **PR title**: "package: description"
- Where "package" is whichever of langchain, community, core, etc. is
being modified. Use "docs: ..." for purely docs changes, "infra: ..."
for CI changes.
- Example: "community: add foobar LLM"
- [x] **PR message**: This PR adds `top_k` as a parameter to the Needle Retriever, defaulting to 10 (see the usage sketch below).
- [X] **Add tests and docs**: If you're adding a new integration, please
include
1. a test for the integration, preferably unit tests that do not rely on
network access,
2. an example notebook showing its use. It lives in
`docs/docs/integrations` directory.
- [X] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
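A hedged usage sketch of the new parameter (the import path and constructor arguments besides `top_k` are assumptions):
```python
from langchain_community.retrievers.needle import NeedleRetriever  # assumed path

retriever = NeedleRetriever(
    needle_api_key="...",  # assumed parameter
    collection_id="...",   # assumed parameter
    top_k=5,               # new in this PR; defaults to 10
)
docs = retriever.invoke("example query")
```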
Rename IBM product name to `IBM watsonx`
- [x] **Lint and test**: Run `make format`, `make lint` and `make test`
from the root of the package(s) you've modified. See contribution
guidelines for more: https://python.langchain.com/docs/contributing/
- **Description**: Added a new operator to the operator map with key `$in` and value `IN`, so that filters can be defined using lists as values. This was already contemplated, but because the `IN` operator was missing from the map, such filters could not be used.
- **Issue**: Fixes #29804.
- **Dependencies**: None.
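A hedged sketch of a filter the new mapping enables (the surrounding vector store call is illustrative; only the `$in` -> `IN` mapping comes from this PR):
```python
# List-valued filter, translated to an SQL-style IN clause by the new mapping.
results = vector_store.similarity_search(
    "space exploration",
    filter={"genre": {"$in": ["science", "history"]}},
)
```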
This PR adds documentation for the `langchain-discord-shikenso`
integration, including an example notebook at
`docs/docs/integrations/tools/discord.ipynb` and updates to
`libs/packages.yml` to track the new package.
**Issue:**
N/A
**Dependencies:**
None
**Twitter handle:**
N/A
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
**fix: Correct getpass usage in Google Generative AI Embedding docs
(#29809)**
- **Description:** Corrected the `getpass` usage in the Google
Generative AI Embedding documentation by replacing `getpass()` with
`getpass.getpass()` to fix the `TypeError`.
- **Issue:** #29809
- **Dependencies:** None
**Additional Notes:**
The change ensures compatibility with Google Colab and follows Python's
`getpass` module usage standards.
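The corrected pattern, for reference (the environment variable name follows the Google Generative AI embeddings docs):
```python
import getpass
import os

if "GOOGLE_API_KEY" not in os.environ:
    # getpass.getpass() prompts for input; calling the module object raised TypeError.
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Google API key: ")
```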
docs(rag.ipynb): Add the `full code` snippets; having the complete code in one place is necessary and useful for beginners as a demonstration.
Preview the change:
https://langchain-git-fork-googtech-patch-3-langchain.vercel.app/docs/tutorials/rag/
Two `full code` snippets are added below:
<details>
<summary>Full Code:</summary>
```python
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chat_models import init_chat_model
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from google.colab import userdata
from langchain_core.prompts import PromptTemplate
from langchain_core.documents import Document
from typing_extensions import List, TypedDict
from langgraph.graph import START, StateGraph

#################################################
# 1.Initialize the ChatModel and EmbeddingModel #
#################################################
llm = init_chat_model(
    model="gpt-4o-mini",
    model_provider="openai",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)

#######################
# 2.Loading documents #
#######################
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        # Only keep post title, headers, and content from the full HTML.
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()

#########################
# 3.Splitting documents #
#########################
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # chunk size (characters)
    chunk_overlap=200,  # chunk overlap (characters)
    add_start_index=True,  # track index in original document
)
all_splits = text_splitter.split_documents(docs)

###########################################################
# 4.Embedding documents and storing them in a vectorstore #
###########################################################
vector_store = InMemoryVectorStore(embeddings)
_ = vector_store.add_documents(documents=all_splits)

##########################################################
# 5.Customizing the prompt or loading it from Prompt Hub #
##########################################################
# prompt = hub.pull("rlm/rag-prompt")  # load the prompt from the Prompt Hub
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.
{context}
Question: {question}
Helpful Answer:"""
prompt = PromptTemplate.from_template(template)

##################################################################################################
# 6.Using LangGraph to tie together the retrieval and generation steps into a single application #
##################################################################################################
# 6.1.Define the state of the application, which controls the application data
class State(TypedDict):
    question: str
    context: List[Document]
    answer: str

# 6.2.1.Define a node of the application, which represents an application step
def retrieve(state: State):
    retrieved_docs = vector_store.similarity_search(state["question"])
    return {"context": retrieved_docs}

# 6.2.2.Define a node of the application, which represents an application step
def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}

# 6.3.Define the "control flow" of the application, which orders the steps
graph_builder = StateGraph(State).add_sequence([retrieve, generate])
graph_builder.add_edge(START, "retrieve")
graph = graph_builder.compile()
```
</details>
<details>
<summary>Full Code:</summary>
```python
import bs4
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.chat_models import init_chat_model
from langchain_openai import OpenAIEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore
from google.colab import userdata
from langchain_core.prompts import PromptTemplate
from langchain_core.documents import Document
from typing_extensions import List, TypedDict
from langgraph.graph import START, StateGraph
from typing import Literal
from typing_extensions import Annotated

#################################################
# 1.Initialize the ChatModel and EmbeddingModel #
#################################################
llm = init_chat_model(
    model="gpt-4o-mini",
    model_provider="openai",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)
embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    openai_api_key=userdata.get('OPENAI_API_KEY'),
    base_url=userdata.get('BASE_URL'),
)

#######################
# 2.Loading documents #
#######################
loader = WebBaseLoader(
    web_paths=("https://lilianweng.github.io/posts/2023-06-23-agent/",),
    bs_kwargs=dict(
        # Only keep post title, headers, and content from the full HTML.
        parse_only=bs4.SoupStrainer(
            class_=("post-content", "post-title", "post-header")
        )
    ),
)
docs = loader.load()

#########################
# 3.Splitting documents #
#########################
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # chunk size (characters)
    chunk_overlap=200,  # chunk overlap (characters)
    add_start_index=True,  # track index in original document
)
all_splits = text_splitter.split_documents(docs)

# Search analysis: Add some metadata to the documents in our vector store,
# so that we can filter on section later.
total_documents = len(all_splits)
third = total_documents // 3
for i, document in enumerate(all_splits):
    if i < third:
        document.metadata["section"] = "beginning"
    elif i < 2 * third:
        document.metadata["section"] = "middle"
    else:
        document.metadata["section"] = "end"

# Search analysis: Define the schema for our search query
class Search(TypedDict):
    query: Annotated[str, ..., "Search query to run."]
    section: Annotated[
        Literal["beginning", "middle", "end"], ..., "Section to query."
    ]

###########################################################
# 4.Embedding documents and storing them in a vectorstore #
###########################################################
vector_store = InMemoryVectorStore(embeddings)
_ = vector_store.add_documents(documents=all_splits)

##########################################################
# 5.Customizing the prompt or loading it from Prompt Hub #
##########################################################
# prompt = hub.pull("rlm/rag-prompt")  # load the prompt from the Prompt Hub
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.
{context}
Question: {question}
Helpful Answer:"""
prompt = PromptTemplate.from_template(template)

###################################################################
# 6.Using LangGraph to tie together the analyze_query, retrieval  #
# and generation steps into a single application                  #
###################################################################
# 6.1.Define the state of the application, which controls the application data
class State(TypedDict):
    question: str
    query: Search
    context: List[Document]
    answer: str

# Search analysis: Define a node of the application,
# which is used to generate a structured query from the user's raw input
def analyze_query(state: State):
    structured_llm = llm.with_structured_output(Search)
    query = structured_llm.invoke(state["question"])
    return {"query": query}

# 6.2.1.Define a node of the application, which represents an application step
def retrieve(state: State):
    query = state["query"]
    retrieved_docs = vector_store.similarity_search(
        query["query"],
        filter=lambda doc: doc.metadata.get("section") == query["section"],
    )
    return {"context": retrieved_docs}

# 6.2.2.Define a node of the application, which represents an application step
def generate(state: State):
    docs_content = "\n\n".join(doc.page_content for doc in state["context"])
    messages = prompt.invoke({"question": state["question"], "context": docs_content})
    response = llm.invoke(messages)
    return {"answer": response.content}

# 6.3.Define the "control flow" of the application, which orders the steps
graph_builder = StateGraph(State).add_sequence([analyze_query, retrieve, generate])
graph_builder.add_edge(START, "analyze_query")
graph = graph_builder.compile()
```
</details>
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>
- [ ] **PR title**: langchain_community: add image support to DuckDuckGoSearchAPIWrapper
- **Description:** This PR enhances the `DuckDuckGoSearchAPIWrapper` within the langchain_community package by introducing support for image searches. The enhancement includes:
  - Adding a new method `_ddgs_images` to handle image search queries.
  - Updating the `run` and `results` methods to process and return image search results appropriately.
  - Modifying the `source` parameter to accept `"images"` as a valid option, alongside `"text"` and `"news"`.
- **Dependencies:** No additional dependencies are required for this
change.
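A hedged usage sketch of the new option (`source="images"` follows this description; the wrapper import path and `results` call are existing API):
```python
from langchain_community.utilities import DuckDuckGoSearchAPIWrapper

wrapper = DuckDuckGoSearchAPIWrapper(source="images")
# Returns image search results; the exact result keys (e.g. thumbnail URL)
# are an assumption.
results = wrapper.results("golden gate bridge", max_results=5)
```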