The security property we care about is whether malicious tools are triggered,
not whether malicious-looking strings appear in output. Data may legitimately
contain URLs/emails that look suspicious but aren't actionable injections.
- Replace string-based assertions with check_triggers_tools(), which verifies
that the sanitized output does not trigger the target tools when fed back to
the model (sketched below, after the E2E run command)
- Remove assert_*_blocked functions that checked for domain strings
- Simplify INJECTION_TEST_CASES to (payload, tools, tool_name, target_tools)
- Add DEFAULT_INJECTION_MARKERS list covering major LLM providers:
- Defense prompt delimiters
- Llama/Mistral: [INST], <<SYS>>
- OpenAI/Qwen (ChatML): <|im_start|>, <|im_end|>
- Anthropic Claude: Human:/Assistant: (with newline prefix)
- DeepSeek: fullwidth Unicode markers
- Google Gemma: <start_of_turn>, <end_of_turn>
- Vicuna: USER:/ASSISTANT:
- Generic XML role markers
- Add sanitize_markers() function to strip injection markers from content
(a sketch follows this list)
- Add configurable sanitize_markers param to CheckToolStrategy and ParseDataStrategy
- Add filter mode (on_injection='filter') that uses the model's text response
when an injection is detected (no extra LLM call needed)
- Add _get_tool_schema() for tool return type descriptions in ParseDataStrategy
- Export DEFAULT_INJECTION_MARKERS and sanitize_markers from __init__.py
- Rename test_prompt_injection_combined.py -> test_prompt_injection_baseline_vs_protected.py
- Delete redundant test_prompt_injection_defense_extended.py
- Skip E2E tests by default (require RUN_BENCHMARK_TESTS=1)
- Reduce Ollama models to frontier only (granite4:tiny-h)
- Refactor to reduce code duplication in test files
- Update docstrings with cross-references
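For illustration, the marker stripping can be sketched as follows. The marker
subset and the function signature here are assumptions for the sketch, not the
actual contents of DEFAULT_INJECTION_MARKERS:

```py
# Sketch only: a subset of the marker families listed above; the real
# DEFAULT_INJECTION_MARKERS list is longer and provider-specific.
DEFAULT_INJECTION_MARKERS = [
    "[INST]", "<<SYS>>",                  # Llama/Mistral
    "<|im_start|>", "<|im_end|>",         # ChatML (OpenAI/Qwen)
    "\nHuman:", "\nAssistant:",           # Claude-style turn markers
    "<start_of_turn>", "<end_of_turn>",   # Google Gemma
    "USER:", "ASSISTANT:",                # Vicuna
]


def sanitize_markers(text: str, markers: list[str] | None = None) -> str:
    """Strip known chat-template markers from untrusted content."""
    for marker in markers or DEFAULT_INJECTION_MARKERS:
        text = text.replace(marker, "")
    return text
```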
Test organization:
- test_prompt_injection_defense.py: Unit tests with mocks (CI, fast)
- test_prompt_injection_baseline_vs_protected.py: E2E baseline vs protected
- test_prompt_injection_token_benchmark.py: Token usage benchmarks
To run E2E tests with real models:
RUN_BENCHMARK_TESTS=1 uv run pytest tests/unit_tests/agents/middleware/implementations/test_prompt_injection_* -svv
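The check_triggers_tools() assertion that replaces the string checks can be
sketched roughly as below; the helper name matches the bullet above, but its
signature and internals here are assumptions:

```py
def check_triggers_tools(model, sanitized_output: str, target_tools: list) -> bool:
    """Return True if the sanitized output, fed back to the model,
    still triggers one of the target tools (i.e. sanitization failed)."""
    response = model.bind_tools(target_tools).invoke(sanitized_output)
    target_names = {t.name for t in target_tools}
    return any(call["name"] in target_names for call in response.tool_calls)
```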
- test_prompt_injection_combined.py: a single test shows both baseline
vulnerability and protected status for each model/payload
- test_prompt_injection_token_benchmark.py: measures token usage across
no_defense, check_only, parse_only, and combined strategies
- Create conftest.py with shared tools, payloads, fixtures, and helpers
- Consolidate extended tests into simple parametrized test classes
- Add multi_language and json_injection to test cases (5 total attacks)
- Baseline and protected tests now use same test cases for comparison
- 65 tests each: 5 attacks × 13 models (3 OpenAI, 3 Anthropic, 7 Ollama)
- Remove unused SystemMessage import
- Fix reference to non-existent e2e test file
- Add anthropic_model to all parameterized security tests
- Now tests both OpenAI (gpt-5.2) and Anthropic (claude-sonnet-4-5)
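As a rough sketch of how the shared parametrization produces the 5 × 13
matrix (payload names other than the two given above are placeholders, not
the actual conftest contents):

```py
import pytest

# Two cases are named above; the other three payload names are placeholders.
ATTACK_NAMES = ["multi_language", "json_injection", "payload_a", "payload_b", "payload_c"]
MODEL_IDS = ["openai:gpt-5.2", "anthropic:claude-sonnet-4-5"]  # plus 11 more in the real suite


@pytest.mark.parametrize("model_id", MODEL_IDS)
@pytest.mark.parametrize("attack", ATTACK_NAMES)
def test_baseline_vs_protected(model_id: str, attack: str) -> None:
    """Each case asserts baseline vulnerability and protected resistance."""
    ...
```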
Implements defense against indirect prompt injection attacks from external/untrusted
data sources (tool results, web content, etc.) based on the paper:
'Defense Against Indirect Prompt Injection via Tool Result Parsing'
https://arxiv.org/html/2601.04795v1
Features:
- Pluggable DefenseStrategy protocol for extensibility
- Built-in strategies: CheckToolStrategy, ParseDataStrategy, CombinedStrategy
- Factory methods for recommended configurations from paper
- Focuses on tool results as primary attack vector
- Extensible to other external data sources in the future
Key components:
- PromptInjectionDefenseMiddleware: Main middleware class
- DefenseStrategy: Protocol for custom defense implementations
- CheckToolStrategy: Detects and removes tool-triggering content
- ParseDataStrategy: Extracts only required data with format constraints
- CombinedStrategy: Chains multiple strategies together
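For reference, a custom strategy only has to satisfy the DefenseStrategy
protocol. A minimal sketch, assuming a single sanitize-one-tool-message hook
(the actual protocol's method name and signature may differ):

```py
import re
from typing import Protocol

from langchain_core.messages import ToolMessage


class DefenseStrategy(Protocol):
    """Protocol sketch: one hook that sanitizes a tool result."""

    def process(self, message: ToolMessage) -> ToolMessage: ...


class RedactURLsStrategy:
    """Hypothetical custom strategy: redact URLs from tool results."""

    def process(self, message: ToolMessage) -> ToolMessage:
        cleaned = re.sub(r"https?://\S+", "[redacted-url]", str(message.content))
        return message.model_copy(update={"content": cleaned})
```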
Usage:

    agent = create_agent(
        'openai:gpt-4o',
        middleware=[
            PromptInjectionDefenseMiddleware.check_then_parse('openai:gpt-4o'),
        ],
    )
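The strategies can presumably also be configured directly; the sketch below
wires in the on_injection='filter' mode and the sanitize_markers option
described in the change list earlier. The import path and constructor
signatures are assumptions:

```py
# Import path is hypothetical; adjust to wherever the middleware lives.
from langchain.agents.middleware import (
    CheckToolStrategy,
    ParseDataStrategy,
    PromptInjectionDefenseMiddleware,
)

check = CheckToolStrategy(
    model='openai:gpt-4o',
    on_injection='filter',   # reuse the model's text response; no extra LLM call
    sanitize_markers=True,   # strip DEFAULT_INJECTION_MARKERS from tool results
)
parse = ParseDataStrategy(model='openai:gpt-4o', sanitize_markers=True)
middleware = PromptInjectionDefenseMiddleware(strategies=[check, parse])
```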
Implementations using `yield` return generators, which are of type
`Iterator`.
This is technically a breaking change for implementers; however, known
existing implementations (in `langchain-community`) use `yield`, so they
already return `Iterator`s. For callers, it is not breaking.
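For illustration, a `yield`-based implementation already satisfies the
`Iterator` return type. A generic sketch (the method name `lazy_parse` is
illustrative, not necessarily the interface this change touches):

```py
from collections.abc import Iterator


class ExampleParser:
    def lazy_parse(self, text: str) -> Iterator[str]:
        # A generator function: calling it returns a generator,
        # and every generator is an Iterator.
        for line in text.splitlines():
            yield line
```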
Closes #25718
It looks scary, but I promise it is not: this improves documentation
consistency across core. It primarily updates docstrings and comments for
better formatting, readability, and accuracy, and adds minor clarifications
and formatting improvements to user-facing documentation.
It looks scary, but I promise it is not: this improves documentation
consistency across langchain. It primarily updates docstrings and comments
for better formatting, readability, and accuracy, and adds minor
clarifications and formatting improvements to user-facing documentation.
Update broken contributing links in AGENTS.md and CLAUDE.md
Description:
Update internal references from .github/CONTRIBUTING.md to [Contributing
Guide] to fix navigation issues for local contributors.
Proposed Changes:
AGENTS.md: Change [.github/CONTRIBUTING.md] link text to [Contributing
Guide] since the file path is not present in the local root.
CLAUDE.md: Change [.github/CONTRIBUTING.md] link text to [Contributing
Guide] for consistency.
Reasoning:
Users following these docs locally often find the specific file path
.github/CONTRIBUTING.md confusing or "broken" in markdown previews that
don't resolve the .github hidden directory correctly. Using the
descriptive label "Contributing Guide" is more user-friendly and
standard across the repo.
Checklist:
[x] Run make format, make lint and make test (N/A for these doc-only
changes).
[x] PR title follows the format: TYPE(SCOPE): DESCRIPTION.
[x] I have read the [Contributing
Guide](https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md).
Depends on https://github.com/langchain-ai/langgraph/pull/6711.
1. Relax the constraint in `factory.py` to allow tools that are not
pre-registered in the `ModelRequest.tools` list.
2. Always add a tool node if `wrap_tool_call` or `awrap_tool_call` is
implemented.
3. Add tests confirming you can register new tools at runtime in
`wrap_model_call` and execute them via `wrap_tool_call`.

This allows for the following pattern:
```py
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool

from langchain.agents.factory import create_agent
from langchain.agents.middleware.types import (
    AgentMiddleware,
    ModelRequest,
    ToolCallRequest,
)


@tool
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"The weather in {location} is sunny and 72°F."


@tool
def calculate_tip(bill_amount: float, tip_percentage: float = 20.0) -> str:
    """Calculate the tip amount for a bill."""
    tip = bill_amount * (tip_percentage / 100)
    return f"Tip: ${tip:.2f}, Total: ${bill_amount + tip:.2f}"


class DynamicToolMiddleware(AgentMiddleware):
    """Middleware that adds and handles a dynamic tool."""

    def wrap_model_call(self, request: ModelRequest, handler):
        # Register calculate_tip at runtime; it is not in the agent's
        # static tools list.
        updated = request.override(tools=[*request.tools, calculate_tip])
        return handler(updated)

    def wrap_tool_call(self, request: ToolCallRequest, handler):
        # Route calls to the dynamically registered tool.
        if request.tool_call["name"] == "calculate_tip":
            return handler(request.override(tool=calculate_tip))
        return handler(request)


agent = create_agent(
    model="openai:gpt-4o-mini",
    tools=[get_weather],
    middleware=[DynamicToolMiddleware()],
)
result = agent.invoke(
    {"messages": [HumanMessage("What's the weather in NYC? Also calculate a 20% tip on a $85 bill")]}
)
for msg in result["messages"]:
    msg.pretty_print()
```