- Fix ToolCallRequest import path (langgraph.prebuilt.tool_node)
- Use Runnable type for model with tools bound (bind_tools returns Runnable)
- Handle args_schema that may be dict or BaseModel
- Sort imports and __all__ in __init__.py
- Add noqa comments for intentional fullwidth Unicode in DeepSeek markers
- Add return type annotations to __init__ methods
- Fix line length issues in prompt strings
- Remove unnecessary else after return
- Add noqa for lazy imports (intentional for avoiding circular imports)
- Fix test_baseline_vulnerability.py parameter count (4 not 5)
Update arg hijacking and add strategy comparison tests to evaluate:
1. CombinedStrategy (CheckTool + ParseData)
2. IntentVerificationStrategy alone
3. All three strategies combined
Tests pass if any strategy successfully defends against the attack,
helping identify which strategies work best for different attack types.
Adds a new defense strategy that detects when tool results attempt to
override user-specified values (e.g., changing email recipients, modifying
subjects). This complements CheckToolStrategy which detects unauthorized
tool calls.
- IntentVerificationStrategy compares tool results against user's original
intent from conversation history
- Uses same marker-based parsing as other strategies for injection safety
- Add create_tool_request_with_user_message() helper for tests
- Update arg hijacking tests to use IntentVerificationStrategy
- Add argument hijacking test cases (BCC injection, subject manipulation,
body append, recipient swap) to test subtle attacks where tool calls are
expected but arguments are manipulated
- Add Google Gemini (gemini-3-flash-preview) to all benchmark tests
- Use granite4:small-h instead of tiny-h for more reliable Ollama tests
- DRY up Ollama config by using constants from conftest
The security property we care about is whether malicious tools are triggered,
not whether malicious-looking strings appear in output. Data may legitimately
contain URLs/emails that look suspicious but aren't actionable injections.
- Replace string-based assertions with check_triggers_tools() that verifies
the sanitized output doesn't trigger target tools when fed back to model
- Remove assert_*_blocked functions that checked for domain strings
- Simplify INJECTION_TEST_CASES to (payload, tools, tool_name, target_tools)
- Add DEFAULT_INJECTION_MARKERS list covering major LLM providers:
- Defense prompt delimiters
- Llama/Mistral: [INST], <<SYS>>
- OpenAI/Qwen (ChatML): <|im_start|>, <|im_end|>
- Anthropic Claude: Human:/Assistant: (with newline prefix)
- DeepSeek: fullwidth Unicode markers
- Google Gemma: <start_of_turn>, <end_of_turn>
- Vicuna: USER:/ASSISTANT:
- Generic XML role markers
- Add sanitize_markers() function to strip injection markers from content
- Add configurable sanitize_markers param to CheckToolStrategy and ParseDataStrategy
- Add filter mode (on_injection='filter') that uses model's text response
when injection detected (no extra LLM call needed)
- Add _get_tool_schema() for tool return type descriptions in ParseDataStrategy
- Export DEFAULT_INJECTION_MARKERS and sanitize_markers from __init__.py
- Rename test_prompt_injection_combined.py -> test_prompt_injection_baseline_vs_protected.py
- Delete redundant test_prompt_injection_defense_extended.py
- Skip E2E tests by default (require RUN_BENCHMARK_TESTS=1)
- Reduce Ollama models to frontier only (granite4:tiny-h)
- Refactor to reduce code duplication in test files
- Update docstrings with cross-references
Test organization:
- test_prompt_injection_defense.py: Unit tests with mocks (CI, fast)
- test_prompt_injection_baseline_vs_protected.py: E2E baseline vs protected
- test_prompt_injection_token_benchmark.py: Token usage benchmarks
To run E2E tests with real models:
RUN_BENCHMARK_TESTS=1 uv run pytest tests/unit_tests/agents/middleware/implementations/test_prompt_injection_* -svv
- test_prompt_injection_combined.py: single test shows both baseline
vulnerability and protected status for each model/payload
- test_prompt_injection_token_benchmark.py: measures token usage across
no_defense, check_only, parse_only, and combined strategies
- Create conftest.py with shared tools, payloads, fixtures, and helpers
- Consolidate extended tests into simple parametrized test classes
- Add multi_language and json_injection to test cases (5 total attacks)
- Baseline and protected tests now use same test cases for comparison
- 65 tests each: 5 attacks × 13 models (3 OpenAI, 3 Anthropic, 7 Ollama)
- Remove unused SystemMessage import
- Fix reference to non-existent e2e test file
- Add anthropic_model to all parameterized security tests
- Now tests both OpenAI (gpt-5.2) and Anthropic (claude-sonnet-4-5)
Implements defense against indirect prompt injection attacks from external/untrusted
data sources (tool results, web content, etc.) based on the paper:
'Defense Against Indirect Prompt Injection via Tool Result Parsing'
https://arxiv.org/html/2601.04795v1
Features:
- Pluggable DefenseStrategy protocol for extensibility
- Built-in strategies: CheckToolStrategy, ParseDataStrategy, CombinedStrategy
- Factory methods for recommended configurations from paper
- Focuses on tool results as primary attack vector
- Extensible to other external data sources in the future
Key components:
- PromptInjectionDefenseMiddleware: Main middleware class
- DefenseStrategy: Protocol for custom defense implementations
- CheckToolStrategy: Detects and removes tool-triggering content
- ParseDataStrategy: Extracts only required data with format constraints
- CombinedStrategy: Chains multiple strategies together
Usage:
agent = create_agent(
'openai:gpt-4o',
middleware=[
PromptInjectionDefenseMiddleware.check_then_parse('openai:gpt-4o'),
],
)
Implementations using yield return generators, which are of type
`Iterator`.
This is technically a breaking change for implementers, however, known
existing implementations (in `langchain-community`) use `yield`, so they
already return `Iterator`s. For callers, it is not breaking.
Closes#25718
it looks scary but i promise it is not
improving documentation consistency across core. primarily update
docstrings and comments for better formatting, readability, and
accuracy, as well as add minor clarifications and formatting
improvements to user-facing documentation.
it looks scary but i promise it is not
improving documentation consistency across langchain. primarily update
docstrings and comments for better formatting, readability, and
accuracy, as well as add minor clarifications and formatting
improvements to user-facing documentation.