The security property we care about is whether malicious tools are triggered, not whether malicious-looking strings appear in output. Data may legitimately contain URLs/emails that look suspicious but aren't actionable injections.
- Replace string-based assertions with check_triggers_tools(), which verifies the sanitized output doesn't trigger target tools when fed back to the model
- Remove assert_*_blocked functions that checked for domain strings
- Simplify INJECTION_TEST_CASES to (payload, tools, tool_name, target_tools)
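A minimal sketch of what such a trigger-based check could look like, assuming a LangChain chat model that supports `.bind_tools()` and returns an `AIMessage` whose `.tool_calls` is a list of dicts with a `"name"` key; the `model` and `sanitize` names below are illustrative, not the actual test helpers:

```python
def check_triggers_tools(model, sanitized_output, tools, target_tools):
    """Return True if feeding `sanitized_output` back to the model causes it
    to call any of the `target_tools`."""
    bound = model.bind_tools(tools)
    response = bound.invoke(sanitized_output)
    called = {call["name"] for call in (getattr(response, "tool_calls", None) or [])}
    return bool(called & set(target_tools))


# Hypothetical usage inside a test, assuming `sanitize` is the function under
# test and INJECTION_TEST_CASES holds (payload, tools, tool_name, target_tools):
def test_sanitized_output_does_not_trigger_tools(model):
    for payload, tools, tool_name, target_tools in INJECTION_TEST_CASES:
        sanitized = sanitize(payload)
        assert not check_triggers_tools(model, sanitized, tools, target_tools)
```

The key design point is that the assertion exercises the model's behavior rather than scanning the sanitized string for suspicious substrings.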
Packages
Important
This repository is structured as a monorepo, with its packages located in the libs/ directory. Notable packages in this directory include:
core/ # Core primitives and abstractions for langchain
langchain/ # langchain-classic
langchain_v1/ # langchain
partners/ # Certain third-party providers integrations (see below)
standard-tests/ # Standardized tests for integrations
text-splitters/ # Text splitter utilities
(Each package contains its own README.md file with specific details about that package.)
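Each of these directories is published as its own pip distribution (for example, `langchain-core` imports as `langchain_core` and `langchain-text-splitters` as `langchain_text_splitters`). A small, illustrative usage sketch:

```python
# Illustrative only: the libs/ packages are installed separately,
# e.g. `pip install langchain-core langchain-text-splitters`.
from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
chunks = splitter.split_documents([Document(page_content="Some long text to split...")])
print(len(chunks))
```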
Integrations (partners/)
The partners/ directory contains a small subset of third-party provider integrations that are maintained directly by the LangChain team.
Most integrations have been moved to their own repositories for improved versioning, dependency management, collaboration, and testing. This includes packages from popular providers such as Google and AWS. Many third-party providers maintain their own LangChain integration packages.
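As an illustrative example (assuming the `langchain-openai` partner package and an OpenAI API key are available), a provider integration is installed and imported as its own package:

```python
# Hypothetical example: install the provider package separately,
# e.g. `pip install langchain-openai`, then import from it directly.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")  # model name is illustrative
print(llm.invoke("Hello!").content)
```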
For a full list of LangChain integrations, refer to the LangChain Integrations documentation.