## Problem

ChatLiteLLM encounters a `ValidationError` when using the cache on subsequent calls, producing the following error:

```
ValidationError(model='ChatResult', errors=[{'loc': ('generations', 0, 'type'), 'msg': "unexpected value; permitted: 'ChatGeneration'", 'type': 'value_error.const', 'ctx': {'given': 'Generation', 'permitted': ('ChatGeneration',)}}])
```

This occurs because:

1. The cache stores `Generation` objects (with `type="Generation"`)
2. `ChatResult` expects `ChatGeneration` objects (with `type="ChatGeneration"` and a required `message` field)
3. When cached values are retrieved, validation fails due to the type mismatch

## Solution

Added graceful handling in both the sync (`_generate_with_cache`) and async (`_agenerate_with_cache`) cache methods to:

1. **Detect** when cached values contain `Generation` objects instead of the expected `ChatGeneration` objects
2. **Convert** them to `ChatGeneration` objects by wrapping the text content in an `AIMessage`
3. **Preserve** all original metadata (`generation_info`)
4. **Allow** `ChatResult` creation to succeed without validation errors

## Example

```python
# Before this fix, the second call would fail with ValidationError
from langchain_community.chat_models import ChatLiteLLM
from langchain_community.cache import SQLiteCache
from langchain.globals import set_llm_cache

set_llm_cache(SQLiteCache(database_path="cache.db"))

llm = ChatLiteLLM(model_name="openai/gpt-4o", cache=True, temperature=0)
print(llm.predict("test"))  # Works fine (cache empty)
print(llm.predict("test"))  # Now works instead of raising ValidationError

# After: seamlessly handles both Generation and ChatGeneration objects
```

## Changes

- **`libs/core/langchain_core/language_models/chat_models.py`**:
  - Added `Generation` import from `langchain_core.outputs`
  - Enhanced the cache retrieval logic in the `_generate_with_cache` and `_agenerate_with_cache` methods
  - Added conversion from `Generation` to `ChatGeneration` objects when needed
- **`libs/core/tests/unit_tests/language_models/chat_models/test_cache.py`**:
  - Added a test case to validate that the conversion logic handles mixed object types

## Impact

- **Backward compatible**: Existing code continues to work unchanged
- **Minimal change**: Only affects the cache retrieval path; no API changes
- **Robust**: Handles both legacy cached `Generation` objects and new `ChatGeneration` objects
- **Preserves data**: All original content and metadata is maintained during conversion

Fixes #22389.
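The conversion described in the Solution section can be sketched as follows. This uses simplified stand-in dataclasses that mirror only the relevant fields of `langchain_core`'s `Generation`, `ChatGeneration`, and `AIMessage` types, and `coerce_to_chat_generation` is a hypothetical helper name for illustration, not the actual code added by the PR:

```python
from dataclasses import dataclass, field

# Stand-ins for the langchain_core types (simplified for illustration).
@dataclass
class AIMessage:
    content: str

@dataclass
class Generation:
    text: str
    generation_info: dict = field(default_factory=dict)

@dataclass
class ChatGeneration:
    message: AIMessage
    generation_info: dict = field(default_factory=dict)

def coerce_to_chat_generation(gen):
    """Wrap a legacy cached Generation in a ChatGeneration, preserving metadata."""
    if isinstance(gen, ChatGeneration):
        return gen  # already the expected type, nothing to do
    # Wrap the plain text content in an AIMessage; keep generation_info intact
    return ChatGeneration(
        message=AIMessage(content=gen.text),
        generation_info=gen.generation_info,
    )

# Simulated cache contents mixing legacy and current object types
cached = [
    Generation(text="hello", generation_info={"finish_reason": "stop"}),
    ChatGeneration(message=AIMessage(content="hi")),
]
fixed = [coerce_to_chat_generation(g) for g in cached]
print(fixed[0].message.content)  # hello
```

Applying this coercion to each cached value before constructing `ChatResult` is what lets validation succeed for both legacy and current cache entries.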
---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
> [!NOTE]
> Looking for the JS/TS library? Check out LangChain.js.
LangChain is a framework for building LLM-powered applications. It helps you chain together interoperable components and third-party integrations to simplify AI application development — all while future-proofing decisions as the underlying technology evolves.
```bash
pip install -U langchain
```
To learn more about LangChain, check out the docs. If you’re looking for more advanced customization or agent orchestration, check out LangGraph, our framework for building controllable agent workflows.
## Why use LangChain?
LangChain helps developers build applications powered by LLMs through a standard interface for models, embeddings, vector stores, and more.
Use LangChain for:
- Real-time data augmentation. Easily connect LLMs to diverse data sources and external / internal systems, drawing from LangChain’s vast library of integrations with model providers, tools, vector stores, retrievers, and more.
- Model interoperability. Swap models in and out as your engineering team experiments to find the best choice for your application’s needs. As the industry frontier evolves, adapt quickly — LangChain’s abstractions keep you moving without losing momentum.
## LangChain’s ecosystem
While the LangChain framework can be used standalone, it also integrates seamlessly with any LangChain product, giving developers a full suite of tools when building LLM applications.
To improve your LLM application development, pair LangChain with:
- LangSmith - Helpful for agent evals and observability. Debug poor-performing LLM app runs, evaluate agent trajectories, gain visibility in production, and improve performance over time.
- LangGraph - Build agents that can reliably handle complex tasks with LangGraph, our low-level agent orchestration framework. LangGraph offers customizable architecture, long-term memory, and human-in-the-loop workflows — and is trusted in production by companies like LinkedIn, Uber, Klarna, and GitLab.
- LangGraph Platform - Deploy and scale agents effortlessly with a purpose-built deployment platform for long running, stateful workflows. Discover, reuse, configure, and share agents across teams — and iterate quickly with visual prototyping in LangGraph Studio.
## Additional resources
- Tutorials: Simple walkthroughs with guided examples on getting started with LangChain.
- How-to Guides: Quick, actionable code snippets for topics such as tool calling, RAG use cases, and more.
- Conceptual Guides: Explanations of key concepts behind the LangChain framework.
- LangChain Forum: Connect with the community and share all of your technical questions, ideas, and feedback.
- API Reference: Detailed reference on navigating base packages and integrations for LangChain.