Add caching to BaseChatModel (issue #1644) (#5089)

# Add caching to BaseChatModel
Fixes #1644

(Sidenote: While testing, I noticed we have multiple implementations of
Fake LLMs, used for testing. I consolidated them.)

## Who can review?
Community members can review the PR once tests pass. Tag
maintainers/contributors who might be interested:
Models
- @hwchase17
- @agola11

Twitter: [@UmerHAdil](https://twitter.com/@UmerHAdil) | Discord:
RicChilligerDude#7589

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
Author: UmerHA
Date: 2023-06-24 20:45:09 +02:00
Committed by: GitHub
Parent: c289cc891a
Commit: 068142fce2

11 changed files with 465 additions and 63 deletions


@@ -0,0 +1,9 @@
# Caching
LangChain provides an optional caching layer for Chat Models. This is useful for two reasons:

- It can save you money by reducing the number of API calls you make to the LLM provider, if you often request the same completion.
- It can speed up your application by reducing the number of API calls you make to the LLM provider.
import CachingChat from "@snippets/modules/model_io/models/chat/how_to/chat_model_caching.mdx"
<CachingChat/>
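
For readers skimming the diff, here is a minimal sketch of what the cached chat-model flow looks like, assuming the in-memory cache backend and `ChatOpenAI` as the example model (the imported snippet above covers the full set of cache backends):

```python
import langchain
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI

# Register a global cache; chat model calls consult it before hitting the API.
langchain.llm_cache = InMemoryCache()

llm = ChatOpenAI()

# First call: goes to the LLM provider and stores the result in the cache.
llm.predict("Tell me a joke")

# Second identical call: served from the cache, so it is faster and costs nothing.
llm.predict("Tell me a joke")
```

This sketch assumes an OpenAI API key is configured in the environment; any model built on `BaseChatModel` should behave the same way once a cache is set.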