From 0b80bec0158a53e8941d09bc442a999ae08d33a3 Mon Sep 17 00:00:00 2001
From: ccurme
Date: Fri, 14 Mar 2025 13:09:38 -0400
Subject: [PATCH] docs: fix typo (#30288)

---
 docs/docs/integrations/chat/anthropic.ipynb | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/integrations/chat/anthropic.ipynb b/docs/docs/integrations/chat/anthropic.ipynb
index d9f2ea00d22..d5970040d4d 100644
--- a/docs/docs/integrations/chat/anthropic.ipynb
+++ b/docs/docs/integrations/chat/anthropic.ipynb
@@ -525,7 +525,7 @@
     "\n",
     "Prompt caching can be used in [multi-turn conversations](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching#continuing-a-multi-turn-conversation) to maintain context from earlier messages without redundant processing.\n",
     "\n",
-    "We can enable incremental caching by marking the final message with `cache_control`. Claude will automatically use the longest previously-cacched prefix for follow-up messages.\n",
+    "We can enable incremental caching by marking the final message with `cache_control`. Claude will automatically use the longest previously-cached prefix for follow-up messages.\n",
     "\n",
     "Below, we implement a simple chatbot that incorporates this feature. We follow the LangChain [chatbot tutorial](/docs/tutorials/chatbot/), but add a custom [reducer](https://langchain-ai.github.io/langgraph/concepts/low_level/#reducers) that automatically marks the last content block in each user message with `cache_control`. See below:"
   ]
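
For context on the pattern the edited cell describes, here is a minimal sketch of incremental prompt caching with `cache_control`, assuming `langchain-anthropic` is installed and `ANTHROPIC_API_KEY` is set. The helper name `mark_for_caching` and the model string are illustrative stand-ins, not from the notebook; the notebook itself wires this same marking step into a LangGraph reducer instead of calling a helper inline.

```python
# Sketch of incremental prompt caching in multi-turn chat, as described in
# the docs cell above. Assumptions: langchain-anthropic installed,
# ANTHROPIC_API_KEY set, and a hypothetical helper/model name for illustration.
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import BaseMessage, HumanMessage

llm = ChatAnthropic(model="claude-3-7-sonnet-20250219")  # assumed model name


def mark_for_caching(message: BaseMessage) -> BaseMessage:
    """Return a copy of `message` whose last content block has cache_control."""
    content = message.content
    if isinstance(content, str):
        # Normalize plain-string content into Anthropic's content-block format.
        content = [{"type": "text", "text": content}]
    else:
        content = list(content)
    # Mark the final block as an (ephemeral) cache breakpoint.
    content[-1] = {**content[-1], "cache_control": {"type": "ephemeral"}}
    return message.model_copy(update={"content": content})


# First turn: the marked message establishes a cacheable prefix.
messages = [mark_for_caching(HumanMessage("Summarize prompt caching for me."))]
messages.append(llm.invoke(messages))

# Follow-up turn: mark only the new final user message; Claude automatically
# reuses the longest previously-cached prefix of the conversation.
messages.append(mark_for_caching(HumanMessage("How are cache reads billed?")))
response = llm.invoke(messages)
```

In the tutorial version this marking logic lives in a custom reducer on the graph's message state, so every appended user message is marked without callers having to remember the helper.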