docs: fix admonition formatting (#26801)

Erick Friis 2024-09-23 21:55:17 -07:00 committed by GitHub
parent 603d38f06d
commit 35081d2765
36 changed files with 68 additions and 75 deletions

View File

@ -17,7 +17,7 @@
"source": [
"# Build an Agent with AgentExecutor (Legacy)\n",
"\n",
":::{.callout-important}\n",
":::important\n",
"This section will cover building with the legacy LangChain AgentExecutor. These are fine for getting started, but past a certain point, you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph Agents](/docs/concepts/#langgraph) or the [migration guide](/docs/how_to/migrate_agent/)\n",
":::\n",
"\n",
@ -805,7 +805,7 @@
"\n",
"That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n",
"\n",
":::{.callout-important}\n",
":::important\n",
"This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd reccommend checking out [LangGraph](/docs/concepts/#langgraph)\n",
":::\n",
"\n",

View File

@ -17,11 +17,11 @@
"If you are planning to use the async APIs, it is recommended to use and extend [`AsyncCallbackHandler`](https://python.langchain.com/api_reference/core/callbacks/langchain_core.callbacks.base.AsyncCallbackHandler.html) to avoid blocking the event.\n",
"\n",
"\n",
":::{.callout-warning}\n",
":::warning\n",
"If you use a sync `CallbackHandler` while using an async method to run your LLM / Chain / Tool / Agent, it will still work. However, under the hood, it will be called with [`run_in_executor`](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.run_in_executor) which can cause issues if your `CallbackHandler` is not thread-safe.\n",
":::\n",
"\n",
":::{.callout-danger}\n",
":::danger\n",
"\n",
"If you're on `python<=3.10`, you need to remember to propagate `config` or `callbacks` when invoking other `runnable` from within a `RunnableLambda`, `RunnableGenerator` or `@tool`. If you do not do this,\n",
"the callbacks will not be propagated to the child runnables being invoked.\n",

View File

@ -19,7 +19,7 @@
"\n",
"If you are composing a chain of runnables and want to reuse callbacks across multiple executions, you can attach callbacks with the [`.with_config()`](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.with_config) method. This saves you the need to pass callbacks in each time you invoke the chain.\n",
"\n",
":::{.callout-important}\n",
":::important\n",
"\n",
"`with_config()` binds a configuration which will be interpreted as **runtime** configuration. So these callbacks will propagate to all child components.\n",
":::\n",

View File

@ -17,7 +17,7 @@
"\n",
"Most LangChain modules allow you to pass `callbacks` directly into the constructor (i.e., initializer). In this case, the callbacks will only be called for that instance (and any nested runs).\n",
"\n",
":::{.callout-warning}\n",
":::warning\n",
"Constructor callbacks are scoped only to the object they are defined on. They are **not** inherited by children of the object. This can lead to confusing behavior,\n",
"and it's generally better to pass callbacks as a run time argument.\n",
":::\n",

View File

@ -29,7 +29,7 @@
"| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |\n",
"\n",
"\n",
":::{.callout-important}\n",
":::important\n",
"* Dispatching custom callback events requires `langchain-core>=0.2.15`.\n",
"* Custom callback events can only be dispatched from within an existing `Runnable`.\n",
"* If using `astream_events`, you must use `version='v2'` to see custom events.\n",
@ -69,7 +69,7 @@
"We can use the `async` `adispatch_custom_event` API to emit custom events in an async setting. \n",
"\n",
"\n",
":::{.callout-important}\n",
":::important\n",
"\n",
"To see custom events via the astream events API, you need to use the newer `v2` API of `astream_events`.\n",
":::"

View File

@ -22,7 +22,7 @@
"\n",
"The **default** streaming implementation provides an`Iterator` (or `AsyncIterator` for asynchronous streaming) that yields a single value: the final output from the underlying chat model provider.\n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"\n",
"The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the the model can be swapped in for any other model as it supports the same standard interface.\n",
"\n",

View File

@ -39,7 +39,7 @@
"| `AIMessageChunk` / `HumanMessageChunk` / ... | Chunk variant of each type of message. |\n",
"\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"`ToolMessage` and `FunctionMessage` closely follow OpenAI's `function` and `tool` roles.\n",
"\n",
"This is a rapidly developing field and as more models add function calling capabilities. Expect that there will be additions to this schema.\n",
@ -145,7 +145,7 @@
"| `_astream` | Use to implement async version of `_stream`. | Optional |\n",
"\n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"The `_astream` implementation uses `run_in_executor` to launch the sync `_stream` in a separate thread if `_stream` is implemented, otherwise it fallsback to use `_agenerate`.\n",
"\n",
"You can use this trick if you want to reuse the `_stream` implementation, but if you're able to implement code that's natively async that's a better solution since that code will run with less overhead.\n",

View File

@ -37,12 +37,12 @@
"\n",
"The logic inside of `_get_relevant_documents` can involve arbitrary calls to a database or to the web using requests.\n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"By inherting from `BaseRetriever`, your retriever automatically becomes a LangChain [Runnable](/docs/concepts#interface) and will gain the standard `Runnable` functionality out of the box!\n",
":::\n",
"\n",
"\n",
":::{.callout-info}\n",
":::info\n",
"You can use a `RunnableLambda` or `RunnableGenerator` to implement a retriever.\n",
"\n",
"The main benefit of implementing a retriever as a `BaseRetriever` vs. a `RunnableLambda` (a custom [runnable function](/docs/how_to/functions)) is that a `BaseRetriever` is a well\n",

View File

@ -26,7 +26,7 @@
"\n",
"In this guide we provide an overview of these methods.\n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"\n",
"Models will perform better if the tools have well chosen names, descriptions and JSON schemas.\n",
":::"
@ -293,7 +293,7 @@
"id": "f18a2503-5393-421b-99fa-4a01dd824d0e",
"metadata": {},
"source": [
":::{.callout-caution}\n",
":::caution\n",
"By default, `@tool(parse_docstring=True)` will raise `ValueError` if the docstring does not parse correctly. See [API Reference](https://python.langchain.com/api_reference/core/tools/langchain_core.tools.tool.html) for detail and examples.\n",
":::"
]
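A small sketch of the behavior described in the caution (Google-style docstring; the tool itself is illustrative):

```python
from langchain_core.tools import tool

@tool(parse_docstring=True)
def multiply(a: int, b: int) -> int:
    """Multiply two integers.

    Args:
        a: First factor.
        b: Second factor.
    """
    return a * b

# A docstring that doesn't parse would raise ValueError at decoration time.
print(multiply.args)  # argument schema includes the per-argument descriptions
```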

View File

@ -63,7 +63,7 @@
"* The `load` methods is a convenience method meant solely for prototyping work -- it just invokes `list(self.lazy_load())`.\n",
"* The `alazy_load` has a default implementation that will delegate to `lazy_load`. If you're using async, we recommend overriding the default implementation and providing a native async implementation.\n",
"\n",
":::{.callout-important}\n",
":::important\n",
"When implementing a document loader do **NOT** provide parameters via the `lazy_load` or `alazy_load` methods.\n",
"\n",
"All configuration is expected to be passed through the initializer (__init__). This was a design choice made by LangChain to make sure that once a document loader has been instantiated it has all the information needed to load documents.\n",
@ -235,7 +235,7 @@
"id": "56cb443e-f987-4386-b4ec-975ee129adb2",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"\n",
"`load()` can be helpful in an interactive environment such as a jupyter notebook.\n",
"\n",

View File

@ -11,7 +11,7 @@
"\n",
"Data extraction attempts to generate structured representations of information found in text and other unstructured or semi-structured formats. [Tool-calling](/docs/concepts#functiontool-calling) LLM features are often used in this context. This guide demonstrates how to build few-shot examples of tool calls to help steer the behavior of extraction and similar applications.\n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"While this guide focuses how to use examples with a tool calling model, this technique is generally applicable, and will work\n",
"also with JSON more or prompt based techniques.\n",
":::\n",
@ -172,7 +172,7 @@
"\n",
"Each example contains an example `input` text and an example `output` showing what should be extracted from the text.\n",
"\n",
":::{.callout-important}\n",
":::important\n",
"This is a bit in the weeds, so feel free to skip.\n",
"\n",
"The format of the example needs to match the API used (e.g., tool calling or JSON mode etc.).\n",

View File

@ -291,7 +291,7 @@
"source": [
"Use [batch](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html) functionality to run the extraction in **parallel** across each chunk! \n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"You can often use .batch() to parallelize the extractions! `.batch` uses a threadpool under the hood to help you parallelize workloads.\n",
"\n",
"If your model is exposed via an API, this will likely speed up your extraction flow!\n",
@ -382,7 +382,7 @@
"\n",
"Another simple idea is to chunk up the text, but instead of extracting information from every chunk, just focus on the the most relevant chunks.\n",
"\n",
":::{.callout-caution}\n",
":::caution\n",
"It can be difficult to identify which chunks are relevant.\n",
"\n",
"For example, in the `car` article we're using here, most of the article contains key development information. So by using\n",

View File

@ -52,7 +52,7 @@
"id": "3e412374-3beb-4bbf-966b-400c1f66a258",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"This tutorial is meant to be simple, but generally should really include reference examples to squeeze out performance!\n",
":::"
]

View File

@ -303,7 +303,7 @@
"source": [
"## Streaming\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"[RunnableLambda](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableLambda.html) is best suited for code that does not need to support streaming. If you need to support streaming (i.e., be able to operate on chunks of inputs and yield chunks of outputs), use [RunnableGenerator](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.RunnableGenerator.html) instead as in the example below.\n",
":::\n",
"\n",

View File

@ -30,7 +30,7 @@
"\n",
"Let's first look at an extremely simple example of tracking token usage for a single Chat model call.\n",
"\n",
":::{.callout-danger}\n",
":::danger\n",
"\n",
"The callback handler does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). For support in a streaming context, refer to the corresponding guide for chat models [here](/docs/how_to/chat_token_usage_tracking).\n",
"\n",
@ -162,7 +162,7 @@
"source": [
"## Streaming\n",
"\n",
":::{.callout-danger}\n",
":::danger\n",
"\n",
"`get_openai_callback` does not currently support streaming token counts for legacy language models (e.g., `langchain_openai.OpenAI`). If you want to count tokens correctly in a streaming context, there are a number of options:\n",
"\n",

View File

@ -71,7 +71,7 @@
"id": "eed8baf2-f4c2-44c1-b47d-e9f560af6202",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"\n",
"LCEL automatically upgrades the function `parse` to `RunnableLambda(parse)` when composed using a `|` syntax.\n",
"\n",
@ -140,7 +140,7 @@
"id": "62192808-c7e1-4b3a-85f4-b7901de7c0b8",
"metadata": {},
"source": [
":::{.callout-important}\n",
":::important\n",
"\n",
"Please wrap the streaming parser in `RunnableGenerator` as we may stop automatically upgrading it with the `|` syntax.\n",
":::"
@ -219,7 +219,7 @@
"\n",
"When the output from the chat model or LLM is malformed, the can throw an `OutputParserException` to indicate that parsing fails because of bad input. Using this exception allows code that utilizes the parser to handle the exceptions in a consistent manner.\n",
"\n",
":::{.callout-tip} Parsers are Runnables! 🏃\n",
":::tip Parsers are Runnables! 🏃\n",
"\n",
"Because `BaseOutputParser` implements the `Runnable` interface, any custom parser you will create this way will become valid LangChain Runnables and will benefit from automatic async support, batch interface, logging support etc.\n",
":::\n"
@ -458,7 +458,7 @@
"id": "18f83192-37e8-43f5-ab29-9568b1279f1b",
"metadata": {},
"source": [
":::{.callout-note}\n",
":::note\n",
"The parser will work with either the output from an LLM (a string) or the output from a chat model (an `AIMessage`)!\n",
":::"
]

View File

@ -20,7 +20,7 @@
"\n",
"While some model providers support [built-in ways to return structured output](/docs/how_to/structured_output), not all do. We can use an output parser to help users to specify an arbitrary JSON schema via the prompt, query a model for outputs that conform to that schema, and finally parse that schema as JSON.\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON.\n",
":::"
]

View File

@ -22,7 +22,7 @@
"\n",
"This guide shows you how to use the [`XMLOutputParser`](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.xml.XMLOutputParser.html) to prompt models for XML output, then and parse that output into a usable format.\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed XML.\n",
":::\n",
"\n",

View File

@ -22,7 +22,7 @@
"\n",
"This output parser allows users to specify an arbitrary schema and query LLMs for outputs that conform to that schema, using YAML to format their response.\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed YAML.\n",
":::\n"
]

View File

@ -118,7 +118,7 @@
"id": "392cd4c4-e7ed-4ab8-934d-f7a4eca55ee1",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"Note that when composing a RunnableParallel with another Runnable we don't even need to wrap our dictionary in the RunnableParallel class — the type conversion is handled for us. In the context of a chain, these are equivalent:\n",
":::\n",
"\n",

View File

@ -312,7 +312,7 @@
"id": "b437da5d-ca09-4d15-9be2-c35e5a1ace77",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"\n",
"Check out the [LangSmith trace](https://smith.langchain.com/public/1c055a3b-0236-4670-a3fb-023d418ba796/r)\n",
"\n",
@ -410,7 +410,7 @@
"id": "7440f785-29c5-4c6b-9656-0d9d5efbac05",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"\n",
"View [LangSmith trace](https://smith.langchain.com/public/0eeddf06-3a7b-4f27-974c-310ca8160f60/r)\n",
"\n",

View File

@ -18,7 +18,7 @@
"\n",
"Below we walk through an example with a simple [LLM chain](/docs/tutorials/llm_chain).\n",
"\n",
":::{.callout-caution}\n",
":::caution\n",
"\n",
"De-serialization using `load` and `loads` can instantiate any serializable LangChain object. Only use this feature with trusted inputs!\n",
"\n",

View File

@ -17,7 +17,7 @@
"source": [
"## tiktoken\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"[tiktoken](https://github.com/openai/tiktoken) is a fast `BPE` tokenizer created by `OpenAI`.\n",
":::\n",
"\n",
@ -171,7 +171,7 @@
"source": [
"## spaCy\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"[spaCy](https://spacy.io/) is an open-source software library for advanced natural language processing, written in the programming languages Python and Cython.\n",
":::\n",
"\n",
@ -363,7 +363,7 @@
"source": [
"## NLTK\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"[The Natural Language Toolkit](https://en.wikipedia.org/wiki/Natural_Language_Toolkit), or more commonly [NLTK](https://www.nltk.org/), is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for English written in the Python programming language.\n",
":::\n",
"\n",
@ -466,7 +466,7 @@
"source": [
"## KoNLPY\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"[KoNLPy: Korean NLP in Python](https://konlpy.org/en/latest/) is is a Python package for natural language processing (NLP) of the Korean language.\n",
":::\n",
"\n",

View File

@ -248,7 +248,7 @@
"\n",
"We will use [`StrOutputParser`](https://python.langchain.com/api_reference/core/output_parsers/langchain_core.output_parsers.string.StrOutputParser.html) to parse the output from the model. This is a simple parser that extracts the `content` field from an `AIMessageChunk`, giving us the `token` returned by the model.\n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"LCEL is a *declarative* way to specify a \"program\" by chainining together different LangChain primitives. Chains created using LCEL benefit from an automatic implementation of `stream` and `astream` allowing streaming of the final output. In fact, chains created with LCEL implement the entire standard Runnable interface.\n",
":::"
]
@ -306,7 +306,7 @@
"id": "1b399fb4-5e3c-4581-9570-6df9b42b623d",
"metadata": {},
"source": [
":::{.callout-note}\n",
":::note\n",
"The LangChain Expression language allows you to separate the construction of a chain from the mode in which it is used (e.g., sync/async, batch/streaming etc.). If this is not relevant to what you're building, you can also rely on a standard **imperative** programming approach by\n",
"caling `invoke`, `batch` or `stream` on each component individually, assigning the results to variables and then using them downstream as you see fit.\n",
"\n",
@ -385,11 +385,11 @@
"source": [
"Now, let's **break** streaming. We'll use the previous example and append an extraction function at the end that extracts the country names from the finalized JSON.\n",
"\n",
":::{.callout-warning}\n",
":::warning\n",
"Any steps in the chain that operate on **finalized inputs** rather than on **input streams** can break streaming functionality via `stream` or `astream`.\n",
":::\n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"Later, we will discuss the `astream_events` API which streams results from intermediate steps. This API will stream results from intermediate steps even if the chain contains steps that only operate on **finalized inputs**.\n",
":::"
]
@ -454,7 +454,7 @@
"\n",
"Let's fix the streaming using a generator function that can operate on the **input stream**.\n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"A generator function (a function that uses `yield`) allows writing code that operates on **input streams**\n",
":::"
]
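A sketch of such a generator, assuming the partial-JSON stream of `{"countries": [{"name": ...}, ...]}` used in that guide (composing a generator function with `|` wraps it in a `RunnableGenerator`):

```python
from typing import Any, Iterator

def extract_country_names_streaming(
    input_stream: Iterator[Any],
) -> Iterator[str]:
    """Yield each country name as soon as it appears in the partial JSON."""
    seen = set()
    for parsed in input_stream:  # each item is the latest partial JSON
        if not isinstance(parsed, dict):
            continue
        for country in parsed.get("countries", []):
            name = country.get("name") if isinstance(country, dict) else None
            if name and name not in seen:
                seen.add(name)
                yield name
```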
@ -585,7 +585,7 @@
"\n",
"This is OK 🥹! Not all components have to implement streaming -- in some cases streaming is either unnecessary, difficult or just doesn't make sense.\n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"An LCEL chain constructed using non-streaming components, will still be able to stream in a lot of cases, with streaming of partial output starting after the last non-streaming step in the chain.\n",
":::"
]
@ -654,7 +654,7 @@
"\n",
"Event Streaming is a **beta** API. This API may change a bit based on feedback.\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"\n",
"This guide demonstrates the `V2` API and requires langchain-core >= 0.2. For the `V1` API compatible with older versions of LangChain, see [here](https://python.langchain.com/v0.1/docs/expression_language/streaming/#using-stream-events).\n",
":::"
@ -748,7 +748,7 @@
"id": "32972939-2995-4b2e-84db-045adb044fad",
"metadata": {},
"source": [
":::{.callout-note}\n",
":::note\n",
"\n",
"Hey what's that funny version=\"v2\" parameter in the API?! 😾\n",
"\n",
@ -1129,7 +1129,7 @@
"source": [
"#### By Tags\n",
"\n",
":::{.callout-caution}\n",
":::caution\n",
"\n",
"Tags are inherited by child components of a given runnable. \n",
"\n",
@ -1336,11 +1336,11 @@
"source": [
"### Propagating Callbacks\n",
"\n",
":::{.callout-caution}\n",
":::caution\n",
"If you're using invoking runnables inside your tools, you need to propagate callbacks to the runnable; otherwise, no stream events will be generated.\n",
":::\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"When using `RunnableLambdas` or `@chain` decorator, callbacks are propagated automatically behind the scenes.\n",
":::"
]

View File

@ -17,7 +17,7 @@
"\n",
"\n",
"\n",
":::{.callout-note}\n",
":::note\n",
"\n",
"The **default** implementation does **not** provide support for token-by-token streaming, but it ensures that the model can be swapped in for any other model as it supports the same standard interface.\n",
"\n",
@ -114,7 +114,7 @@
"\n",
"LLMs also support the standard [astream events](https://python.langchain.com/api_reference/core/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable.astream_events) method.\n",
"\n",
":::{.callout-tip}\n",
":::tip\n",
"\n",
"`astream_events` is most useful when implementing streaming in a larger LLM application that contains multiple steps (e.g., an application that involves an `agent`).\n",
":::"

View File

@ -32,7 +32,7 @@
"\n",
"LangChain has a large collection of 3rd party tools. Please visit [Tool Integrations](/docs/integrations/tools/) for a list of the available tools.\n",
"\n",
":::{.callout-important}\n",
":::important\n",
"\n",
"When using 3rd party tools, make sure that you understand how the tool works, what permissions\n",
"it has. Read over its documentation and check if anything is required from you\n",

View File

@ -9,7 +9,7 @@
"\n",
"There are certain tools that we don't trust a model to execute on its own. One thing we can do in such situations is require human approval before the tool is invoked.\n",
"\n",
":::{.callout-info}\n",
":::info\n",
"\n",
"This how-to guide shows a simple way to add human-in-the-loop for code running in a jupyter notebook or in a terminal.\n",
"\n",

View File

@ -17,7 +17,7 @@
"source": [
"# How to add ad-hoc tool calling capability to LLMs and Chat Models\n",
"\n",
":::{.callout-caution}\n",
":::caution\n",
"\n",
"Some models have been fine-tuned for tool calling and provide a dedicated API for tool calling. Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling. Please see the [how to use a chat model to call tools](/docs/how_to/tool_calling) guide for more information.\n",
"\n",
@ -318,7 +318,7 @@
"id": "e1f08255-f146-4f4a-be43-5c21c1d3ae83",
"metadata": {},
"source": [
":::{.callout-important}\n",
":::important\n",
"\n",
"🎉 Amazing! 🎉 We now instructed our model on how to **request** that a tool be invoked.\n",
"\n",

View File

@ -17,7 +17,7 @@
"source": [
"# [Deprecated] Experimental Anthropic Tools Wrapper\n",
"\n",
":::{.callout-warning}\n",
":::warning\n",
"\n",
"The Anthropic API officially supports tool-calling so this workaround is no longer needed. Please use [ChatAnthropic](/docs/integrations/chat/anthropic) with `langchain-anthropic>=0.1.15`.\n",
"\n",

View File

@ -69,7 +69,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
":::{.callout-note}\n",
":::note\n",
"\n",
"The `max_characters` parameter for **TextContentsOptions** used to be called `max_length` which is now deprecated. Make sure to use `max_characters` instead.\n",
"\n",

View File

@ -12,7 +12,7 @@
"This interface will only return things that are printed - therefore, if you want to use it to calculate an answer, make sure to have it print out the answer.\n",
"\n",
"\n",
":::{.callout-caution}\n",
":::caution\n",
"Python REPL can execute arbitrary code on the host machine (e.g., delete files, make network requests). Use with caution.\n",
"\n",
"For more information general security guidelines, please see https://python.langchain.com/docs/security/.\n",

View File

@ -526,7 +526,7 @@
"In addition to streaming back messages, it is also useful to be streaming back tokens.\n",
"We can do this with the `.astream_events` method.\n",
"\n",
":::{.callout-important}\n",
":::important\n",
"This `.astream_events` method only works with Python 3.11 or higher.\n",
":::"
]

View File

@ -29,7 +29,7 @@
"\n",
"In this tutorial, we will build a chain to extract structured information from unstructured text. \n",
"\n",
":::{.callout-important}\n",
":::important\n",
"This tutorial will only work with models that support **tool calling**\n",
":::"
]
@ -148,7 +148,7 @@
"1. Document the **attributes** and the **schema** itself: This information is sent to the LLM and is used to improve the quality of information extraction.\n",
"2. Do not force the LLM to make up information! Above we used `Optional` for the attributes allowing the LLM to output `None` if it doesn't know the answer.\n",
"\n",
":::{.callout-important}\n",
":::important\n",
"For best performance, document the schema well and make sure the model isn't force to return results if there's no information to be extracted in the text.\n",
":::\n",
"\n",
@ -258,7 +258,7 @@
"id": "bd1c493d-f9dc-4236-8da9-50f6919f5710",
"metadata": {},
"source": [
":::{.callout-important} \n",
":::important \n",
"\n",
"Extraction is Generative 🤯\n",
"\n",
@ -325,7 +325,7 @@
"id": "5f5cda33-fd7b-481e-956a-703f45e40e1d",
"metadata": {},
"source": [
":::{.callout-important}\n",
":::important\n",
"Extraction might not be perfect here. Please continue to see how to use **Reference Examples** to improve the quality of extraction, and see the **guidelines** section!\n",
":::"
]
@ -358,7 +358,7 @@
"id": "fba1d770-bf4d-4de4-9e4f-7384872ef0dc",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"When the schema accommodates the extraction of **multiple entities**, it also allows the model to extract **no entities** if no relevant information\n",
"is in the text by providing an empty list. \n",
"\n",

View File

@ -398,7 +398,7 @@
"id": "53263a65-4de2-4dd8-9291-6a8169ab6f1d",
"metadata": {},
"source": [
":::{.callout-tip}\n",
":::tip\n",
"\n",
"Check out the [LangSmith trace](https://smith.langchain.com/public/243301e4-4cc5-4e52-a6e7-8cfe9208398d/r) \n",
"\n",

View File

@ -18,7 +18,7 @@
"source": [
"# Summarize Text\n",
"\n",
":::{.callout-info}\n",
":::info\n",
"\n",
"This tutorial demonstrates text summarization using built-in chains and [LangGraph](https://langchain-ai.github.io/langgraph/).\n",
"\n",

View File

@ -18,13 +18,6 @@ class EscapePreprocessor(Preprocessor):
cell.source = re.sub(
r"```{=mdx}\n(.*?)\n```", r"\1", cell.source, flags=re.DOTALL
)
if ":::{.callout" in cell.source:
cell.source = re.sub(
r":::{.callout-([^}]*)}(.*?):::",
r":::\1\2:::",
cell.source,
flags=re.DOTALL,
)
# rewrite .ipynb links to .md
cell.source = re.sub(
r"\[([^\]]*)\]\((?![^\)]*//)([^)]*)\.ipynb\)",