{
"cells": [
{
"cell_type": "markdown",
"id": "8c5eb99a",
"metadata": {},
"source": [
"# How to inspect runnables\n",
"\n",
":::info Prerequisites\n",
"\n",
"This guide assumes familiarity with the following concepts:\n",
"- [LangChain Expression Language (LCEL)](/docs/concepts/#langchain-expression-language)\n",
"- [Chaining runnables](/docs/how_to/sequence/)\n",
"\n",
":::\n",
"\n",
"Once you create a runnable with [LangChain Expression Language](/docs/concepts/#langchain-expression-language), you may often want to inspect it to get a better sense for what is going on. This notebook covers some methods for doing so.\n",
"\n",
"This guide shows some ways you can programmatically introspect the internal steps of chains. If you are instead interested in debugging issues in your chain, see [this section](/docs/how_to/debugging) instead.\n",
"\n",
"First, let's create an example chain. We will create one that does retrieval:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d816e954",
"metadata": {},
"outputs": [],
"source": [
"%pip install -qU langchain langchain-openai faiss-cpu tiktoken"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "139228c2",
"metadata": {},
"outputs": [],
"source": [
"from langchain_community.vectorstores import FAISS\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"from langchain_core.prompts import ChatPromptTemplate\n",
"from langchain_core.runnables import RunnablePassthrough\n",
"from langchain_openai import ChatOpenAI, OpenAIEmbeddings\n",
"\n",
"vectorstore = FAISS.from_texts(\n",
" [\"harrison worked at kensho\"], embedding=OpenAIEmbeddings()\n",
")\n",
"retriever = vectorstore.as_retriever()\n",
"\n",
"template = \"\"\"Answer the question based only on the following context:\n",
"{context}\n",
"\n",
"Question: {question}\n",
"\"\"\"\n",
"prompt = ChatPromptTemplate.from_template(template)\n",
"\n",
"model = ChatOpenAI()\n",
"\n",
"chain = (\n",
" {\"context\": retriever, \"question\": RunnablePassthrough()}\n",
" | prompt\n",
" | model\n",
" | StrOutputParser()\n",
")"
]
},
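{
"cell_type": "markdown",
"id": "7d0c8a1e",
"metadata": {},
"source": [
"Optionally, if your `OPENAI_API_KEY` environment variable is set, you can sanity-check the chain end to end before inspecting it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1f6b9e55",
"metadata": {},
"outputs": [],
"source": [
"# Optional: run the chain once to confirm it works end to end.\n",
"# Requires a valid OPENAI_API_KEY in your environment.\n",
"chain.invoke(\"where did harrison work?\")"
]
},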
{
"cell_type": "markdown",
"id": "849e3c42",
"metadata": {},
"source": [
"## Get a graph\n",
"\n",
"You can use the `get_graph()` method to get a graph representation of the runnable:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2448b6c2",
"metadata": {},
"outputs": [],
"source": [
"chain.get_graph()"
]
},
{
"cell_type": "markdown",
"id": "065b02fb",
"metadata": {},
"source": [
"## Print a graph\n",
"\n",
"While that is not super legible, you can use the `print_ascii()` method to show that graph in a way that's easier to understand:"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "d5ab1515",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" +---------------------------------+ \n",
" | Parallel<context,question>Input | \n",
" +---------------------------------+ \n",
" ** ** \n",
" *** *** \n",
" ** ** \n",
"+----------------------+ +-------------+ \n",
"| VectorStoreRetriever | | Passthrough | \n",
"+----------------------+ +-------------+ \n",
" ** ** \n",
" *** *** \n",
" ** ** \n",
" +----------------------------------+ \n",
" | Parallel<context,question>Output | \n",
" +----------------------------------+ \n",
" * \n",
" * \n",
" * \n",
" +--------------------+ \n",
" | ChatPromptTemplate | \n",
" +--------------------+ \n",
" * \n",
" * \n",
" * \n",
" +------------+ \n",
" | ChatOpenAI | \n",
" +------------+ \n",
" * \n",
" * \n",
" * \n",
" +-----------------+ \n",
" | StrOutputParser | \n",
" +-----------------+ \n",
" * \n",
" * \n",
" * \n",
" +-----------------------+ \n",
" | StrOutputParserOutput | \n",
" +-----------------------+ \n"
]
}
],
"source": [
"chain.get_graph().print_ascii()"
]
},
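{
"cell_type": "markdown",
"id": "3b1f2c77",
"metadata": {},
"source": [
"You can also render the same graph as [Mermaid](https://mermaid.js.org/) syntax, which is handy for embedding in documentation. This is a minimal sketch assuming a recent version of `langchain-core` that includes the graph's `draw_mermaid()` method; paste the output into a renderer such as https://mermaid.live to visualize it:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9c4e8a21",
"metadata": {},
"outputs": [],
"source": [
"# Render the chain's graph as Mermaid syntax (assumes a recent langchain-core).\n",
"# The resulting string can be pasted into any Mermaid renderer.\n",
"print(chain.get_graph().draw_mermaid())"
]
},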
{
"cell_type": "markdown",
"id": "2babf851",
"metadata": {},
"source": [
"## Get the prompts\n",
"\n",
"You may want to see just the prompts that are used in a chain with the `get_prompts()` method:"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "34b2118d",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[ChatPromptTemplate(input_variables=['context', 'question'], messages=[HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['context', 'question'], template='Answer the question based only on the following context:\\n{context}\\n\\nQuestion: {question}\\n'))])]"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain.get_prompts()"
]
},
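{
"cell_type": "markdown",
"id": "5e2d7c10",
"metadata": {},
"source": [
"For example, you can check which input variables each prompt expects via the standard `input_variables` attribute on prompt templates (a minimal sketch):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8a3f4b92",
"metadata": {},
"outputs": [],
"source": [
"# Inspect the input variables each prompt in the chain expects.\n",
"for chain_prompt in chain.get_prompts():\n",
"    print(chain_prompt.input_variables)"
]
},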
{
"cell_type": "markdown",
"id": "c5a74bd5",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"You've now learned how to introspect your composed LCEL chains.\n",
"\n",
"Next, check out the other how-to guides on runnables in this section, or the related how-to guide on [debugging your chains](/docs/how_to/debugging)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.1"
}
},
"nbformat": 4,
"nbformat_minor": 5
}