diff --git a/docs/docs/expression_language/get_started.ipynb b/docs/docs/expression_language/get_started.ipynb
index ec0ded495bd..a7c03adb583 100644
--- a/docs/docs/expression_language/get_started.ipynb
+++ b/docs/docs/expression_language/get_started.ipynb
@@ -76,15 +76,15 @@
 "id": "81c502c5-85ee-4f36-aaf4-d6e350b7792f",
 "metadata": {},
 "source": [
- "Notice this line of this code, where we piece together then different components into a single chain using LCEL:\n",
+ "Notice this line of the code, where we piece together these different components into a single chain using LCEL:\n",
 "\n",
 "```\n",
 "chain = prompt | model | output_parser\n",
 "```\n",
 "\n",
- "The `|` symbol is similar to a [unix pipe operator](https://en.wikipedia.org/wiki/Pipeline_(Unix)), which chains together the different components feeds the output from one component as input into the next component. \n",
+ "The `|` symbol is similar to a [unix pipe operator](https://en.wikipedia.org/wiki/Pipeline_(Unix)), which chains together the different components, feeding the output from one component as input into the next component. \n",
 "\n",
- "In this chain the user input is passed to the prompt template, then the prompt template output is passed to the model, then the model output is passed to the output parser. Let's take a look at each component individually to really understand what's going on. "
+ "In this chain the user input is passed to the prompt template, then the prompt template output is passed to the model, then the model output is passed to the output parser. Let's take a look at each component individually to really understand what's going on."
 ]
 },
 {
@@ -233,7 +233,7 @@
 "### 3. Output parser\n",
 "\n",
 "And lastly we pass our `model` output to the `output_parser`, which is a `BaseOutputParser` meaning it takes either a string or a \n",
- "`BaseMessage` as input. The `StrOutputParser` specifically simple converts any input into a string."
+ "`BaseMessage` as input. The `StrOutputParser` simply converts any input into a string."
 ]
 },
 {
@@ -293,7 +293,7 @@
 "source": [
 ":::info\n",
 "\n",
- "Note that if you’re curious about the output of any components, you can always test out a smaller version of the chain such as `prompt` or `prompt | model` to see the intermediate results:\n",
+ "Note that if you’re curious about the output of any components, you can always test out a smaller version of the chain such as `prompt` or `prompt | model` to see the intermediate results:\n",
 "\n",
 ":::"
 ]
 },
 {
@@ -321,7 +321,7 @@
 "source": [
 "## RAG Search Example\n",
 "\n",
- "For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions. "
+ "For our next example, we want to run a retrieval-augmented generation chain to add some context when responding to questions."
 ]
 },
 {
@@ -450,7 +450,7 @@
 "With the flow being:\n",
 "\n",
 "1. The first steps create a `RunnableParallel` object with two entries. The first entry, `context` will include the document results fetched by the retriever. The second entry, `question` will contain the user’s original question. To pass on the question, we use `RunnablePassthrough` to copy this entry. \n",
- "2. Feed the dictionary from the step above to the `prompt` component. It then takes the user input which is `question` as well as the retrieved document which is `context` to construct a prompt and output a PromptValue. \n",
 "3. The `model` component takes the generated prompt, and passes into the OpenAI LLM model for evaluation. The generated output from the model is a `ChatMessage` object. \n",
 "4. Finally, the `output_parser` component takes in a `ChatMessage`, and transforms this into a Python string, which is returned from the invoke method.\n",
 "\n",
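
For readers looking at this diff without the surrounding notebook, here is a minimal sketch of the `prompt | model | output_parser` chain the first hunk describes. This is illustration only, not part of the diff: the prompt text and model choice are assumptions, and it presumes the `langchain-core` and `langchain-openai` packages plus an `OPENAI_API_KEY` in the environment.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("tell me a short joke about {topic}")
model = ChatOpenAI()  # model choice is an assumption
output_parser = StrOutputParser()

# The `|` operator chains the components together: the output of each
# component is fed as input into the next one.
chain = prompt | model | output_parser
print(chain.invoke({"topic": "ice cream"}))

# As the :::info note says, a smaller version of the chain exposes the
# intermediate results:
print(prompt.invoke({"topic": "ice cream"}))            # -> ChatPromptValue
print((prompt | model).invoke({"topic": "ice cream"}))  # -> AIMessage
```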
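
Likewise, a sketch of the four-step RAG flow described in the last hunk, under the same assumptions; the `DocArrayInMemorySearch` vector store (which additionally requires the `docarray` package), the sample text, and the template wording are placeholders standing in for whatever retriever the notebook actually configures.

```python
from langchain_community.vectorstores import DocArrayInMemorySearch
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# A toy retriever over a single in-memory document (placeholder content).
vectorstore = DocArrayInMemorySearch.from_texts(
    ["harrison worked at kensho"],
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question based only on the following context:\n"
    "{context}\n\nQuestion: {question}"
)
model = ChatOpenAI()
output_parser = StrOutputParser()

# Step 1: a RunnableParallel with two entries; the retriever fills `context`
# while RunnablePassthrough copies the user's question through unchanged.
setup_and_retrieval = RunnableParallel(
    {"context": retriever, "question": RunnablePassthrough()}
)

# Steps 2-4: prompt -> PromptValue, model -> ChatMessage, parser -> str.
chain = setup_and_retrieval | prompt | model | output_parser
print(chain.invoke("where did harrison work?"))
```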