diff --git a/docs/docs_skeleton/docs/modules/model_io/models/chat/chat_model_caching.mdx b/docs/docs_skeleton/docs/modules/model_io/models/chat/chat_model_caching.mdx index c34cb22326d..f38a1d8bab6 100644 --- a/docs/docs_skeleton/docs/modules/model_io/models/chat/chat_model_caching.mdx +++ b/docs/docs_skeleton/docs/modules/model_io/models/chat/chat_model_caching.mdx @@ -1,5 +1,5 @@ # Caching -LangChain provides an optional caching layer for Chat Models. This is useful for two reasons: +LangChain provides an optional caching layer for chat models. This is useful for two reasons: It can save you money by reducing the number of API calls you make to the LLM provider, if you're often requesting the same completion multiple times. It can speed up your application by reducing the number of API calls you make to the LLM provider. diff --git a/docs/docs_skeleton/docs/modules/model_io/models/chat/index.mdx b/docs/docs_skeleton/docs/modules/model_io/models/chat/index.mdx index 742b06a5320..ddef889977a 100644 --- a/docs/docs_skeleton/docs/modules/model_io/models/chat/index.mdx +++ b/docs/docs_skeleton/docs/modules/model_io/models/chat/index.mdx @@ -8,8 +8,8 @@ Head to [Integrations](/docs/integrations/chat/) for documentation on built-in i ::: Chat models are a variation on language models. -While chat models use language models under the hood, the interface they expose is a bit different. -Rather than expose a "text in, text out" API, they expose an interface where "chat messages" are the inputs and outputs. +While chat models use language models under the hood, the interface they use is a bit different. +Rather than using a "text in, text out" API, they use an interface where "chat messages" are the inputs and outputs. Chat model APIs are fairly new, so we are still figuring out the correct abstractions. diff --git a/docs/docs_skeleton/docs/modules/model_io/models/chat/prompts.mdx b/docs/docs_skeleton/docs/modules/model_io/models/chat/prompts.mdx index b85eb8a8cea..4d5c46d3d5b 100644 --- a/docs/docs_skeleton/docs/modules/model_io/models/chat/prompts.mdx +++ b/docs/docs_skeleton/docs/modules/model_io/models/chat/prompts.mdx @@ -1,6 +1,6 @@ # Prompts -Prompts for Chat models are built around messages, instead of just plain text. +Prompts for chat models are built around messages, instead of just plain text. import Prompts from "@snippets/modules/model_io/models/chat/how_to/prompts.mdx" diff --git a/docs/docs_skeleton/docs/modules/model_io/models/chat/streaming.mdx b/docs/docs_skeleton/docs/modules/model_io/models/chat/streaming.mdx index b4d74b8038d..96d4e7c2d80 100644 --- a/docs/docs_skeleton/docs/modules/model_io/models/chat/streaming.mdx +++ b/docs/docs_skeleton/docs/modules/model_io/models/chat/streaming.mdx @@ -1,6 +1,6 @@ # Streaming -Some Chat models provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated. +Some chat models provide a streaming response. This means that instead of waiting for the entire response to be returned, you can start processing it as soon as it's available. This is useful if you want to display the response to the user as it's being generated, or if you want to process the response as it's being generated. 
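As a minimal sketch of the streaming behaviour described above (assuming the OpenAI chat integration, the `openai` package, and an `OPENAI_API_KEY` set in the environment), a chat model can be constructed with `streaming=True` and a callback handler that emits tokens as they arrive:

```python
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage

# Stream tokens to stdout as they are generated instead of waiting for the full reply.
chat = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
)

chat([HumanMessage(content="Write me a short song about sparkling water.")])
```

Each new token is handed to the callback's `on_llm_new_token` hook, so the response can be displayed or processed incrementally.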
import StreamingChatModel from "@snippets/modules/model_io/models/chat/how_to/streaming.mdx" diff --git a/docs/docs_skeleton/docs/modules/model_io/models/index.mdx b/docs/docs_skeleton/docs/modules/model_io/models/index.mdx index 0a97352ac51..287f4f552b8 100644 --- a/docs/docs_skeleton/docs/modules/model_io/models/index.mdx +++ b/docs/docs_skeleton/docs/modules/model_io/models/index.mdx @@ -8,16 +8,16 @@ LangChain provides interfaces and integrations for two types of models: - [LLMs](/docs/modules/model_io/models/llms/): Models that take a text string as input and return a text string - [Chat models](/docs/modules/model_io/models/chat/): Models that are backed by a language model but take a list of Chat Messages as input and return a Chat Message -## LLMs vs Chat Models +## LLMs vs chat models -LLMs and Chat Models are subtly but importantly different. LLMs in LangChain refer to pure text completion models. +LLMs and chat models are subtly but importantly different. LLMs in LangChain refer to pure text completion models. The APIs they wrap take a string prompt as input and output a string completion. OpenAI's GPT-3 is implemented as an LLM. Chat models are often backed by LLMs but tuned specifically for having conversations. -And, crucially, their provider APIs expose a different interface than pure text completion models. Instead of a single string, +And, crucially, their provider APIs use a different interface than pure text completion models. Instead of a single string, they take a list of chat messages as input. Usually these messages are labeled with the speaker (usually one of "System", -"AI", and "Human"). And they return a ("AI") chat message as output. GPT-4 and Anthropic's Claude are both implemented as Chat Models. +"AI", and "Human"). And they return an AI chat message as output. GPT-4 and Anthropic's Claude are both implemented as chat models. -To make it possible to swap LLMs and Chat Models, both implement the Base Language Model interface. This exposes common +To make it possible to swap LLMs and chat models, both implement the Base Language Model interface. This includes common methods "predict", which takes a string and returns a string, and "predict messages", which takes messages and returns a message. -If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for Chat Models), +If you are using a specific model it's recommended you use the methods specific to that model class (i.e., "predict" for LLMs and "predict messages" for chat models), but if you're creating an application that should work with different types of models the shared interface can be helpful. 
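A rough sketch of the distinction, assuming the OpenAI integrations and an `OPENAI_API_KEY` in the environment (the model names are the library defaults, not pinned here):

```python
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

llm = OpenAI()        # text completion model: string in, string out
chat = ChatOpenAI()   # chat model: messages in, message out

# The shared Base Language Model interface: both accept a plain string via `predict`.
llm.predict("What is the capital of France?")
chat.predict("What is the capital of France?")

# The chat-native interface takes role-labeled messages and returns an AI message.
messages = [
    SystemMessage(content="You are a terse assistant."),
    HumanMessage(content="What is the capital of France?"),
]
chat.predict_messages(messages)
```

If an application needs to work with either kind of model, sticking to `predict` and `predict_messages` keeps the two interchangeable.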
diff --git a/docs/extras/_templates/integration.mdx b/docs/extras/_templates/integration.mdx index 0263992039f..5e4b994ebfd 100644 --- a/docs/extras/_templates/integration.mdx +++ b/docs/extras/_templates/integration.mdx @@ -47,7 +47,7 @@ from langchain.embeddings import integration_class_REPLACE_ME ``` -## Chat Models +## Chat models See a [usage example](/docs/integrations/chat/INCLUDE_REAL_NAME) diff --git a/docs/extras/integrations/callbacks/context.ipynb b/docs/extras/integrations/callbacks/context.ipynb index 50e422562e5..bf05268f6b7 100644 --- a/docs/extras/integrations/callbacks/context.ipynb +++ b/docs/extras/integrations/callbacks/context.ipynb @@ -93,7 +93,7 @@ "metadata": {}, "source": [ "## Usage\n", - "### Using the Context callback within a Chat Model\n", + "### Using the Context callback within a chat model\n", "\n", "The Context callback handler can be used to directly record transcripts between users and AI assistants.\n", "\n", diff --git a/docs/extras/integrations/providers/arangodb.mdx b/docs/extras/integrations/providers/arangodb.mdx index dcf0378a122..624ae82b2a5 100644 --- a/docs/extras/integrations/providers/arangodb.mdx +++ b/docs/extras/integrations/providers/arangodb.mdx @@ -11,7 +11,7 @@ pip install python-arango ## Graph QA Chain -Connect your ArangoDB Database with a Chat Model to get insights on your data. +Connect your ArangoDB Database with a chat model to get insights on your data. See the notebook example [here](/docs/use_cases/more/graph/graph_arangodb_qa.html). diff --git a/docs/extras/integrations/providers/datadog.mdx b/docs/extras/integrations/providers/datadog.mdx index aee4d5e24b2..fd25e3d47cd 100644 --- a/docs/extras/integrations/providers/datadog.mdx +++ b/docs/extras/integrations/providers/datadog.mdx @@ -4,12 +4,12 @@ Key features of the ddtrace integration for LangChain: - Traces: Capture LangChain requests, parameters, prompt-completions, and help visualize LangChain operations. -- Metrics: Capture LangChain request latency, errors, and token/cost usage (for OpenAI LLMs and Chat Models). +- Metrics: Capture LangChain request latency, errors, and token/cost usage (for OpenAI LLMs and chat models). - Logs: Store prompt completion data for each LangChain operation. - Dashboard: Combine metrics, logs, and trace data into a single plane to monitor LangChain requests. - Monitors: Provide alerts in response to spikes in LangChain request latency or error rate. -Note: The ddtrace LangChain integration currently provides tracing for LLMs, Chat Models, Text Embedding Models, Chains, and Vectorstores. +Note: The ddtrace LangChain integration currently provides tracing for LLMs, chat models, Text Embedding Models, Chains, and Vectorstores. ## Installation and Setup diff --git a/docs/extras/modules/model_io/models/chat/human_input_chat_model.ipynb b/docs/extras/modules/model_io/models/chat/human_input_chat_model.ipynb index 3b5ce277138..677d45af8cb 100644 --- a/docs/extras/modules/model_io/models/chat/human_input_chat_model.ipynb +++ b/docs/extras/modules/model_io/models/chat/human_input_chat_model.ipynb @@ -5,9 +5,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Human input Chat Model\n", + "# Human input chat model\n", "\n", - "Along with HumanInputLLM, LangChain also provides a pseudo Chat Model class that can be used for testing, debugging, or educational purposes. 
This allows you to mock out calls to the Chat Model and simulate how a human would respond if they received the messages.\n", + "Along with HumanInputLLM, LangChain also provides a pseudo chat model class that can be used for testing, debugging, or educational purposes. This allows you to mock out calls to the chat model and simulate how a human would respond if they received the messages.\n", "\n", "In this notebook, we go over how to use this.\n", "\n", diff --git a/docs/extras/modules/model_io/models/llms/custom_llm.ipynb b/docs/extras/modules/model_io/models/llms/custom_llm.ipynb index 3ff99dc80dd..3fa76326fb8 100644 --- a/docs/extras/modules/model_io/models/llms/custom_llm.ipynb +++ b/docs/extras/modules/model_io/models/llms/custom_llm.ipynb @@ -11,13 +11,13 @@ "\n", "There is only one required thing that a custom LLM needs to implement:\n", "\n", - "1. A `_call` method that takes in a string, some optional stop words, and returns a string\n", + "- A `_call` method that takes in a string, some optional stop words, and returns a string\n", "\n", "There is a second optional thing it can implement:\n", "\n", - "1. An `_identifying_params` property that is used to help with printing of this class. Should return a dictionary.\n", + "- An `_identifying_params` property that is used to help with printing of this class. Should return a dictionary.\n", "\n", - "Let's implement a very simple custom LLM that just returns the first N characters of the input." + "Let's implement a very simple custom LLM that just returns the first n characters of the input." ] }, { diff --git a/docs/extras/modules/model_io/models/llms/fake_llm.ipynb b/docs/extras/modules/model_io/models/llms/fake_llm.ipynb index 99bc1d84809..61e5fc3b4ac 100644 --- a/docs/extras/modules/model_io/models/llms/fake_llm.ipynb +++ b/docs/extras/modules/model_io/models/llms/fake_llm.ipynb @@ -6,7 +6,7 @@ "metadata": {}, "source": [ "# Fake LLM\n", - "We expose a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.\n", + "LangChain provides a fake LLM class that can be used for testing. This allows you to mock out calls to the LLM and simulate what would happen if the LLM responded in a certain way.\n", "\n", "In this notebook we go over how to use this.\n", "\n", diff --git a/docs/extras/modules/model_io/output_parsers/datetime.ipynb b/docs/extras/modules/model_io/output_parsers/datetime.ipynb index 1ec0e1eb6d9..187cc473831 100644 --- a/docs/extras/modules/model_io/output_parsers/datetime.ipynb +++ b/docs/extras/modules/model_io/output_parsers/datetime.ipynb @@ -7,7 +7,7 @@ "source": [ "# Datetime parser\n", "\n", - "This OutputParser shows out to parse LLM output into datetime format." + "This OutputParser can be used to parse LLM output into datetime format." ] }, { diff --git a/docs/extras/modules/model_io/output_parsers/enum.ipynb b/docs/extras/modules/model_io/output_parsers/enum.ipynb index 7d1285243ee..02dd890623a 100644 --- a/docs/extras/modules/model_io/output_parsers/enum.ipynb +++ b/docs/extras/modules/model_io/output_parsers/enum.ipynb @@ -7,7 +7,7 @@ "source": [ "# Enum parser\n", "\n", - "This notebook shows how to use an Enum output parser" + "This notebook shows how to use an Enum output parser." 
] }, { diff --git a/docs/extras/modules/model_io/output_parsers/pydantic.ipynb b/docs/extras/modules/model_io/output_parsers/pydantic.ipynb index 05a5bece677..14137fc2d66 100644 --- a/docs/extras/modules/model_io/output_parsers/pydantic.ipynb +++ b/docs/extras/modules/model_io/output_parsers/pydantic.ipynb @@ -10,7 +10,7 @@ "\n", "Keep in mind that large language models are leaky abstractions! You'll have to use an LLM with sufficient capacity to generate well-formed JSON. In the OpenAI family, DaVinci can do reliably but Curie's ability already drops off dramatically. \n", "\n", - "Use Pydantic to declare your data model. Pydantic's BaseModel like a Python dataclass, but with actual type checking + coercion." + "Use Pydantic to declare your data model. Pydantic's BaseModel is like a Python dataclass, but with actual type checking + coercion." ] }, { diff --git a/docs/extras/modules/model_io/output_parsers/retry.ipynb b/docs/extras/modules/model_io/output_parsers/retry.ipynb index 4d5a9218d62..383b3eb0691 100644 --- a/docs/extras/modules/model_io/output_parsers/retry.ipynb +++ b/docs/extras/modules/model_io/output_parsers/retry.ipynb @@ -7,7 +7,7 @@ "source": [ "# Retry parser\n", "\n", - "While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it can't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example." + "While in some cases it is possible to fix any parsing mistakes by only looking at the output, in other cases it isn't. An example of this is when the output is not just in the incorrect format, but is partially complete. Consider the below example." ] }, { @@ -93,7 +93,7 @@ "id": "25631465", "metadata": {}, "source": [ - "If we try to parse this response as is, we will get an error" + "If we try to parse this response as is, we will get an error:" ] }, { diff --git a/docs/extras/modules/model_io/prompts/example_selectors/custom_example_selector.md b/docs/extras/modules/model_io/prompts/example_selectors/custom_example_selector.md index d9bff155979..e4ada5c03c5 100644 --- a/docs/extras/modules/model_io/prompts/example_selectors/custom_example_selector.md +++ b/docs/extras/modules/model_io/prompts/example_selectors/custom_example_selector.md @@ -1,6 +1,6 @@ # Custom example selector -In this tutorial, we'll create a custom example selector that selects every alternate example from a given list of examples. +In this tutorial, we'll create a custom example selector that selects examples randomly from a given list of examples. An `ExampleSelector` must implement two methods: @@ -9,9 +9,8 @@ An `ExampleSelector` must implement two methods: Let's implement a custom `ExampleSelector` that just selects two examples at random. -:::{note} +**Note:** Take a look at the current set of example selector implementations supported in LangChain [here](/docs/modules/model_io/prompts/example_selectors/). -::: @@ -52,7 +51,6 @@ examples = [ # Initialize example selector. 
example_selector = CustomExampleSelector(examples) - # Select examples example_selector.select_examples({"foo": "foo"}) # -> array([{'foo': '2'}, {'foo': '3'}], dtype=object) diff --git a/docs/extras/modules/model_io/prompts/example_selectors/mmr.ipynb b/docs/extras/modules/model_io/prompts/example_selectors/mmr.ipynb index 137d884f3e3..b3f01b65131 100644 --- a/docs/extras/modules/model_io/prompts/example_selectors/mmr.ipynb +++ b/docs/extras/modules/model_io/prompts/example_selectors/mmr.ipynb @@ -30,7 +30,7 @@ " template=\"Input: {input}\\nOutput: {output}\",\n", ")\n", "\n", - "# These are a lot of examples of a pretend task of creating antonyms.\n", + "# Examples of a pretend task of creating antonyms.\n", "examples = [\n", " {\"input\": \"happy\", \"output\": \"sad\"},\n", " {\"input\": \"tall\", \"output\": \"short\"},\n", @@ -48,13 +48,13 @@ "outputs": [], "source": [ "example_selector = MaxMarginalRelevanceExampleSelector.from_examples(\n", - " # This is the list of examples available to select from.\n", + " # The list of examples available to select from.\n", " examples,\n", - " # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n", + " # The embedding class used to produce embeddings which are used to measure semantic similarity.\n", " OpenAIEmbeddings(),\n", - " # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n", + " # The VectorStore class that is used to store the embeddings and do a similarity search over.\n", " FAISS,\n", - " # This is the number of examples to produce.\n", + " # The number of examples to produce.\n", " k=2,\n", ")\n", "mmr_prompt = FewShotPromptTemplate(\n", @@ -122,13 +122,13 @@ "# Let's compare this to what we would just get if we went solely off of similarity,\n", "# by using SemanticSimilarityExampleSelector instead of MaxMarginalRelevanceExampleSelector.\n", "example_selector = SemanticSimilarityExampleSelector.from_examples(\n", - " # This is the list of examples available to select from.\n", + " # The list of examples available to select from.\n", " examples,\n", - " # This is the embedding class used to produce embeddings which are used to measure semantic similarity.\n", + " # The embedding class used to produce embeddings which are used to measure semantic similarity.\n", " OpenAIEmbeddings(),\n", - " # This is the VectorStore class that is used to store the embeddings and do a similarity search over.\n", + " # The VectorStore class that is used to store the embeddings and do a similarity search over.\n", " FAISS,\n", - " # This is the number of examples to produce.\n", + " # The number of examples to produce.\n", " k=2,\n", ")\n", "similar_prompt = FewShotPromptTemplate(\n", diff --git a/docs/extras/modules/model_io/prompts/example_selectors/ngram_overlap.ipynb b/docs/extras/modules/model_io/prompts/example_selectors/ngram_overlap.ipynb index 4eef0536905..9876603a307 100644 --- a/docs/extras/modules/model_io/prompts/example_selectors/ngram_overlap.ipynb +++ b/docs/extras/modules/model_io/prompts/example_selectors/ngram_overlap.ipynb @@ -28,7 +28,7 @@ " template=\"Input: {input}\\nOutput: {output}\",\n", ")\n", "\n", - "# These are a lot of examples of a pretend task of creating antonyms.\n", + "# Examples of a pretend task of creating antonyms.\n", "examples = [\n", " {\"input\": \"happy\", \"output\": \"sad\"},\n", " {\"input\": \"tall\", \"output\": \"short\"},\n", @@ -45,7 +45,7 @@ "metadata": {}, "outputs": [], "source": [ - "# These are 
examples of a fictional translation task.\n", + "# Examples of a fictional translation task.\n", "examples = [\n", " {\"input\": \"See Spot run.\", \"output\": \"Ver correr a Spot.\"},\n", " {\"input\": \"My dog barks.\", \"output\": \"Mi perro ladra.\"},\n", @@ -65,11 +65,11 @@ " template=\"Input: {input}\\nOutput: {output}\",\n", ")\n", "example_selector = NGramOverlapExampleSelector(\n", - " # These are the examples it has available to choose from.\n", + " # The examples it has available to choose from.\n", " examples=examples,\n", - " # This is the PromptTemplate being used to format the examples.\n", + " # The PromptTemplate being used to format the examples.\n", " example_prompt=example_prompt,\n", - " # This is the threshold, at which selector stops.\n", + " # The threshold, at which selector stops.\n", " # It is set to -1.0 by default.\n", " threshold=-1.0,\n", " # For negative threshold:\n", diff --git a/docs/extras/modules/model_io/prompts/prompt_templates/validate.mdx b/docs/extras/modules/model_io/prompts/prompt_templates/validate.mdx index e68dbd2e4b4..9a36ddaddd8 100644 --- a/docs/extras/modules/model_io/prompts/prompt_templates/validate.mdx +++ b/docs/extras/modules/model_io/prompts/prompt_templates/validate.mdx @@ -1,6 +1,6 @@ # Validate template -By default, `PromptTemplate` will validate the `template` string by checking whether the `input_variables` match the variables defined in `template`. You can disable this behavior by setting `validate_template` to `False` +By default, `PromptTemplate` will validate the `template` string by checking whether the `input_variables` match the variables defined in `template`. You can disable this behavior by setting `validate_template` to `False`. ```python template = "I am learning langchain because {reason}." diff --git a/docs/snippets/modules/model_io/models/chat/get_started.mdx b/docs/snippets/modules/model_io/models/chat/get_started.mdx index 127283bb2e7..452738d8304 100644 --- a/docs/snippets/modules/model_io/models/chat/get_started.mdx +++ b/docs/snippets/modules/model_io/models/chat/get_started.mdx @@ -19,7 +19,7 @@ from langchain.chat_models import ChatOpenAI chat = ChatOpenAI(openai_api_key="...") ``` -otherwise you can initialize without any params: +Otherwise you can initialize without any params: ```python from langchain.chat_models import ChatOpenAI @@ -101,7 +101,7 @@ result -You can recover things like token usage from this LLMResult +You can recover things like token usage from this LLMResult: ```python diff --git a/docs/snippets/modules/model_io/models/chat/how_to/prompts.mdx b/docs/snippets/modules/model_io/models/chat/how_to/prompts.mdx index a02c7b4e246..da0df2dbc9a 100644 --- a/docs/snippets/modules/model_io/models/chat/how_to/prompts.mdx +++ b/docs/snippets/modules/model_io/models/chat/how_to/prompts.mdx @@ -1,6 +1,6 @@ You can make use of templating by using a `MessagePromptTemplate`. You can build a `ChatPromptTemplate` from one or more `MessagePromptTemplates`. You can use `ChatPromptTemplate`'s `format_prompt` -- this returns a `PromptValue`, which you can convert to a string or Message object, depending on whether you want to use the formatted value as input to an llm or chat model. -For convenience, there is a `from_template` method exposed on the template. If you were to use this template, this is what it would look like: +For convenience, there is a `from_template` method defined on the template. 
If you were to use this template, this is what it would look like: ```python diff --git a/docs/snippets/modules/model_io/models/llms/get_started.mdx b/docs/snippets/modules/model_io/models/llms/get_started.mdx index 54d6a96b930..76c79589f02 100644 --- a/docs/snippets/modules/model_io/models/llms/get_started.mdx +++ b/docs/snippets/modules/model_io/models/llms/get_started.mdx @@ -90,7 +90,7 @@ llm_result.generations[-1] -You can also access provider specific information that is returned. This information is NOT standardized across providers. +You can also access provider specific information that is returned. This information is **not** standardized across providers. ```python diff --git a/docs/snippets/modules/model_io/models/llms/how_to/llm_caching.mdx b/docs/snippets/modules/model_io/models/llms/how_to/llm_caching.mdx index 5bb436ff82d..2bf7b2a4a0d 100644 --- a/docs/snippets/modules/model_io/models/llms/how_to/llm_caching.mdx +++ b/docs/snippets/modules/model_io/models/llms/how_to/llm_caching.mdx @@ -97,8 +97,8 @@ llm.predict("Tell me a joke") -## Optional Caching in Chains -You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, its often easier to construct the chain first, and then edit the LLM afterwards. +## Optional caching in chains +You can also turn off caching for particular nodes in chains. Note that because of certain interfaces, it's often easier to construct the chain first, and then edit the LLM afterwards. As an example, we will load a summarizer map-reduce chain. We will cache results for the map-step, but then not freeze it for the combine step. diff --git a/docs/snippets/modules/model_io/prompts/example_selectors/get_started.mdx b/docs/snippets/modules/model_io/prompts/example_selectors/get_started.mdx index 0444462e1a1..7020fa45003 100644 --- a/docs/snippets/modules/model_io/prompts/example_selectors/get_started.mdx +++ b/docs/snippets/modules/model_io/prompts/example_selectors/get_started.mdx @@ -7,4 +7,4 @@ class BaseExampleSelector(ABC): """Select which examples to use based on the inputs.""" ``` -The only method it needs to expose is a ``select_examples`` method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected. Let's take a look at some below. +The only method it needs to define is a ``select_examples`` method. This takes in the input variables and then returns a list of examples. It is up to each specific implementation as to how those examples are selected. diff --git a/docs/snippets/modules/model_io/prompts/example_selectors/length_based.mdx b/docs/snippets/modules/model_io/prompts/example_selectors/length_based.mdx index 9c0e70bdd7d..8c76ccbf262 100644 --- a/docs/snippets/modules/model_io/prompts/example_selectors/length_based.mdx +++ b/docs/snippets/modules/model_io/prompts/example_selectors/length_based.mdx @@ -4,7 +4,7 @@ from langchain.prompts import FewShotPromptTemplate from langchain.prompts.example_selector import LengthBasedExampleSelector -# These are a lot of examples of a pretend task of creating antonyms. +# Examples of a pretend task of creating antonyms. examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, @@ -17,14 +17,14 @@ example_prompt = PromptTemplate( template="Input: {input}\nOutput: {output}", ) example_selector = LengthBasedExampleSelector( - # These are the examples it has available to choose from. + # The examples it has available to choose from. 
examples=examples, - # This is the PromptTemplate being used to format the examples. + # The PromptTemplate being used to format the examples. example_prompt=example_prompt, - # This is the maximum length that the formatted examples should be. + # The maximum length that the formatted examples should be. # Length is measured by the get_text_length function below. max_length=25, - # This is the function used to get the length of a string, which is used + # The function used to get the length of a string, which is used # to determine which examples to include. It is commented out because # it is provided as a default value if none is specified. # get_text_length: Callable[[str], int] = lambda x: len(re.split("\n| ", x)) diff --git a/docs/snippets/modules/model_io/prompts/example_selectors/similarity.mdx b/docs/snippets/modules/model_io/prompts/example_selectors/similarity.mdx index f13916be74b..87384d54f1f 100644 --- a/docs/snippets/modules/model_io/prompts/example_selectors/similarity.mdx +++ b/docs/snippets/modules/model_io/prompts/example_selectors/similarity.mdx @@ -9,7 +9,7 @@ example_prompt = PromptTemplate( template="Input: {input}\nOutput: {output}", ) -# These are a lot of examples of a pretend task of creating antonyms. +# Examples of a pretend task of creating antonyms. examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, @@ -22,13 +22,13 @@ examples = [ ```python example_selector = SemanticSimilarityExampleSelector.from_examples( - # This is the list of examples available to select from. + # The list of examples available to select from. examples, - # This is the embedding class used to produce embeddings which are used to measure semantic similarity. + # The embedding class used to produce embeddings which are used to measure semantic similarity. OpenAIEmbeddings(), - # This is the VectorStore class that is used to store the embeddings and do a similarity search over. + # The VectorStore class that is used to store the embeddings and do a similarity search over. Chroma, - # This is the number of examples to produce. + # The number of examples to produce. k=1 ) similar_prompt = FewShotPromptTemplate( diff --git a/docs/snippets/modules/model_io/prompts/prompt_templates/get_started.mdx b/docs/snippets/modules/model_io/prompts/prompt_templates/get_started.mdx index dddaf86f41e..25d62563ced 100644 --- a/docs/snippets/modules/model_io/prompts/prompt_templates/get_started.mdx +++ b/docs/snippets/modules/model_io/prompts/prompt_templates/get_started.mdx @@ -55,7 +55,7 @@ For more information, see [Custom Prompt Templates](./custom_prompt_template.htm ## Chat prompt template -The prompt to [Chat Models](../models/chat) is a list of chat messages. +The prompt to [chat models](../models/chat) is a list of chat messages. Each chat message is associated with content, and an additional parameter called `role`. For example, in the OpenAI [Chat Completions API](https://platform.openai.com/docs/guides/chat/introduction), a chat message can be associated with an AI assistant, a human or a system role.
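A short sketch of how such a role-tagged prompt can be assembled with the chat prompt classes (the translation task and variable names here are illustrative only):

```python
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

system_template = "You are a helpful assistant that translates {input_language} to {output_language}."
system_message_prompt = SystemMessagePromptTemplate.from_template(system_template)
human_message_prompt = HumanMessagePromptTemplate.from_template("{text}")

chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# format_prompt returns a PromptValue; to_messages() yields the role-tagged chat messages.
chat_prompt.format_prompt(
    input_language="English", output_language="French", text="I love programming."
).to_messages()
```

The resulting list contains a system message and a human message, which can be passed directly to a chat model.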