diff --git a/docs/docs/expression_language/how_to/functions.ipynb b/docs/docs/expression_language/how_to/functions.ipynb
index 0d092a19db8..ceeb46102bc 100644
--- a/docs/docs/expression_language/how_to/functions.ipynb
+++ b/docs/docs/expression_language/how_to/functions.ipynb
@@ -1,7 +1,8 @@
 {
  "cells": [
   {
-   "cell_type": "markdown",
+   "cell_type": "raw",
+   "id": "ce0e08fd",
    "metadata": {},
    "source": [
     "---\n",
diff --git a/docs/docs/expression_language/how_to/message_history.ipynb b/docs/docs/expression_language/how_to/message_history.ipynb
index 96999ffd000..d16ead46776 100644
--- a/docs/docs/expression_language/how_to/message_history.ipynb
+++ b/docs/docs/expression_language/how_to/message_history.ipynb
@@ -10,11 +10,13 @@
     "The `RunnableWithMessageHistory` lets us add message history to certain types of chains.\n",
     "\n",
     "Specifically, it can be used for any Runnable that takes as input one of\n",
+    "\n",
     "* a sequence of `BaseMessage`\n",
     "* a dict with a key that takes a sequence of `BaseMessage`\n",
     "* a dict with a key that takes the latest message(s) as a string or sequence of `BaseMessage`, and a separate key that takes historical messages\n",
     "\n",
     "And returns as output one of\n",
+    "\n",
     "* a string that can be treated as the contents of an `AIMessage`\n",
     "* a sequence of `BaseMessage`\n",
     "* a dict with a key that contains a sequence of `BaseMessage`\n",
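For context on what the `RunnableWithMessageHistory` hunk above documents, here is a minimal sketch using the 0.0.3xx-era import paths; the in-memory `store`, the `get_session_history` helper, and the session id are illustrative assumptions rather than code from the notebook being edited:

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ChatMessageHistory
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.schema.runnable.history import RunnableWithMessageHistory

prompt = ChatPromptTemplate.from_messages([
    ("system", "You're a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])
chain = prompt | ChatOpenAI()

# Toy per-session storage; a real app would use Redis or similar.
store = {}

def get_session_history(session_id: str) -> ChatMessageHistory:
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_session_history,
    input_messages_key="input",      # dict key holding the latest message
    history_messages_key="history",  # dict key holding prior messages
)

chain_with_history.invoke(
    {"input": "Hi, I'm Bob."},
    config={"configurable": {"session_id": "abc"}},
)
```

Invoking again with the same `session_id` replays the stored messages into the `history` slot of the prompt.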
diff --git a/docs/docs/guides/evaluation/comparison/pairwise_string.ipynb b/docs/docs/guides/evaluation/comparison/pairwise_string.ipynb
index 938cdd11457..cc197bf5e61 100644
--- a/docs/docs/guides/evaluation/comparison/pairwise_string.ipynb
+++ b/docs/docs/guides/evaluation/comparison/pairwise_string.ipynb
@@ -89,6 +89,7 @@
     "- reference (str) – (Only for the labeled_pairwise_string variant) The reference response.\n",
     "\n",
     "They return a dictionary with the following values:\n",
+    "\n",
     "- value: 'A' or 'B', indicating whether `prediction` or `prediction_b` is preferred, respectively\n",
     "- score: Integer 0 or 1 mapped from the 'value', where a score of 1 would mean that the first `prediction` is preferred, and a score of 0 would mean `prediction_b` is preferred.\n",
     "- reasoning: String \"chain of thought reasoning\" from the LLM generated prior to creating the score"
@@ -159,6 +160,7 @@
     "## Defining the Criteria\n",
     "\n",
     "By default, the LLM is instructed to select the 'preferred' response based on helpfulness, relevance, correctness, and depth of thought. You can customize the criteria by passing in a `criteria` argument, where the criteria could take any of the following forms:\n",
+    "\n",
     "- [`Criteria`](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.Criteria.html#langchain.evaluation.criteria.eval_chain.Criteria) enum or its string value - to use one of the default criteria and their descriptions\n",
     "- [Constitutional principle](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.models.ConstitutionalPrinciple.html#langchain.chains.constitutional_ai.models.ConstitutionalPrinciple) - use any of the constitutional principles defined in langchain\n",
     "- Dictionary: a list of custom criteria, where the key is the name of the criteria, and the value is the description.\n",
diff --git a/docs/docs/guides/local_llms.ipynb b/docs/docs/guides/local_llms.ipynb
index 06e087b4f95..c13c24aa71e 100644
--- a/docs/docs/guides/local_llms.ipynb
+++ b/docs/docs/guides/local_llms.ipynb
@@ -249,14 +249,17 @@
     "* Meaning: Only one layer of the model will be loaded into GPU memory (1 is often sufficient).\n",
     "\n",
     "`n_batch`: number of tokens the model should process in parallel \n",
+    "\n",
     "* Value: n_batch\n",
     "* Meaning: It's recommended to choose a value between 1 and n_ctx (which in this case is set to 2048)\n",
     "\n",
-    "`n_ctx`: Token context window .\n",
+    "`n_ctx`: Token context window\n",
+    "\n",
     "* Value: 2048\n",
     "* Meaning: The model will consider a window of 2048 tokens at a time\n",
     "\n",
     "`f16_kv`: whether the model should use half-precision for the key/value cache\n",
+    "\n",
     "* Value: True\n",
     "* Meaning: The model will use half-precision, which can be more memory efficient; Metal only supports True."
    ]
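The four llama.cpp parameters described in that hunk map one-to-one onto the `LlamaCpp` wrapper's constructor. A sketch only; the model path is a placeholder for whatever weights file you have locally:

```python
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/your/model.gguf",  # placeholder local weights file
    n_gpu_layers=1,  # one layer offloaded to GPU memory (often sufficient)
    n_batch=512,     # tokens processed in parallel; keep between 1 and n_ctx
    n_ctx=2048,      # token context window
    f16_kv=True,     # half-precision key/value cache; Metal requires True
)
llm("Explain the difference between n_batch and n_ctx in one sentence.")
```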
diff --git a/docs/docs/integrations/callbacks/sagemaker_tracking.ipynb b/docs/docs/integrations/callbacks/sagemaker_tracking.ipynb
index 215e4eee840..070b1d7cabf 100644
--- a/docs/docs/integrations/callbacks/sagemaker_tracking.ipynb
+++ b/docs/docs/integrations/callbacks/sagemaker_tracking.ipynb
@@ -12,6 +12,7 @@
     ">[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability of `Amazon SageMaker` that lets you organize, track, compare and evaluate ML experiments and model versions.\n",
     "\n",
     "This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into `SageMaker Experiments`. Here, we use different scenarios to showcase the capability:\n",
+    "\n",
     "* **Scenario 1**: *Single LLM* - A case where a single LLM model is used to generate output based on a given prompt.\n",
     "* **Scenario 2**: *Sequential Chain* - A case where a sequential chain of two LLM models is used.\n",
     "* **Scenario 3**: *Agent with Tools (Chain of Thought)* - A case where multiple tools (search and math) are used in addition to an LLM.\n",
@@ -50,6 +51,7 @@
    },
    "source": [
     "First, set up the required API keys\n",
+    "\n",
     "* OpenAI: https://platform.openai.com/account/api-keys (For OpenAI LLM model)\n",
     "* Google SERP API: https://serpapi.com/manage-api-key (For Google Search Tool)"
    ]
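A sketch of Scenario 1 (single LLM), assuming AWS credentials are configured; the experiment and run names are placeholders, and the closing `flush_tracker()` call is an assumption about how the handler finalizes a run's records:

```python
from langchain.callbacks import SageMakerCallbackHandler
from langchain.llms import OpenAI
from sagemaker.experiments.run import Run
from sagemaker.session import Session

# Placeholder experiment/run names; prompts and LLM hyperparameters are
# logged to SageMaker Experiments through the callback handler.
with Run(
    experiment_name="langchain-sagemaker-tracker",
    run_name="scenario-1-single-llm",
    sagemaker_session=Session(),
) as run:
    handler = SageMakerCallbackHandler(run)
    llm = OpenAI(temperature=0.9, callbacks=[handler])
    llm("Tell me a joke about experiment tracking.")
    handler.flush_tracker()  # assumed: persists any buffered callback data
```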
diff --git a/docs/docs/integrations/chat/ollama.ipynb b/docs/docs/integrations/chat/ollama.ipynb
index 9c7db896d27..911f1f30f07 100644
--- a/docs/docs/integrations/chat/ollama.ipynb
+++ b/docs/docs/integrations/chat/ollama.ipynb
@@ -43,11 +43,13 @@
     "You can easily access models in a few ways:\n",
     "\n",
     "1/ if the app is running:\n",
+    "\n",
     "* All of your local models are automatically served on `localhost:11434`\n",
     "* Select your model when setting `llm = Ollama(..., model=\"<model family>:<version>\")`\n",
     "* If you set `llm = Ollama(..., model=\"
[...]
     "* Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is `\"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5\"`\n",
     "* Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is `\"1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw\"`"
    ]
diff --git a/docs/docs/integrations/document_transformers/doctran_extract_properties.ipynb b/docs/docs/integrations/document_transformers/doctran_extract_properties.ipynb
index 597516e0657..bce5054b938 100644
--- a/docs/docs/integrations/document_transformers/doctran_extract_properties.ipynb
+++ b/docs/docs/integrations/document_transformers/doctran_extract_properties.ipynb
@@ -9,6 +9,7 @@
     "We can extract useful features of documents using the [Doctran](https://github.com/psychic-api/doctran) library, which uses OpenAI's function calling feature to extract specific metadata.\n",
     "\n",
     "Extracting metadata from documents is helpful for a variety of tasks, including:\n",
+    "\n",
     "* **Classification:** classifying documents into different categories\n",
     "* **Data mining:** Extract structured data that can be used for data analysis\n",
     "* **Style transfer:** Change the way text is written to more closely match expected user input, improving vector search results"
diff --git a/docs/docs/integrations/llms/databricks.ipynb b/docs/docs/integrations/llms/databricks.ipynb
index 4c4ba91fc5e..d9dced42548 100644
--- a/docs/docs/integrations/llms/databricks.ipynb
+++ b/docs/docs/integrations/llms/databricks.ipynb
@@ -19,6 +19,7 @@
     "\n",
     "This example notebook shows how to wrap Databricks endpoints as LLMs in LangChain.\n",
     "It supports two endpoint types:\n",
+    "\n",
     "* Serving endpoint, recommended for production and development,\n",
     "* Cluster driver proxy app, recommended for interactive development."
    ]
@@ -48,9 +49,7 @@
    "source": [
     "## Wrapping a serving endpoint: External model\n",
     "\n",
-    "Prerequisite:\n",
-    "\n",
-    "- Register an OpenAI API key as a secret:\n",
+    "Prerequisite: Register an OpenAI API key as a secret:\n",
     "\n",
     "  ```bash\n",
     "  databricks secrets create-scope <scope>\n",
@@ -159,10 +158,12 @@
     "## Wrapping a serving endpoint: Custom model\n",
     "\n",
     "Prerequisites:\n",
+    "\n",
     "* An LLM was registered and deployed to [a Databricks serving endpoint](https://docs.databricks.com/machine-learning/model-serving/index.html).\n",
     "* You have [\"Can Query\" permission](https://docs.databricks.com/security/auth-authz/access-control/serving-endpoint-acl.html) to the endpoint.\n",
     "\n",
     "The expected MLflow model signature is:\n",
+    "\n",
     " * inputs: `[{\"name\": \"prompt\", \"type\": \"string\"}, {\"name\": \"stop\", \"type\": \"list[string]\"}]`\n",
     " * outputs: `[{\"type\": \"string\"}]`\n",
     "\n",
@@ -381,12 +382,14 @@
     "## Wrapping a cluster driver proxy app\n",
     "\n",
     "Prerequisites:\n",
+    "\n",
     "* An LLM loaded on a Databricks interactive cluster in \"single user\" or \"no isolation shared\" mode.\n",
     "* A local HTTP server running on the driver node to serve the model at `\"/\"` using HTTP POST with JSON input/output.\n",
     "* It uses a port number between `[3000, 8000]` and listens to the driver IP address or simply `0.0.0.0` instead of localhost only.\n",
     "* You have \"Can Attach To\" permission to the cluster.\n",
     "\n",
     "The expected server schema (using JSON schema) is:\n",
+    "\n",
     "* inputs:\n",
     "  ```json\n",
     "  {\"type\": \"object\",\n",
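For the two Databricks endpoint types documented above, usage reduces to roughly the following; the endpoint name, cluster id, and port are placeholders, and running inside a Databricks workspace is assumed so that host and token are inferred automatically:

```python
from langchain.llms import Databricks

# Serving endpoint (production/development); "dolly" is a placeholder name.
llm = Databricks(endpoint_name="dolly")
print(llm("How are you?"))

# Cluster driver proxy app (interactive development); placeholder cluster id,
# and the port must fall in [3000, 8000] as the prerequisites above note.
llm_proxy = Databricks(cluster_id="0000-000000-xxxxxxxx", cluster_driver_port="7777")
```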
diff --git a/docs/docs/integrations/llms/ollama.ipynb b/docs/docs/integrations/llms/ollama.ipynb
index 969f99ff520..e6bd2194488 100644
--- a/docs/docs/integrations/llms/ollama.ipynb
+++ b/docs/docs/integrations/llms/ollama.ipynb
@@ -34,11 +34,13 @@
     "You can easily access models in a few ways:\n",
     "\n",
     "1/ if the app is running:\n",
+    "\n",
     "* All of your local models are automatically served on `localhost:11434`\n",
     "* Select your model when setting `llm = Ollama(..., model=\"<model family>:<version>\")`\n",
     "* If you set `llm = Ollama(..., model=\"
[...]
     "* Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is `\"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5\"`\n",
     "* Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is `\"1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw\"`\n",
     "\n",
diff --git a/docs/docs/integrations/toolkits/github.ipynb b/docs/docs/integrations/toolkits/github.ipynb
index 783b83c4840..4949baa1a4b 100644
--- a/docs/docs/integrations/toolkits/github.ipynb
+++ b/docs/docs/integrations/toolkits/github.ipynb
@@ -10,6 +10,7 @@
     "The tool is a wrapper for the [PyGitHub](https://github.com/PyGithub/PyGithub) library. \n",
     "\n",
     "## Quickstart\n",
+    "\n",
     "1. Install the pygithub library\n",
     "2. Create a Github app\n",
     "3. Set your environment variables\n",
@@ -69,6 +70,7 @@
     "### 2. Create a Github App\n",
     "\n",
     "[Follow the instructions here](https://docs.github.com/en/apps/creating-github-apps/registering-a-github-app/registering-a-github-app) to create and register a Github app. Make sure your app has the following [repository permissions:](https://docs.github.com/en/rest/overview/permissions-required-for-github-apps?apiVersion=2022-11-28)\n",
+    "\n",
     "* Commit statuses (read only)\n",
     "* Contents (read and write)\n",
     "* Issues (read and write)\n",
diff --git a/docs/docs/integrations/toolkits/gitlab.ipynb b/docs/docs/integrations/toolkits/gitlab.ipynb
index 83b40ffdd6a..0e62f90e557 100644
--- a/docs/docs/integrations/toolkits/gitlab.ipynb
+++ b/docs/docs/integrations/toolkits/gitlab.ipynb
@@ -69,6 +69,7 @@
     "### 2. Create a Gitlab personal access token\n",
     "\n",
     "[Follow the instructions here](https://docs.gitlab.com/ee/user/profile/personal_access_tokens.html) to create a Gitlab personal access token. Make sure your app has the following repository permissions:\n",
+    "\n",
     "* read_api\n",
     "* read_repository\n",
     "* write_repository\n",
diff --git a/docs/docs/integrations/toolkits/google_drive.ipynb b/docs/docs/integrations/toolkits/google_drive.ipynb
index e89c5b99fea..9832af659c3 100644
--- a/docs/docs/integrations/toolkits/google_drive.ipynb
+++ b/docs/docs/integrations/toolkits/google_drive.ipynb
@@ -38,6 +38,7 @@
    "metadata": {},
    "source": [
     "You can obtain your folder and document id from the URL:\n",
+    "\n",
     "* Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is `\"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5\"`\n",
     "* Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is `\"1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw\"`\n",
     "\n",
diff --git a/docs/docs/integrations/tools/google_drive.ipynb b/docs/docs/integrations/tools/google_drive.ipynb
index 2392dbaf746..cbafe83f605 100644
--- a/docs/docs/integrations/tools/google_drive.ipynb
+++ b/docs/docs/integrations/tools/google_drive.ipynb
@@ -38,6 +38,7 @@
    "metadata": {},
    "source": [
     "You can obtain your folder and document id from the URL:\n",
+    "\n",
     "* Folder: https://drive.google.com/drive/u/0/folders/1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5 -> folder id is `\"1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5\"`\n",
     "* Document: https://docs.google.com/document/d/1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw/edit -> document id is `\"1bfaMQ18_i56204VaQDVeAFpqEijJTgvurupdEDiaUQw\"`\n",
     "\n",
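Both Drive pages stop at extracting the ids; as an illustration of how those exact ids are typically consumed, here is a sketch using the separate `GoogleDriveLoader` document loader (not part of the diffs above), with Google OAuth credentials assumed to be configured:

```python
from langchain.document_loaders import GoogleDriveLoader

# The folder id below is the one parsed out of the URL above.
loader = GoogleDriveLoader(
    folder_id="1yucgL9WGgWZdM1TOuKkeghlPizuzMYb5",
    recursive=False,  # don't descend into subfolders
)
docs = loader.load()
print(len(docs))
```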
diff --git a/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb b/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb
index de2ca31617c..d66c1870bac 100644
--- a/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb
+++ b/docs/docs/integrations/vectorstores/mongodb_atlas.ipynb
@@ -23,6 +23,7 @@
    "metadata": {},
    "source": [
     "> Note: \n",
+    ">\n",
     ">* This feature is Generally Available and ready for production deployments.\n",
     ">* The langchain version 0.0.305 ([release notes](https://github.com/langchain-ai/langchain/releases/tag/v0.0.305)) introduces the support for $vectorSearch MQL stage, which is available with MongoDB Atlas 6.0.11 and 7.0.2. Users utilizing earlier versions of MongoDB Atlas need to pin their LangChain version to <=0.0.304\n",
     "> \n",
diff --git a/docs/docs/integrations/vectorstores/vectara.ipynb b/docs/docs/integrations/vectorstores/vectara.ipynb
index 4d752e76f90..4cb33eab63f 100644
--- a/docs/docs/integrations/vectorstores/vectara.ipynb
+++ b/docs/docs/integrations/vectorstores/vectara.ipynb
@@ -68,6 +68,7 @@
     "In this example, we assume that you've created an account and a corpus, and added your VECTARA_CUSTOMER_ID, VECTARA_CORPUS_ID and VECTARA_API_KEY (created with permissions for both indexing and query) as environment variables.\n",
     "\n",
     "The corpus has 3 fields defined as metadata for filtering:\n",
+    "\n",
     "* url: a string field containing the source URL of the document (where relevant)\n",
     "* speech: a string field containing the name of the speech\n",
     "* author: the name of the author\n",
@@ -136,6 +137,7 @@
     "To use this, we added the add_files() method (as well as from_files()). \n",
     "\n",
     "Let's see this in action. We pick two PDF documents to upload: \n",
+    "\n",
     "1. The \"I have a dream\" speech by Dr. King\n",
     "2. Churchill's \"We Shall Fight on the Beaches\" speech"
    ]
diff --git a/docs/docs/modules/chains/index.ipynb b/docs/docs/modules/chains/index.ipynb
index deb3f1f79aa..f95f2f13fa1 100644
--- a/docs/docs/modules/chains/index.ipynb
+++ b/docs/docs/modules/chains/index.ipynb
@@ -30,6 +30,7 @@
     "## LCEL\n",
     "\n",
     "The most visible part of LCEL is that it provides an intuitive and readable syntax for composition. But more importantly, it also provides first-class support for:\n",
+    "\n",
     "* [streaming](/docs/expression_language/interface#stream),\n",
     "* [async calls](/docs/expression_language/interface#async-stream),\n",
     "* [batching](/docs/expression_language/interface#batch),\n",
diff --git a/docs/docs/use_cases/chatbots.ipynb b/docs/docs/use_cases/chatbots.ipynb
index 711dedd7a44..105354c9ba0 100644
--- a/docs/docs/use_cases/chatbots.ipynb
+++ b/docs/docs/use_cases/chatbots.ipynb
@@ -203,6 +203,7 @@
     "## Memory \n",
     "\n",
     "As we mentioned above, the core component of chatbots is the memory system. One of the simplest and most commonly used forms of memory is `ConversationBufferMemory`:\n",
+    "\n",
     "* This memory allows for storing of messages in a `buffer`\n",
     "* When called in a chain, it returns all of the messages it has stored\n",
     "\n",
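A quick illustration of the two `ConversationBufferMemory` bullets in that last hunk, with the messages being arbitrary examples:

```python
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()
memory.chat_memory.add_user_message("hi!")
memory.chat_memory.add_ai_message("what's up?")

# Everything stored in the buffer comes back under the "history" key,
# e.g. {'history': "Human: hi!\nAI: what's up?"}
memory.load_memory_variables({})
```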