Restructure docs (#11620)
.github/workflows/doc_lint.yml
@@ -19,4 +19,4 @@ jobs:
       run: |
         # We should not encourage imports directly from main init file
         # Expect for hub
-        git grep 'from langchain import' docs/{extras,docs_skeleton,snippets} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
+        git grep 'from langchain import' docs/{docs,snippets} | grep -vE 'from langchain import (hub)' && exit 1 || exit 0
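The workflow command above relies on inverted exit status: `git grep` succeeding means a forbidden import exists, so `&& exit 1 || exit 0` fails the job exactly then, while the inner `grep -vE` allow-lists `from langchain import hub`. A minimal sketch of the same filter-then-fail logic in plain Python, with made-up grep output lines standing in for real repository files:

```python
import re

# Hypothetical `git grep` output lines -- file names are made up.
grep_output = [
    "docs/a.md:from langchain import hub",   # allow-listed
    "docs/b.md:from langchain import llms",  # violation
]

# Mirror of `grep -vE 'from langchain import (hub)'`: drop allow-listed hits.
violations = [line for line in grep_output
              if not re.search(r"from langchain import (hub)", line)]

# Mirror of `&& exit 1 || exit 0`: fail exactly when anything survives the filter.
exit_code = 1 if violations else 0
print(exit_code)  # 1
```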
.gitignore
@@ -174,6 +174,6 @@ docs/api_reference/*/
 !docs/api_reference/_static/
 !docs/api_reference/templates/
 !docs/api_reference/themes/
-docs/docs_skeleton/build
-docs/docs_skeleton/node_modules
-docs/docs_skeleton/yarn.lock
+docs/docs/build
+docs/docs/node_modules
+docs/docs/yarn.lock
.gitmodules
@@ -1,4 +0,0 @@
-[submodule "docs/_docs_skeleton"]
-	path = docs/_docs_skeleton
-	url = https://github.com/langchain-ai/langchain-shared-docs
-	branch = main
Makefile
@@ -18,7 +18,7 @@ docs_clean:
 	rm -r docs/_dist
 
 docs_linkcheck:
-	poetry run linkchecker docs/_dist/docs_skeleton/ --ignore-url node_modules
+	poetry run linkchecker docs/_dist/docs/ --ignore-url node_modules
 
 api_docs_build:
 	poetry run python docs/api_reference/create_api_rst.py
@@ -8,10 +8,10 @@ set -o xtrace
 SCRIPT_DIR="$(cd "$(dirname "$0")"; pwd)"
 cd "${SCRIPT_DIR}"
 
-mkdir -p _dist/docs_skeleton
-cp -r {docs_skeleton,snippets} _dist
-cd _dist/docs_skeleton
-poetry run nbdoc_build
-poetry run python generate_api_reference_links.py
+mkdir -p ../_dist
+cp -r . ../_dist
+cd ../_dist
+poetry run nbdoc_build --srcdir docs
+poetry run python scripts/generate_api_reference_links.py
 yarn install
 yarn start
(14 image files moved; before/after dimensions and sizes identical.)
docs/docs/guides/evaluation/comparison/custom.ipynb
@@ -6,7 +6,7 @@
    "metadata": {},
    "source": [
     "# Custom Pairwise Evaluator\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/comparison/custom.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/custom.ipynb)\n",
     "\n",
     "You can make your own pairwise string evaluators by inheriting from `PairwiseStringEvaluator` class and overwriting the `_evaluate_string_pairs` method (and the `_aevaluate_string_pairs` method if you want to use the evaluator asynchronously).\n",
     "\n",
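The notebook cell above describes inheriting from `PairwiseStringEvaluator` and overriding `_evaluate_string_pairs`. A dependency-free sketch of that pattern, using a hypothetical stand-in base class rather than the real `langchain.evaluation` import, and a deliberately toy shorter-is-better criterion:

```python
from abc import ABC, abstractmethod

class PairwiseStringEvaluator(ABC):
    """Stand-in for the langchain base class -- illustration only."""

    @abstractmethod
    def _evaluate_string_pairs(self, *, prediction: str, prediction_b: str, **kwargs) -> dict:
        ...

    def evaluate_string_pairs(self, *, prediction: str, prediction_b: str, **kwargs) -> dict:
        # Public entry point delegates to the subclass hook.
        return self._evaluate_string_pairs(prediction=prediction, prediction_b=prediction_b, **kwargs)

class LengthComparisonEvaluator(PairwiseStringEvaluator):
    """Toy criterion: prefer the shorter of the two predictions."""

    def _evaluate_string_pairs(self, *, prediction: str, prediction_b: str, **kwargs) -> dict:
        shorter = "A" if len(prediction) <= len(prediction_b) else "B"
        return {"value": shorter, "score": 1 if shorter == "A" else 0}

evaluator = LengthComparisonEvaluator()
result = evaluator.evaluate_string_pairs(prediction="short", prediction_b="a much longer answer")
print(result)  # {'value': 'A', 'score': 1}
```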
docs/docs/guides/evaluation/comparison/pairwise_embedding_distance.ipynb
@@ -8,7 +8,7 @@
    },
    "source": [
     "# Pairwise Embedding Distance \n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/comparison/pairwise_embedding_distance.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/pairwise_embedding_distance.ipynb)\n",
     "\n",
     "One way to measure the similarity (or dissimilarity) between two predictions on a shared or similar input is to embed the predictions and compute a vector distance between the two embeddings.<a name=\"cite_ref-1\"></a>[<sup>[1]</sup>](#cite_note-1)\n",
     "\n",
docs/docs/guides/evaluation/comparison/pairwise_string.ipynb
@@ -6,7 +6,7 @@
    "metadata": {},
    "source": [
     "# Pairwise String Comparison\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/comparison/pairwise_string.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/pairwise_string.ipynb)\n",
     "\n",
     "Often you will want to compare predictions of an LLM, Chain, or Agent for a given input. The `StringComparison` evaluators facilitate this so you can answer questions like:\n",
     "\n",
docs/docs/guides/evaluation/examples/comparisons.ipynb
@@ -5,7 +5,7 @@
    "metadata": {},
    "source": [
     "# Comparing Chain Outputs\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/examples/comparisons.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/examples/comparisons.ipynb)\n",
     "\n",
     "Suppose you have two different prompts (or LLMs). How do you know which will generate \"better\" results?\n",
     "\n",
docs/docs/guides/evaluation/string/criteria_eval_chain.ipynb
@@ -6,7 +6,7 @@
    "metadata": {},
    "source": [
     "# Criteria Evaluation\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/string/criteria_eval_chain.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/criteria_eval_chain.ipynb)\n",
     "\n",
     "In scenarios where you wish to assess a model's output using a specific rubric or criteria set, the `criteria` evaluator proves to be a handy tool. It allows you to verify if an LLM or Chain's output complies with a defined set of criteria.\n",
     "\n",
docs/docs/guides/evaluation/string/custom.ipynb
@@ -6,7 +6,7 @@
    "metadata": {},
    "source": [
     "# Custom String Evaluator\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/string/custom.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/custom.ipynb)\n",
     "\n",
     "You can make your own custom string evaluators by inheriting from the `StringEvaluator` class and implementing the `_evaluate_strings` (and `_aevaluate_strings` for async support) methods.\n",
     "\n",
docs/docs/guides/evaluation/string/embedding_distance.ipynb
@@ -7,7 +7,7 @@
    },
    "source": [
     "# Embedding Distance\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/string/embedding_distance.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/embedding_distance.ipynb)\n",
     "\n",
     "To measure semantic similarity (or dissimilarity) between a prediction and a reference label string, you could use a vector vector distance metric the two embedded representations using the `embedding_distance` evaluator.<a name=\"cite_ref-1\"></a>[<sup>[1]</sup>](#cite_note-1)\n",
     "\n",
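The `embedding_distance` evaluator described in the cell above reduces to computing a vector distance between two embedded strings. A sketch of the cosine-distance step in plain Python, with hand-made toy vectors standing in for real model embeddings:

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity: ~0 for same direction, 1 for orthogonal, up to 2 for opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

# Toy "embeddings" -- in practice these come from an embedding model.
pred = [1.0, 0.0, 1.0]
ref = [1.0, 0.0, 1.0]
print(round(cosine_distance(pred, ref), 6))               # 0.0 (same direction)
print(round(cosine_distance([1.0, 0.0], [0.0, 1.0]), 6))  # 1.0 (orthogonal)
```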
docs/docs/guides/evaluation/string/exact_match.ipynb
@@ -6,7 +6,7 @@
    "metadata": {},
    "source": [
     "# Exact Match\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/string/exact_match.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/exact_match.ipynb)\n",
     "\n",
     "Probably the simplest ways to evaluate an LLM or runnable's string output against a reference label is by a simple string equivalence.\n",
     "\n",
docs/docs/guides/evaluation/string/regex_match.ipynb
@@ -6,7 +6,7 @@
    "metadata": {},
    "source": [
     "# Regex Match\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/string/regex_match.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/regex_match.ipynb)\n",
     "\n",
     "To evaluate chain or runnable string predictions against a custom regex, you can use the `regex_match` evaluator."
    ]
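The `regex_match` evaluator referenced above boils down to scoring whether a pattern matches the prediction. A minimal illustrative version (the function name and return shape here are assumptions for the sketch, not the library's API):

```python
import re

def regex_match_score(prediction: str, pattern: str) -> dict:
    """Score 1 if the pattern matches anywhere in the prediction, else 0."""
    return {"score": int(re.search(pattern, prediction) is not None)}

# Check that a prediction contains an ISO-formatted date.
print(regex_match_score("Delivery on 2024-01-05", r"\d{4}-\d{2}-\d{2}"))  # {'score': 1}
print(regex_match_score("Delivery next week", r"\d{4}-\d{2}-\d{2}"))      # {'score': 0}
```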
docs/docs/guides/evaluation/string/string_distance.ipynb
@@ -6,7 +6,7 @@
    "metadata": {},
    "source": [
     "# String Distance\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/string/string_distance.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/string_distance.ipynb)\n",
     "\n",
     "One of the simplest ways to compare an LLM or chain's string output against a reference label is by using string distance measurements such as Levenshtein or postfix distance. This can be used alongside approximate/fuzzy matching criteria for very basic unit testing.\n",
     "\n",
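The string-distance evaluators mentioned above rest on edit-distance metrics such as Levenshtein. A self-contained sketch of the classic dynamic-programming computation:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or
    substitutions needed to turn a into b (two-row DP)."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                # delete ca
                curr[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),   # substitute (free if equal)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```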
docs/docs/guides/evaluation/trajectory/custom.ipynb
@@ -6,7 +6,7 @@
    "metadata": {},
    "source": [
     "# Custom Trajectory Evaluator\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/trajectory/custom.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/trajectory/custom.ipynb)\n",
     "\n",
     "You can make your own custom trajectory evaluators by inheriting from the [AgentTrajectoryEvaluator](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.AgentTrajectoryEvaluator.html#langchain.evaluation.schema.AgentTrajectoryEvaluator) class and overwriting the `_evaluate_agent_trajectory` (and `_aevaluate_agent_action`) method.\n",
     "\n",
docs/docs/guides/evaluation/trajectory/trajectory_eval.ipynb
@@ -8,7 +8,7 @@
    },
    "source": [
     "# Agent Trajectory\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/evaluation/trajectory/trajectory_eval.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/trajectory/trajectory_eval.ipynb)\n",
     "\n",
     "Agents can be difficult to holistically evaluate due to the breadth of actions and generation they can make. We recommend using multiple evaluation techniques appropriate to your use case. One way to evaluate an agent is to look at the whole trajectory of actions taken along with their responses.\n",
     "\n",
(2 image files moved; before/after dimensions and sizes identical.)
docs/docs/guides/langsmith/walkthrough.ipynb
@@ -8,7 +8,7 @@
    },
    "source": [
     "# LangSmith Walkthrough\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/langsmith/walkthrough.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/langsmith/walkthrough.ipynb)\n",
     "\n",
     "LangChain makes it easy to prototype LLM applications and Agents. However, delivering LLM applications to production can be deceptively difficult. You will likely have to heavily customize and iterate on your prompts, chains, and other components to create a high-quality product.\n",
     "\n",
docs/docs/guides/privacy/presidio_data_anonymization/index.ipynb
@@ -6,7 +6,7 @@
    "source": [
     "# Data anonymization with Microsoft Presidio\n",
     "\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/privacy/presidio_data_anonymization/index.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/index.ipynb)\n",
     "\n",
     "## Use case\n",
     "\n",
docs/docs/guides/privacy/presidio_data_anonymization/multi_language.ipynb
@@ -6,7 +6,7 @@
    "source": [
     "# Mutli-language data anonymization with Microsoft Presidio\n",
     "\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/privacy/presidio_data_anonymization/multi_language.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/multi_language.ipynb)\n",
     "\n",
     "\n",
     "## Use case\n",
docs/docs/guides/privacy/presidio_data_anonymization/reversible.ipynb
@@ -6,7 +6,7 @@
    "source": [
     "# Reversible data anonymization with Microsoft Presidio\n",
     "\n",
-    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs_skeleton/docs/guides/privacy/presidio_data_anonymization/reversible.ipynb)\n",
+    "[](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/reversible.ipynb)\n",
     "\n",
     "\n",
     "## Use case\n",