Compare commits

...

15 Commits

Author SHA1 Message Date
Bagatur
991b49bbea try stat 2024-01-04 14:38:54 -05:00
Bagatur
2f007425ce fmt 2024-01-04 14:22:02 -05:00
Bagatur
5f0b45f28f replace existing 2024-01-04 14:21:19 -05:00
Bagatur
376d443f39 make command 2024-01-04 14:13:21 -05:00
Bagatur
5ca10f2ff8 check in updated dates 2024-01-04 14:06:07 -05:00
Bagatur
a16c385fc9 fmt 2024-01-04 13:35:47 -05:00
Bagatur
79630f9710 fmt 2024-01-04 13:34:20 -05:00
Bagatur
1468117ccc fmt 2024-01-04 12:58:33 -05:00
Bagatur
bdd313dc9f italicize 2024-01-04 12:56:30 -05:00
Bagatur
60884471a6 fmt 2024-01-04 12:55:08 -05:00
Bagatur
04ea42c723 fmt 2024-01-04 12:45:18 -05:00
Bagatur
87df56c0c0 nit 2024-01-04 12:43:01 -05:00
Bagatur
9bbd0f8cc3 nit 2024-01-04 12:41:33 -05:00
Bagatur
3846d411ec fmt 2024-01-04 12:28:27 -05:00
Bagatur
9b0833812a docs: add last updated dates 2024-01-04 12:28:12 -05:00
1024 changed files with 2060 additions and 963 deletions

View File

@@ -20,6 +20,9 @@ docs_clean:
docs_linkcheck:
poetry run linkchecker _dist/docs/ --ignore-url node_modules
docs_last_updated:
poetry run python docs/scripts/last_updated.py
api_docs_build:
poetry run python docs/api_reference/create_api_rst.py
cd docs/api_reference && poetry run make html
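The new `docs_last_updated` target calls `docs/scripts/last_updated.py`, which this diff excerpt does not show. As a purely hypothetical sketch of what such a script could do (only the script path comes from the Makefile; the docs root, stamp placement, and git invocation below are assumptions), it might read each file's last commit date from git and insert or refresh the `*Last updated: YYYY-MM-DD*` line that the rest of this diff adds:

```python
# Hypothetical sketch only -- the real docs/scripts/last_updated.py is not shown here.
# Assumed behavior: stamp each Markdown doc with the date of its last git commit.
import re
import subprocess
from pathlib import Path
from typing import Optional

DOCS_ROOT = Path("docs/docs")  # assumed location of the Markdown sources
STAMP_RE = re.compile(r"^\*Last updated: \d{4}-\d{2}-\d{2}\*$")


def last_commit_date(path: Path) -> Optional[str]:
    """Return the committer date (YYYY-MM-DD) of the last commit touching path."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%cs", "--", str(path)],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return out or None


def stamp_file(path: Path) -> None:
    date = last_commit_date(path)
    if date is None:  # file not tracked by git yet
        return
    stamp = f"*Last updated: {date}*"
    lines = path.read_text().splitlines()
    for i, line in enumerate(lines):
        if STAMP_RE.match(line):
            lines[i] = stamp            # refresh an existing stamp
            break
        if line.startswith("# "):
            lines.insert(i + 1, stamp)  # insert just below the first H1
            break
    path.write_text("\n".join(lines) + "\n")


if __name__ == "__main__":
    for md in DOCS_ROOT.rglob("*.md"):
        stamp_file(md)
```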

View File

@@ -2,6 +2,7 @@
[comment: Use this template to create a new .md file in "docs/integrations/"]::
# Title_REPLACE_ME
*Last updated: 2024-01-02*
[comment: Only one Title/H1 is allowed!]::

View File

@@ -1,4 +1,5 @@
# Dependents
*Last updated: 2023-12-08*
Dependents stats for `langchain-ai/langchain`

View File

@@ -1,4 +1,5 @@
# Tutorials
*Last updated: 2023-12-22*
Below are links to tutorials and courses on LangChain. For written guides on common use cases for LangChain, check out the [use cases guides](/docs/use_cases).

View File

@@ -1,4 +1,5 @@
# YouTube videos
*Last updated: 2023-10-12*
⛓ icon marks a new addition [last update 2023-09-21]

View File

@@ -1,4 +1,5 @@
# Community navigator
*Last updated: 2023-12-17*
Hi! Thanks for being here. We're lucky to have a community of so many passionate developers building with LangChain; we have so much to teach and learn from each other. Community members contribute code, host meetups, write blog posts, amplify each other's work, become each other's customers and collaborators, and so much more.

View File

@@ -2,6 +2,7 @@
sidebar_position: 1
---
# Contribute Code
*Last updated: 2023-12-18*
To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) workflow.
Please do not try to push directly to this repo unless you are a maintainer.

View File

@@ -2,6 +2,7 @@
sidebar_position: 3
---
# Contribute Documentation
*Last updated: 2023-12-17*
The docs directory contains Documentation and API Reference.

View File

@@ -3,6 +3,7 @@ sidebar_position: 6
sidebar_label: FAQ
---
# Frequently Asked Questions
*Last updated: 2024-01-03*
## Pull Requests (PRs)

View File

@@ -2,6 +2,7 @@
sidebar_position: 0
---
# Welcome Contributors
*Last updated: 2024-01-03*
Hi there! Thank you for even being interested in contributing to LangChain.
As an open-source project in a rapidly developing field, we are extremely open to contributions, whether they involve new features, improved infrastructure, better documentation, or bug fixes.

View File

@@ -2,6 +2,7 @@
sidebar_position: 5
---
# Contribute Integrations
*Last updated: 2023-12-19*
To begin, make sure you have all the dependencies outlined in the guide on [Contributing Code](./code).

View File

@@ -4,6 +4,7 @@ sidebar_position: 4
---
# 📕 Package Versioning
*Last updated: 2023-12-17*
As of now, LangChain has an ad hoc release process: releases are cut with high frequency by
a maintainer and published to [PyPI](https://pypi.org/).

View File

@@ -3,6 +3,7 @@ sidebar_position: 2
---
# Testing
*Last updated: 2023-12-17*
All of our packages have unit tests and integration tests, and we favor unit tests over integration tests.

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Agents\n",
"*Last updated: 2024-01-02*\n",
"\n",
"You can pass a Runnable into an agent."
]

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Code writing\n",
"*Last updated: 2024-01-02*\n",
"\n",
"Example of how to use LCEL to write Python code."
]

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Routing by semantic similarity\n",
"*Last updated: 2024-01-02*\n",
"\n",
"With LCEL you can easily add [custom routing logic](/docs/expression_language/how_to/routing#using-a-custom-function) to your chain to dynamically determine the chain logic based on user input. All you need to do is define a function that given an input returns a `Runnable`.\n",
"\n",

View File

@@ -3,6 +3,7 @@ sidebar_position: 3
---
# Cookbook
*Last updated: 2023-12-01*
import DocCardList from "@theme/DocCardList";

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Adding memory\n",
"*Last updated: 2024-01-02*\n",
"\n",
"This shows how to add memory to an arbitrary chain. Right now, you can use the memory classes but need to hook it up manually"
]

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Adding moderation\n",
"*Last updated: 2024-01-02*\n",
"\n",
"This shows how to add in moderation (or other safeguards) around your LLM application."
]

View File

@@ -16,6 +16,8 @@
"id": "0f2bf8d3",
"metadata": {},
"source": [
"*Last updated: 2024-01-02*\n",
"\n",
"Runnables can easily be used to string together multiple Chains"
]
},

View File

@@ -16,6 +16,8 @@
"id": "9a434f2b-9405-468c-9dfd-254d456b57a6",
"metadata": {},
"source": [
"*Last updated: 2024-01-02*\n",
"\n",
"The most common and valuable composition is taking:\n",
"\n",
"``PromptTemplate`` / ``ChatPromptTemplate`` -> ``LLM`` / ``ChatModel`` -> ``OutputParser``\n",

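For readers skimming this diff, the composition named in that cell is the canonical LCEL pattern; a minimal sketch (an OpenAI API key is assumed, and the prompt text is illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

# PromptTemplate/ChatPromptTemplate -> LLM/ChatModel -> OutputParser, composed with |
prompt = ChatPromptTemplate.from_template("Tell me a short joke about {topic}")
model = ChatOpenAI()
output_parser = StrOutputParser()

chain = prompt | model | output_parser
print(chain.invoke({"topic": "ice cream"}))
```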
View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Managing prompt size\n",
"*Last updated: 2024-01-03*\n",
"\n",
"Agents dynamically call tools. The results of those tool calls are added back to the prompt, so that the agent can plan the next action. Depending on what tools are being used and how they're being called, the agent prompt can easily grow larger than the model context window.\n",
"\n",

View File

@@ -16,6 +16,8 @@
"id": "91c5ef3d",
"metadata": {},
"source": [
"*Last updated: 2024-01-02*\n",
"\n",
"Let's look at adding in a retrieval step to a prompt and LLM, which adds up to a \"retrieval-augmented generation\" chain"
]
},

View File

@@ -16,6 +16,8 @@
"id": "506e9636",
"metadata": {},
"source": [
"*Last updated: 2024-01-03*\n",
"\n",
"We can replicate our SQLDatabaseChain with Runnables."
]
},

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Using tools\n",
"*Last updated: 2024-01-02*\n",
"\n",
"You can use any Tools with Runnables easily."
]

View File

@@ -17,6 +17,8 @@
"id": "befa7fd1",
"metadata": {},
"source": [
"*Last updated: 2024-01-02*\n",
"\n",
"LCEL makes it easy to build complex chains from basic components, and supports out of the box functionality such as streaming, parallelism, and logging."
]
},

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Bind runtime args\n",
"*Last updated: 2024-01-02*\n",
"\n",
"Sometimes we want to invoke a Runnable within a Runnable sequence with constant arguments that are not part of the output of the preceding Runnable in the sequence, and which are not part of the user input. We can use `Runnable.bind()` to easily pass these arguments in.\n",
"\n",

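For context, `Runnable.bind()` as described in that cell might be used like this sketch (model, stop sequence, and prompt are illustrative; an OpenAI key is assumed):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template(
    "Write out the equation {equation}, then write SOLUTION and solve it."
)
# bind() attaches constant kwargs (here a stop sequence) to every invocation,
# without threading them through the chain's input.
model = ChatOpenAI(temperature=0).bind(stop=["SOLUTION"])

chain = prompt | model
print(chain.invoke({"equation": "x**2 + 2x + 1 = 0"}))
```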
View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Configure chain internals at runtime\n",
"*Last updated: 2024-01-02*\n",
"\n",
"Oftentimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things.\n",
"In order to make this experience as easy as possible, we have defined two methods.\n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Create a runnable with the `@chain` decorator\n",
"*Last updated: 2024-01-03*\n",
"\n",
"You can also turn an arbitrary function into a chain by adding a `@chain` decorator. This is functionally equivalent to wrapping in a [`RunnableLambda`](./functions).\n",
"\n",

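A minimal sketch of the decorator described in that cell (the function is invented for illustration; the import path assumes a recent `langchain_core`):

```python
from langchain_core.runnables import chain


@chain
def reverse_text(text: str) -> str:
    # An arbitrary function turned into a Runnable, equivalent to wrapping
    # it in a RunnableLambda.
    return text[::-1]


print(reverse_text.invoke("hello"))  # -> 'olleh'
```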
View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Add fallbacks\n",
"*Last updated: 2024-01-02*\n",
"\n",
"There are many possible points of failure in an LLM application, whether that be issues with LLM API's, poor model outputs, issues with other integrations, etc. Fallbacks help you gracefully handle and isolate these issues.\n",
"\n",

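A minimal sketch of the fallback mechanism described in that cell (both providers' API keys are assumed; model names are illustrative):

```python
from langchain.chat_models import ChatAnthropic, ChatOpenAI

# If the primary model errors out (rate limit, outage, ...), the fallback is tried.
llm = ChatOpenAI(model="gpt-3.5-turbo").with_fallbacks([ChatAnthropic(model="claude-2")])

print(llm.invoke("Hello!"))
```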
View File

@@ -18,6 +18,7 @@
"metadata": {},
"source": [
"# Run custom functions\n",
"*Last updated: 2024-01-03*\n",
"\n",
"You can use arbitrary functions in the pipeline.\n",
"\n",

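A minimal sketch of dropping a plain Python function into a pipeline, as that cell describes (the function is invented for illustration):

```python
from langchain_core.runnables import RunnableLambda


def count_words(text: str) -> int:
    return len(text.split())


# Any single-argument function can be wrapped as a Runnable.
word_counter = RunnableLambda(count_words)
print(word_counter.invoke("the quick brown fox"))  # 4
```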
View File

@@ -5,6 +5,7 @@
"metadata": {},
"source": [
"# Stream custom generator functions\n",
"*Last updated: 2024-01-02*\n",
"\n",
"You can use generator functions (i.e. functions that use the `yield` keyword, and behave like iterators) in an LCEL pipeline.\n",
"\n",

View File

@@ -3,6 +3,7 @@ sidebar_position: 2
---
# How to
*Last updated: 2023-12-01*
import DocCardList from "@theme/DocCardList";

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Inspect your runnables\n",
"*Last updated: 2024-01-03*\n",
"\n",
"Once you create a runnable with LCEL, you may often want to inspect it to get a better sense for what is going on. This notebook covers some methods for doing so.\n",
"\n",

View File

@@ -1,8 +1,8 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e2596041-9b76-4e74-836f-e6235086bbf0",
"cell_type": "raw",
"id": "97ac1c41-ae51-42c1-be69-f15016430642",
"metadata": {},
"source": [
"---\n",
@@ -18,6 +18,7 @@
"metadata": {},
"source": [
"# Manipulating inputs & output\n",
"*Last updated: 2024-01-04*\n",
"\n",
"RunnableParallel can be useful for manipulating the output of one Runnable to match the input format of the next Runnable in a sequence.\n",
"\n",
@@ -294,7 +295,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
"version": "3.9.1"
}
},
"nbformat": 4,

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Add message history (memory)\n",
"*Last updated: 2024-01-02*\n",
"\n",
"The `RunnableWithMessageHistory` lets us add message history to certain types of chains.\n",
"\n",

View File

@@ -1,8 +1,8 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d35de667-0352-4bfb-a890-cebe7f676fe7",
"cell_type": "raw",
"id": "c684eadc-a3b3-4ea6-9055-3108de71e8c1",
"metadata": {},
"source": [
"---\n",
@@ -18,6 +18,7 @@
"metadata": {},
"source": [
"# Passing data through\n",
"*Last updated: 2024-01-04*\n",
"\n",
"RunnablePassthrough allows you to pass inputs through unchanged or with the addition of extra keys. This is typically used in conjunction with RunnableParallel to assign data to a new key in the map.\n",
"\n",
@@ -151,7 +152,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.6"
"version": "3.9.1"
}
},
"nbformat": 4,

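The passthrough-plus-parallel pattern described in the file above could look like this minimal sketch:

```python
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

runnable = RunnableParallel(
    passed=RunnablePassthrough(),     # forward the input unchanged
    modified=lambda x: x["num"] + 1,  # derive a new value from the same input
)

print(runnable.invoke({"num": 1}))
# {'passed': {'num': 1}, 'modified': 2}
```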
View File

@@ -1,8 +1,8 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9e45e81c-e16e-4c6c-b6a3-2362e5193827",
"cell_type": "raw",
"id": "5d4be6f3-15d4-411d-9172-147f6b5e32c2",
"metadata": {},
"source": [
"---\n",
@@ -18,6 +18,7 @@
"metadata": {},
"source": [
"# Dynamically route logic based on input\n",
"*Last updated: 2024-01-04*\n",
"\n",
"This notebook covers how to do routing in the LangChain Expression Language.\n",
"\n",

View File

@@ -3,6 +3,7 @@ sidebar_class_name: hidden
---
# LangChain Expression Language (LCEL)
*Last updated: 2023-11-29*
LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together.
LCEL was designed from day 1 to **support putting prototypes in production, with no code changes**, from the simplest “prompt + LLM” chain to the most complex chains (we've seen folks successfully run LCEL chains with 100s of steps in production). To highlight a few of the reasons you might want to use LCEL:

View File

@@ -16,6 +16,8 @@
"id": "9a9acd2e",
"metadata": {},
"source": [
"*Last updated: 2024-01-02*\n",
"\n",
"To make it as easy as possible to create custom chains, we've implemented a [\"Runnable\"](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.base.Runnable.html#langchain_core.runnables.base.Runnable) protocol. The `Runnable` protocol is implemented for most components. \n",
"This is a standard interface, which makes it easy to define custom chains as well as invoke them in a standard way. \n",
"The standard interface includes:\n",

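The standard methods that cell refers to (`invoke`, `stream`, `batch`, plus their async variants) might be exercised like this sketch (an OpenAI key is assumed):

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

chain = ChatPromptTemplate.from_template("Tell me a joke about {topic}") | ChatOpenAI()

chain.invoke({"topic": "bears"})                      # one input -> one output
chain.batch([{"topic": "bears"}, {"topic": "cats"}])  # many inputs at once
for chunk in chain.stream({"topic": "bears"}):        # output streamed in chunks
    print(chunk.content, end="", flush=True)
```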
View File

@@ -18,6 +18,8 @@
"id": "919a5ae2-ed21-4923-b98f-723c111bac67",
"metadata": {},
"source": [
"*Last updated: 2024-01-02*\n",
"\n",
":::tip \n",
"We recommend reading the LCEL [Get started](/docs/expression_language/get_started) section first.\n",
":::"

View File

@@ -1,4 +1,5 @@
# Installation
*Last updated: 2023-12-11*
## Official release

View File

@@ -3,6 +3,7 @@ sidebar_position: 0
---
# Introduction
*Last updated: 2024-01-03*
**LangChain** is a framework for developing applications powered by language models. It enables applications that:
- **Are context-aware**: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)

View File

@@ -1,4 +1,5 @@
# Quickstart
*Last updated: 2024-01-02*
In this quickstart we'll show you how to:
- Get setup with LangChain, LangSmith and LangServe

View File

@@ -1,4 +1,5 @@
# Debugging
*Last updated: 2024-01-02*
If you're building with LLMs, at some point something will break, and you'll need to debug. A model call will fail, or the model output will be misformatted, or there will be some nested model calls and it won't be clear where along the way an incorrect output was created.
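One concrete starting point for the debugging workflow that page describes (whether or not this exact snippet appears there) is LangChain's global debug and verbose switches; a minimal sketch:

```python
from langchain.globals import set_debug, set_verbose

set_debug(True)    # print full inputs/outputs of every component in a chain
set_verbose(True)  # lighter-weight logging of only the "important" events
```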

View File

@@ -1,4 +1,5 @@
# Deployment
*Last updated: 2023-10-26*
In today's fast-paced technological landscape, the use of Large Language Models (LLMs) is rapidly expanding. As a result, it is crucial for developers to understand how to effectively deploy these models in production environments. LLM interfaces typically fall into two categories:

View File

@@ -1,4 +1,5 @@
# LangChain Templates
*Last updated: 2023-10-31*
For more information on LangChain Templates, visit

View File

@@ -16,6 +16,8 @@
"id": "657d2c8c-54b4-42a3-9f02-bdefa0ed6728",
"metadata": {},
"source": [
"*Last updated: 2024-01-02*\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/custom.ipynb)\n",
"\n",
"You can make your own pairwise string evaluators by inheriting from `PairwiseStringEvaluator` class and overwriting the `_evaluate_string_pairs` method (and the `_aevaluate_string_pairs` method if you want to use the evaluator asynchronously).\n",

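A toy sketch of such a subclass (the class name and scoring criterion are invented for illustration):

```python
from typing import Any, Optional

from langchain.evaluation import PairwiseStringEvaluator


class ShorterWinsEvaluator(PairwiseStringEvaluator):
    """Toy pairwise evaluator: prefer the shorter of the two predictions."""

    def _evaluate_string_pairs(
        self,
        *,
        prediction: str,
        prediction_b: str,
        reference: Optional[str] = None,
        input: Optional[str] = None,
        **kwargs: Any,
    ) -> dict:
        return {"score": 1 if len(prediction) < len(prediction_b) else 0}


evaluator = ShorterWinsEvaluator()
print(evaluator.evaluate_string_pairs(prediction="Short.", prediction_b="A much longer answer."))
```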
View File

@@ -2,6 +2,7 @@
sidebar_position: 3
---
# Comparison Evaluators
*Last updated: 2023-10-10*
Comparison evaluators in LangChain help measure two different chains or LLM outputs. These evaluators are helpful for comparative analyses, such as A/B testing between two language models, or comparing different versions of the same model. They can also be useful for things like generating preference scores for AI-assisted reinforcement learning.

View File

@@ -17,6 +17,8 @@
"tags": []
},
"source": [
"*Last updated: 2024-01-02*\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/pairwise_embedding_distance.ipynb)\n",
"\n",
"One way to measure the similarity (or dissimilarity) between two predictions on a shared or similar input is to embed the predictions and compute a vector distance between the two embeddings.<a name=\"cite_ref-1\"></a>[<sup>[1]</sup>](#cite_note-1)\n",

View File

@@ -16,6 +16,8 @@
"id": "2da95378",
"metadata": {},
"source": [
"*Last updated: 2024-01-02*\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/comparison/pairwise_string.ipynb)\n",
"\n",
"Often you will want to compare predictions of an LLM, Chain, or Agent for a given input. The `StringComparison` evaluators facilitate this so you can answer questions like:\n",

View File

@@ -5,6 +5,7 @@
"metadata": {},
"source": [
"# Comparing Chain Outputs\n",
"*Last updated: 2024-01-03*\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/examples/comparisons.ipynb)\n",
"\n",
"Suppose you have two different prompts (or LLMs). How do you know which will generate \"better\" results?\n",

View File

@@ -2,6 +2,7 @@
sidebar_position: 5
---
# Examples
*Last updated: 2023-10-10*
🚧 _Docs under construction_ 🚧

View File

@@ -1,6 +1,7 @@
import DocCardList from "@theme/DocCardList";
# Evaluation
*Last updated: 2023-12-19*
Building applications with language models involves many moving parts. One of the most critical components is ensuring that the outcomes produced by your models are reliable and useful across a broad array of inputs, and that they work well with your application's other software components. Ensuring reliability usually boils down to some combination of application design, testing & evaluation, and runtime checks.

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Criteria Evaluation\n",
"*Last updated: 2024-01-02*\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/criteria_eval_chain.ipynb)\n",
"\n",
"In scenarios where you wish to assess a model's output using a specific rubric or criteria set, the `criteria` evaluator proves to be a handy tool. It allows you to verify if an LLM or Chain's output complies with a defined set of criteria.\n",
@@ -464,4 +465,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Custom String Evaluator\n",
"*Last updated: 2023-11-14*\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/custom.ipynb)\n",
"\n",
"You can make your own custom string evaluators by inheriting from the `StringEvaluator` class and implementing the `_evaluate_strings` (and `_aevaluate_strings` for async support) methods.\n",
@@ -206,4 +207,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}

View File

@@ -1,224 +1,225 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"tags": []
},
"source": [
"# Embedding Distance\n",
"*Last updated: 2024-01-02*\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/embedding_distance.ipynb)\n",
"\n",
"To measure semantic similarity (or dissimilarity) between a prediction and a reference label string, you could use a vector distance metric between the two embedded representations using the `embedding_distance` evaluator.<a name=\"cite_ref-1\"></a>[<sup>[1]</sup>](#cite_note-1)\n",
"\n",
"\n",
"**Note:** This returns a **distance** score, meaning that the lower the number, the **more** similar the prediction is to the reference, according to their embedded representation.\n",
"\n",
"Check out the reference docs for the [EmbeddingDistanceEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.embedding_distance.base.EmbeddingDistanceEvalChain.html#langchain.evaluation.embedding_distance.base.EmbeddingDistanceEvalChain) for more info."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.evaluation import load_evaluator\n",
"\n",
"evaluator = load_evaluator(\"embedding_distance\")"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'score': 0.0966466944859925}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_strings(prediction=\"I shall go\", reference=\"I shan't go\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'score': 0.03761174337464557}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_strings(prediction=\"I shall go\", reference=\"I will go\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Select the Distance Metric\n",
"\n",
"By default, the evaluator uses cosine distance. You can choose a different distance metric if you'd like. "
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"[<EmbeddingDistance.COSINE: 'cosine'>,\n",
" <EmbeddingDistance.EUCLIDEAN: 'euclidean'>,\n",
" <EmbeddingDistance.MANHATTAN: 'manhattan'>,\n",
" <EmbeddingDistance.CHEBYSHEV: 'chebyshev'>,\n",
" <EmbeddingDistance.HAMMING: 'hamming'>]"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from langchain.evaluation import EmbeddingDistance\n",
"\n",
"list(EmbeddingDistance)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# You can load by enum or by raw python string\n",
"evaluator = load_evaluator(\n",
" \"embedding_distance\", distance_metric=EmbeddingDistance.EUCLIDEAN\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Select Embeddings to Use\n",
"\n",
"The constructor uses `OpenAI` embeddings by default, but you can configure this however you want. Below, we use local Hugging Face embeddings."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain_community.embeddings import HuggingFaceEmbeddings\n",
"\n",
"embedding_model = HuggingFaceEmbeddings()\n",
"hf_evaluator = load_evaluator(\"embedding_distance\", embeddings=embedding_model)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'score': 0.5486443280477362}"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"hf_evaluator.evaluate_strings(prediction=\"I shall go\", reference=\"I shan't go\")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'score': 0.21018880025138598}"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"hf_evaluator.evaluate_strings(prediction=\"I shall go\", reference=\"I will go\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"cite_note-1\"></a><i>1. Note: When it comes to semantic similarity, this often gives better results than older string distance metrics (such as those in the [StringDistanceEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.string_distance.base.StringDistanceEvalChain.html#langchain.evaluation.string_distance.base.StringDistanceEvalChain)), though it tends to be less reliable than evaluators that use the LLM directly (such as the [QAEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html#langchain.evaluation.qa.eval_chain.QAEvalChain) or [LabeledCriteriaEvalChain](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.LabeledCriteriaEvalChain.html#langchain.evaluation.criteria.eval_chain.LabeledCriteriaEvalChain)) </i>"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 4
}

View File

@@ -1,175 +1,176 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2da95378",
"metadata": {},
"source": [
"# Exact Match\n",
"*Last updated: 2023-10-12*\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/exact_match.ipynb)\n",
"\n",
"Probably the simplest way to evaluate an LLM or runnable's string output against a reference label is by simple string equivalence.\n",
"\n",
"This can be accessed using the `exact_match` evaluator."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "0de44d01-1fea-4701-b941-c4fb74e521e7",
"metadata": {},
"outputs": [],
"source": [
"from langchain.evaluation import ExactMatchStringEvaluator\n",
"\n",
"evaluator = ExactMatchStringEvaluator()"
]
},
{
"cell_type": "markdown",
"id": "fe3baf5f-bfee-4745-bcd6-1a9b422ed46f",
"metadata": {},
"source": [
"Alternatively via the loader:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f6790c46",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from langchain.evaluation import load_evaluator\n",
"\n",
"evaluator = load_evaluator(\"exact_match\")"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "49ad9139",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"{'score': 0}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_strings(\n",
" prediction=\"1 LLM.\",\n",
" reference=\"2 llm\",\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "1f5e82a3-247e-45a8-85fc-6af53bf7ff82",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'score': 0}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_strings(\n",
" prediction=\"LangChain\",\n",
" reference=\"langchain\",\n",
")"
]
},
{
"cell_type": "markdown",
"id": "b8ed1f12-09a6-4e90-a69d-c8df525ff293",
"metadata": {},
"source": [
"## Configure the ExactMatchStringEvaluator\n",
"\n",
"You can relax the \"exactness\" when comparing strings."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0c079864-0175-4d06-9d3f-a0e51dd3977c",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"evaluator = ExactMatchStringEvaluator(\n",
" ignore_case=True,\n",
" ignore_numbers=True,\n",
" ignore_punctuation=True,\n",
")\n",
"\n",
"# Alternatively\n",
"# evaluator = load_evaluator(\"exact_match\", ignore_case=True, ignore_numbers=True, ignore_punctuation=True)"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a8dfb900-14f3-4a1f-8736-dd1d86a1264c",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'score': 1}"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"evaluator.evaluate_strings(\n",
" prediction=\"1 LLM.\",\n",
" reference=\"2 llm\",\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}

View File

@@ -2,6 +2,7 @@
sidebar_position: 2
---
# String Evaluators
*Last updated: 2023-10-10*
A string evaluator is a component within LangChain designed to assess the performance of a language model by comparing its generated outputs (predictions) to a reference string or an input. This comparison is a crucial step in the evaluation of language models, providing a measure of the accuracy or quality of the generated text.

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# JSON Evaluators\n",
"*Last updated: 2023-12-08*\n",
"\n",
"Evaluating [extraction](https://python.langchain.com/docs/use_cases/extraction) and function calling applications often comes down to validation that the LLM's string output can be parsed correctly and how it compares to a reference object. The following `JSON` validators provide functionality to check your model's output consistently.\n",
"\n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Regex Match\n",
"*Last updated: 2023-10-29*\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/regex_match.ipynb)\n",
"\n",
"To evaluate chain or runnable string predictions against a custom regex, you can use the `regex_match` evaluator."
@@ -240,4 +241,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Scoring Evaluator\n",
"*Last updated: 2024-01-02*\n",
"\n",
"The Scoring Evaluator instructs a language model to assess your model's predictions on a specified scale (default is 1-10) based on your custom criteria or rubric. This feature provides a nuanced evaluation instead of a simplistic binary score, aiding in evaluating models against tailored rubrics and comparing model performance on specific tasks.\n",
"\n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# String Distance\n",
"*Last updated: 2023-12-08*\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/string/string_distance.ipynb)\n",
"\n",
">In information theory, linguistics, and computer science, the [Levenshtein distance (Wikipedia)](https://en.wikipedia.org/wiki/Levenshtein_distance) is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. It is named after the Soviet mathematician Vladimir Levenshtein, who considered this distance in 1965.\n",

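Typical usage of that evaluator might look like this sketch (the `rapidfuzz` package is assumed to be installed; the strings are illustrative):

```python
from langchain.evaluation import load_evaluator

evaluator = load_evaluator("string_distance")
result = evaluator.evaluate_strings(
    prediction="The job is completely done.",
    reference="The job is done",
)
print(result)  # {'score': ...} -- a distance, so lower means more similar
```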
View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Custom Trajectory Evaluator\n",
"*Last updated: 2024-01-02*\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/trajectory/custom.ipynb)\n",
"\n",
"You can make your own custom trajectory evaluators by inheriting from the [AgentTrajectoryEvaluator](https://api.python.langchain.com/en/latest/evaluation/langchain.evaluation.schema.AgentTrajectoryEvaluator.html#langchain.evaluation.schema.AgentTrajectoryEvaluator) class and overwriting the `_evaluate_agent_trajectory` (and `_aevaluate_agent_action`) method.\n",
@@ -140,4 +141,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}

View File

@@ -2,6 +2,7 @@
sidebar_position: 4
---
# Trajectory Evaluators
*Last updated: 2023-10-10*
Trajectory Evaluators in LangChain provide a more holistic approach to evaluating an agent. These evaluators assess the full sequence of actions taken by an agent and their corresponding responses, which we refer to as the "trajectory". This allows you to better measure an agent's effectiveness and capabilities.

View File

@@ -8,6 +8,7 @@
},
"source": [
"# Agent Trajectory\n",
"*Last updated: 2024-01-02*\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/evaluation/trajectory/trajectory_eval.ipynb)\n",
"\n",
"Agents can be difficult to holistically evaluate due to the breadth of actions and generation they can make. We recommend using multiple evaluation techniques appropriate to your use case. One way to evaluate an agent is to look at the whole trajectory of actions taken along with their responses.\n",
@@ -300,4 +301,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Fallbacks\n",
"*Last updated: 2024-01-02*\n",
"\n",
"When working with language models, you may often encounter issues from the underlying APIs, whether these be rate limiting or downtime. Therefore, as you go to move your LLM applications into production it becomes more and more important to safeguard against these. That's why we've introduced the concept of fallbacks. \n",
"\n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Run LLMs locally\n",
"*Last updated: 2024-01-02*\n",
"\n",
"## Use case\n",
"\n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Model comparison\n",
"*Last updated: 2024-01-03*\n",
"\n",
"Constructing your language model application will likely involve choosing between many different options of prompts, models, and even chains to use. When doing so, you will want to compare these different options on different inputs in an easy, flexible, and intuitive way. \n",
"\n",

View File

@@ -5,6 +5,7 @@
"metadata": {},
"source": [
"# Data anonymization with Microsoft Presidio\n",
"*Last updated: 2024-01-02*\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/index.ipynb)\n",
"\n",

View File

@@ -15,6 +15,7 @@
"metadata": {},
"source": [
"# Multi-language data anonymization with Microsoft Presidio\n",
"*Last updated: 2023-10-18*\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/multi_language.ipynb)\n",
"\n",

View File

@@ -15,6 +15,7 @@
"metadata": {},
"source": [
"# QA with private data protection\n",
"*Last updated: 2024-01-02*\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/qa_privacy_protection.ipynb)\n",
"\n",

View File

@@ -15,6 +15,7 @@
"metadata": {},
"source": [
"# Reversible data anonymization with Microsoft Presidio\n",
"*Last updated: 2024-01-02*\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/guides/privacy/presidio_data_anonymization/reversible.ipynb)\n",
"\n",

View File

@@ -1,4 +1,5 @@
# Pydantic compatibility
*Last updated: 2023-12-11*
- Pydantic v2 was released in June, 2023 (https://docs.pydantic.dev/2.0/blog/pydantic-v2-final/)
- v2 contains a number of breaking changes (https://docs.pydantic.dev/2.0/migration/)

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Amazon Comprehend Moderation Chain\n",
"*Last updated: 2024-01-02*\n",
"\n",
">[Amazon Comprehend](https://aws.amazon.com/comprehend/) is a natural-language processing (NLP) service that uses machine learning to uncover valuable insights and connections in text.\n",
"\n",

View File

@@ -1,4 +1,5 @@
# Constitutional chain
*Last updated: 2024-01-02*
This example shows the Self-critique chain with `Constitutional AI`.

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Hugging Face prompt injection identification\n",
"*Last updated: 2024-01-02*\n",
"\n",
"This notebook shows how to prevent prompt injection attacks using the text classification model from `HuggingFace`.\n",
"\n",
@@ -80,7 +81,9 @@
"outputs": [
{
"data": {
"text/plain": "'hugging_face_injection_identifier'"
"text/plain": [
"'hugging_face_injection_identifier'"
]
},
"execution_count": 10,
"metadata": {},
@@ -119,7 +122,9 @@
"outputs": [
{
"data": {
"text/plain": "'Name 5 cities with the biggest number of inhabitants'"
"text/plain": [
"'Name 5 cities with the biggest number of inhabitants'"
]
},
"execution_count": 11,
"metadata": {},

View File

@@ -1,4 +1,5 @@
# Safety
*Last updated: 2023-10-16*
One of the key concerns with using LLMs is that they may generate harmful or unethical text. This is an area of active research in the field. Here we present some built-in chains inspired by this research, which are intended to make the outputs of LLMs safer.

View File

@@ -1,4 +1,5 @@
# Logical Fallacy chain
*Last updated: 2024-01-02*
This example shows how to remove logical fallacies from model output.

View File

@@ -1,4 +1,5 @@
# Moderation chain
*Last updated: 2024-01-02*
This notebook walks through examples of how to use a moderation chain, and several common ways of doing so.
Moderation chains are useful for detecting text that could be hateful, violent, etc. They can be applied both to user input and to the output of a language model.
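A minimal sketch using OpenAI's moderation endpoint (an OpenAI API key is assumed; the input string is illustrative):

```python
from langchain.chains import OpenAIModerationChain

moderation_chain = OpenAIModerationChain()
# Unflagged text is passed through; flagged text is replaced with a policy message.
print(moderation_chain.run("This is okay"))
```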

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# OpenAI Adapter(Old)\n",
"*Last updated: 2023-12-06*\n",
"\n",
"**Please ensure OpenAI library is less than 1.0.0; otherwise, refer to the newer doc [OpenAI Adapter](./openai).**\n",
"\n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# OpenAI Adapter\n",
"*Last updated: 2023-12-06*\n",
"\n",
"**Please ensure OpenAI library is version 1.0.0 or higher; otherwise, refer to the older doc [OpenAI Adapter(Old)](./openai-old).**\n",
"\n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Argilla\n",
"*Last updated: 2024-01-02*\n",
"\n",
">[Argilla](https://argilla.io/) is an open-source data curation platform for LLMs.\n",
"> Using Argilla, everyone can build robust language models through faster data curation \n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Confident\n",
"*Last updated: 2024-01-02*\n",
"\n",
">[DeepEval](https://confident-ai.com) package for unit testing LLMs.\n",
"> Using Confident, everyone can build robust language models through faster iterations\n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Context\n",
"*Last updated: 2024-01-02*\n",
"\n",
">[Context](https://context.ai/) provides user analytics for LLM-powered products and features.\n",
"\n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Infino\n",
"*Last updated: 2024-01-02*\n",
"\n",
">[Infino](https://github.com/infinohq/infino) is a scalable telemetry store designed for logs, metrics, and traces. Infino can function as a standalone observability solution or as the storage layer in your observability stack.\n",
"\n",

View File

@@ -13,6 +13,7 @@
},
"source": [
"# Label Studio\n",
"*Last updated: 2024-01-02*\n",
"\n",
"\n",
">[Label Studio](https://labelstud.io/guide/get_started) is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\n",

View File

@@ -1,4 +1,5 @@
# LLMonitor
*Last updated: 2024-01-02*
>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# PromptLayer\n",
"*Last updated: 2024-01-02*\n",
"\n",
">[PromptLayer](https://docs.promptlayer.com/introduction) is a platform for prompt engineering. It also helps with the LLM observability to visualize requests, version prompts, and track usage.\n",
">\n",

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# SageMaker Tracking\n",
"*Last updated: 2024-01-02*\n",
"\n",
">[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. \n",
"\n",

View File

@@ -1,4 +1,5 @@
# Streamlit
*Last updated: 2024-01-02*
> **[Streamlit](https://streamlit.io/) is a faster way to build and share data apps.**
> Streamlit turns data scripts into shareable web apps in minutes. All in pure Python. No frontend experience required.

View File

@@ -8,6 +8,7 @@
},
"source": [
"# Trubrics\n",
"*Last updated: 2024-01-02*\n",
"\n",
"\n",
">[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user\n",

View File

@@ -14,6 +14,7 @@
"metadata": {},
"source": [
"# Alibaba Cloud PAI EAS\n",
"*Last updated: 2024-01-02*\n",
"\n",
">[Alibaba Cloud PAI (Platform for AI)](https://www.alibabacloud.com/help/en/pai/?spm=a2c63.p38356.0.0.c26a426ckrxUwZ) is a lightweight and cost-efficient machine learning platform that uses cloud-native technologies. It provides you with an end-to-end modelling service. It accelerates model training based on tens of billions of features and hundreds of billions of samples in more than 100 scenarios.\n",
"\n",

View File

@@ -16,6 +16,7 @@
"metadata": {},
"source": [
"# ChatAnthropic\n",
"*Last updated: 2024-01-02*\n",
"\n",
"This notebook covers how to get started with Anthropic chat models."
]
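Getting started might look like this sketch (an Anthropic API key is assumed; the model name is illustrative):

```python
from langchain.chat_models import ChatAnthropic
from langchain.schema import HumanMessage

chat = ChatAnthropic(model="claude-2")
print(chat.invoke([HumanMessage(content="Say hello in French.")]))
```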

View File

@@ -6,6 +6,7 @@
"metadata": {},
"source": [
"# Anthropic Functions\n",
"*Last updated: 2023-10-29*\n",
"\n",
"This notebook shows how to use an experimental wrapper around Anthropic that gives it the same API as OpenAI Functions."
]

View File

@@ -17,6 +17,7 @@
"metadata": {},
"source": [
"# ChatAnyscale\n",
"*Last updated: 2024-01-02*\n",
"\n",
"This notebook demonstrates the use of `langchain.chat_models.ChatAnyscale` for [Anyscale Endpoints](https://endpoints.anyscale.com/).\n",
"\n",

View File

@@ -16,6 +16,7 @@
"metadata": {},
"source": [
"# AzureChatOpenAI\n",
"*Last updated: 2024-01-02*\n",
"\n",
">[Azure OpenAI Service](https://learn.microsoft.com/en-us/azure/ai-services/openai/overview) provides REST API access to OpenAI's powerful language models including the GPT-4, GPT-3.5-Turbo, and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation. Users can access the service through REST APIs, Python SDK, or a web-based interface in the Azure OpenAI Studio.\n",
"\n",

View File

@@ -14,6 +14,7 @@
"metadata": {},
"source": [
"# AzureMLChatOnlineEndpoint\n",
"*Last updated: 2024-01-02*\n",
"\n",
">[Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning/) is a platform used to build, train, and deploy machine learning models. Users can explore the types of models to deploy in the Model Catalog, which provides Azure Foundation Models and OpenAI Models. `Azure Foundation Models` include various open-source models and popular Hugging Face models. Users can also import models of their liking into AzureML.\n",
">\n",

View File

@@ -14,6 +14,7 @@
"metadata": {},
"source": [
"# ChatBaichuan\n",
"*Last updated: 2024-01-02*\n",
"\n",
"Baichuan chat models API by Baichuan Intelligent Technology. For more information, see [https://platform.baichuan-ai.com/docs/api](https://platform.baichuan-ai.com/docs/api)"
]
@@ -157,8 +158,7 @@
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
},
"orig_nbformat": 4
}
},
"nbformat": 4,
"nbformat_minor": 2

View File

@@ -15,6 +15,7 @@
"metadata": {},
"source": [
"# QianfanChatEndpoint\n",
"*Last updated: 2024-01-02*\n",
"\n",
"Baidu AI Cloud Qianfan Platform is a one-stop large model development and service operation platform for enterprise developers. Qianfan provides not only models, including Wenxin Yiyan (ERNIE-Bot) and third-party open-source models, but also various AI development tools and a complete development environment, which makes it easy for customers to use and develop large model applications.\n",
"\n",

Some files were not shown because too many files have changed in this diff.