Mirror of https://github.com/hwchase17/langchain.git (synced 2025-08-16 08:06:14 +00:00)
docs: integrations/providers/ update (#14315)
- added missed provider files (from `integrations/callbacks`)
- updated notebooks: added links; updated into consistent formats
This commit is contained in:
parent
6607cc6eab
commit
0f02e94565
@@ -7,8 +7,6 @@
 "source": [
 "# Argilla\n",
 "\n",
-"\n",
-"\n",
 ">[Argilla](https://argilla.io/) is an open-source data curation platform for LLMs.\n",
 "> Using Argilla, everyone can build robust language models through faster data curation \n",
 "> using both human and machine feedback. We provide support for each step in the MLOps cycle, \n",
@@ -410,7 +408,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.3"
+"version": "3.10.12"
 },
 "vscode": {
 "interpreter": {
@@ -7,12 +7,9 @@
 "source": [
 "# Context\n",
 "\n",
-"\n",
+">[Context](https://context.ai/) provides user analytics for LLM-powered products and features.\n",
 "\n",
-"[Context](https://context.ai/) provides user analytics for LLM powered products and features.\n",
-"\n",
-"With Context, you can start understanding your users and improving their experiences in less than 30 minutes.\n",
-"\n"
+"With `Context`, you can start understanding your users and improving their experiences in less than 30 minutes.\n"
 ]
 },
 {
@@ -89,11 +86,9 @@
 "metadata": {},
 "source": [
 "## Usage\n",
-"### Using the Context callback within a chat model\n",
+"### Context callback within a chat model\n",
 "\n",
-"The Context callback handler can be used to directly record transcripts between users and AI assistants.\n",
-"\n",
-"#### Example"
+"The Context callback handler can be used to directly record transcripts between users and AI assistants."
 ]
 },
 {
@@ -132,7 +127,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Using the Context callback within Chains\n",
+"### Context callback within Chains\n",
 "\n",
 "The Context callback handler can also be used to record the inputs and outputs of chains. Note that intermediate steps of the chain are not recorded - only the starting inputs and final outputs.\n",
 "\n",
@@ -149,9 +144,7 @@
 ">handler = ContextCallbackHandler(token)\n",
 ">chat = ChatOpenAI(temperature=0.9, callbacks=[callback])\n",
 ">chain = LLMChain(llm=chat, prompt=chat_prompt_template, callbacks=[callback])\n",
-">```\n",
-"\n",
-"#### Example"
+">```\n"
 ]
 },
 {
@@ -203,7 +196,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.1"
+"version": "3.10.12"
 },
 "vscode": {
 "interpreter": {
@@ -7,12 +7,14 @@
 "source": [
 "# Infino\n",
 "\n",
+">[Infino](https://github.com/infinohq/infino) is a scalable telemetry store designed for logs, metrics, and traces. Infino can function as a standalone observability solution or as the storage layer in your observability stack.\n",
+"\n",
 "This example shows how one can track the following while calling OpenAI and ChatOpenAI models via `LangChain` and [Infino](https://github.com/infinohq/infino):\n",
 "\n",
-"* prompt input,\n",
-"* response from `ChatGPT` or any other `LangChain` model,\n",
-"* latency,\n",
-"* errors,\n",
+"* prompt input\n",
+"* response from `ChatGPT` or any other `LangChain` model\n",
+"* latency\n",
+"* errors\n",
 "* number of tokens consumed"
 ]
 },
@@ -454,7 +456,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.1"
+"version": "3.10.12"
 }
 },
 "nbformat": 4,
@@ -4,6 +4,9 @@
 "cell_type": "markdown",
 "metadata": {
 "collapsed": true,
+"jupyter": {
+"outputs_hidden": true
+},
 "pycharm": {
 "name": "#%% md\n"
 }
@@ -11,17 +14,14 @@
 "source": [
 "# Label Studio\n",
 "\n",
-"<div>\n",
-"<img src=\"https://labelstudio-pub.s3.amazonaws.com/lc/open-source-data-labeling-platform.png\" width=\"400\"/>\n",
-"</div>\n",
 "\n",
-"Label Studio is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\n",
+">[Label Studio](https://labelstud.io/guide/get_started) is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.\n",
 "\n",
-"In this guide, you will learn how to connect a LangChain pipeline to Label Studio to:\n",
+"In this guide, you will learn how to connect a LangChain pipeline to `Label Studio` to:\n",
 "\n",
-"- Aggregate all input prompts, conversations, and responses in a single LabelStudio project. This consolidates all the data in one place for easier labeling and analysis.\n",
+"- Aggregate all input prompts, conversations, and responses in a single `Label Studio` project. This consolidates all the data in one place for easier labeling and analysis.\n",
 "- Refine prompts and responses to create a dataset for supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) scenarios. The labeled data can be used to further train the LLM to improve its performance.\n",
-"- Evaluate model responses through human feedback. LabelStudio provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration."
+"- Evaluate model responses through human feedback. `Label Studio` provides an interface for humans to review and provide feedback on model responses, allowing evaluation and iteration."
 ]
 },
 {
@@ -362,9 +362,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "labelops",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
-"name": "labelops"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -376,9 +376,9 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.9.16"
+"version": "3.10.12"
 }
 },
 "nbformat": 4,
-"nbformat_minor": 1
+"nbformat_minor": 4
 }
@@ -1,6 +1,6 @@
 # LLMonitor
 
-[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
+>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
 
 <video controls width='100%' >
   <source src='https://llmonitor.com/videos/demo-annotated.mp4'/>
@@ -7,11 +7,10 @@
 "source": [
 "# PromptLayer\n",
 "\n",
-"\n",
 "\n",
-"[PromptLayer](https://promptlayer.com) is a an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to setup the `PromptLayerCallbackHandler`. \n",
+">[PromptLayer](https://promptlayer.com) is a an LLM observability platform that lets you visualize requests, version prompts, and track usage. In this guide we will go over how to setup the `PromptLayerCallbackHandler`. \n",
 "\n",
-"While PromptLayer does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
+"While `PromptLayer` does have LLMs that integrate directly with LangChain (e.g. [`PromptLayerOpenAI`](https://python.langchain.com/docs/integrations/llms/promptlayer_openai)), this callback is the recommended way to integrate PromptLayer with LangChain.\n",
 "\n",
 "See [our docs](https://docs.promptlayer.com/languages/langchain) for more information."
 ]
@@ -51,7 +50,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Usage\n",
+"## Usage\n",
 "\n",
 "Getting started with `PromptLayerCallbackHandler` is fairly simple, it takes two optional arguments:\n",
 "1. `pl_tags` - an optional list of strings that will be tracked as tags on PromptLayer.\n",
@@ -63,7+62,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Simple OpenAI Example\n",
+"## Simple OpenAI Example\n",
 "\n",
 "In this simple example we use `PromptLayerCallbackHandler` with `ChatOpenAI`. We add a PromptLayer tag named `chatopenai`"
 ]
@@ -99,7 +98,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### GPT4All Example"
+"## GPT4All Example"
 ]
 },
 {
@@ -125,9 +124,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### Full Featured Example\n",
+"## Full Featured Example\n",
 "\n",
-"In this example we unlock more of the power of PromptLayer.\n",
+"In this example, we unlock more of the power of `PromptLayer`.\n",
 "\n",
 "PromptLayer allows you to visually create, version, and track prompt templates. Using the [Prompt Registry](https://docs.promptlayer.com/features/prompt-registry), we can programmatically fetch the prompt template called `example`.\n",
 "\n",
@@ -182,7 +181,7 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "base",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
 "name": "python3"
 },
@@ -196,7 +195,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.8.8 (default, Apr 13 2021, 12:59:45) \n[Clang 10.0.0 ]"
+"version": "3.10.12"
 },
 "vscode": {
 "interpreter": {
@@ -7,14 +7,15 @@
 "source": [
 "# SageMaker Tracking\n",
 "\n",
-"This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into SageMaker Experiments. Here, we use different scenarios to showcase the capability:\n",
+">[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. \n",
+"\n",
+">[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability of `Amazon SageMaker` that lets you organize, track, compare and evaluate ML experiments and model versions.\n",
+"\n",
+"This notebook shows how LangChain Callback can be used to log and track prompts and other LLM hyperparameters into `SageMaker Experiments`. Here, we use different scenarios to showcase the capability:\n",
 "* **Scenario 1**: *Single LLM* - A case where a single LLM model is used to generate output based on a given prompt.\n",
 "* **Scenario 2**: *Sequential Chain* - A case where a sequential chain of two LLM models is used.\n",
 "* **Scenario 3**: *Agent with Tools (Chain of Thought)* - A case where multiple tools (search and math) are used in addition to an LLM.\n",
 "\n",
-"[Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a fully managed service that is used to quickly and easily build, train and deploy machine learning (ML) models. \n",
-"\n",
-"[Amazon SageMaker Experiments](https://docs.aws.amazon.com/sagemaker/latest/dg/experiments.html) is a capability of Amazon SageMaker that lets you organize, track, compare and evaluate ML experiments and model versions.\n",
 "\n",
 "In this notebook, we will create a single experiment to log the prompts from each scenario."
 ]
@@ -899,9 +900,9 @@
 ],
 "instance_type": "ml.t3.large",
 "kernelspec": {
-"display_name": "conda_pytorch_p310",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
-"name": "conda_pytorch_p310"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -913,7 +914,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.10.10"
+"version": "3.10.12"
 }
 },
 "nbformat": 4,
@@ -9,12 +9,13 @@
 "source": [
 "# Trubrics\n",
 "\n",
-"\n",
 "\n",
-"[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user\n",
-"prompts & feedback on AI models. In this guide we will go over how to setup the `TrubricsCallbackHandler`. \n",
+">[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user\n",
+"prompts & feedback on AI models.\n",
+">\n",
+">Check out [Trubrics repo](https://github.com/trubrics/trubrics-sdk) for more information on `Trubrics`.\n",
 "\n",
-"Check out [our repo](https://github.com/trubrics/trubrics-sdk) for more information on Trubrics."
+"In this guide, we will go over how to set up the `TrubricsCallbackHandler`. \n"
 ]
 },
 {
@@ -347,9 +348,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "langchain",
+"display_name": "Python 3 (ipykernel)",
 "language": "python",
-"name": "langchain"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -361,7 +362,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.11.4"
+"version": "3.10.12"
 }
 },
 "nbformat": 4,
docs/docs/integrations/providers/context.mdx (new file, 20 lines)
@@ -0,0 +1,20 @@
+# Context
+
+>[Context](https://context.ai/) provides user analytics for LLM-powered products and features.
+
+## Installation and Setup
+
+We need to install the `context-python` Python package:
+
+```bash
+pip install context-python
+```
+
+
+## Callbacks
+
+See a [usage example](/docs/integrations/callbacks/context).
+
+```python
+from langchain.callbacks import ContextCallbackHandler
+```
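Every provider page added in this commit follows the same wiring pattern: install a package, import a `*CallbackHandler`, and pass it to a model through `callbacks=[...]`. As a LangChain-free sketch of the observer pattern behind that API (the class and method names below are invented for illustration and are not the real LangChain interfaces):

```python
class RecordingHandler:
    """Minimal stand-in for a LangChain-style callback handler."""

    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt):
        self.events.append(("start", prompt))

    def on_llm_end(self, response):
        self.events.append(("end", response))


class FakeChatModel:
    """Toy model that notifies its callbacks around each call."""

    def __init__(self, callbacks=None):
        self.callbacks = callbacks or []

    def predict(self, prompt):
        for cb in self.callbacks:
            cb.on_llm_start(prompt)
        response = prompt.upper()  # stand-in for a real completion
        for cb in self.callbacks:
            cb.on_llm_end(response)
        return response


handler = RecordingHandler()
model = FakeChatModel(callbacks=[handler])
model.predict("hello")
print(handler.events)  # [('start', 'hello'), ('end', 'HELLO')]
```

Because the model, not the handler, drives the notifications, any number of analytics backends can observe the same call without changing the calling code.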
docs/docs/integrations/providers/labelstudio.mdx (new file, 23 lines)
@@ -0,0 +1,23 @@
+# Label Studio
+
+
+>[Label Studio](https://labelstud.io/guide/get_started) is an open-source data labeling platform that provides LangChain with flexibility when it comes to labeling data for fine-tuning large language models (LLMs). It also enables the preparation of custom training data and the collection and evaluation of responses through human feedback.
+
+## Installation and Setup
+
+See the [Label Studio installation guide](https://labelstud.io/guide/install) for installation options.
+
+We need to install the `label-studio` and `label-studio-sdk` Python packages:
+
+```bash
+pip install label-studio label-studio-sdk
+```
+
+
+## Callbacks
+
+See a [usage example](/docs/integrations/callbacks/labelstudio).
+
+```python
+from langchain.callbacks import LabelStudioCallbackHandler
+```
docs/docs/integrations/providers/llmonitor.mdx (new file, 22 lines)
@@ -0,0 +1,22 @@
+# LLMonitor
+
+>[LLMonitor](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs) is an open-source observability platform that provides cost and usage analytics, user tracking, tracing and evaluation tools.
+
+## Installation and Setup
+
+Create an account on [llmonitor.com](https://llmonitor.com?utm_source=langchain&utm_medium=py&utm_campaign=docs), then copy your new app's `tracking id`.
+
+Once you have it, set it as an environment variable by running:
+
+```bash
+export LLMONITOR_APP_ID="..."
+```
+
+
+## Callbacks
+
+See a [usage example](/docs/integrations/callbacks/llmonitor).
+
+```python
+from langchain.callbacks import LLMonitorCallbackHandler
+```
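The LLMonitor page configures its handler through the `LLMONITOR_APP_ID` environment variable. A quick sketch of how such configuration is typically read at import time (the variable name comes from the page above; the helper function and its fallback behavior are assumptions for illustration):

```python
import os


def get_app_id():
    """Read the LLMonitor app id from the environment, failing loudly if unset."""
    app_id = os.environ.get("LLMONITOR_APP_ID")
    if not app_id:
        raise RuntimeError("LLMONITOR_APP_ID is not set; see Installation and Setup")
    return app_id


# Simulate the `export LLMONITOR_APP_ID="..."` step from the page above:
os.environ["LLMONITOR_APP_ID"] = "demo-app-id"
print(get_app_id())  # demo-app-id
```

Failing loudly when the variable is missing surfaces misconfiguration at startup rather than as silently dropped analytics.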
docs/docs/integrations/providers/streamlit.mdx (new file, 22 lines)
@@ -0,0 +1,22 @@
+# Streamlit
+
+> [Streamlit](https://streamlit.io/) is a faster way to build and share data apps.
+> `Streamlit` turns data scripts into shareable web apps in minutes. All in pure Python. No front-end experience required.
+> See more examples at [streamlit.io/generative-ai](https://streamlit.io/generative-ai).
+
+## Installation and Setup
+
+We need to install the `streamlit` Python package:
+
+```bash
+pip install streamlit
+```
+
+
+## Callbacks
+
+See a [usage example](/docs/integrations/callbacks/streamlit).
+
+```python
+from langchain.callbacks import StreamlitCallbackHandler
+```
docs/docs/integrations/providers/trubrics.mdx (new file, 24 lines)
@@ -0,0 +1,24 @@
+# Trubrics
+
+
+>[Trubrics](https://trubrics.com) is an LLM user analytics platform that lets you collect, analyse and manage user
+prompts & feedback on AI models.
+>
+>Check out [Trubrics repo](https://github.com/trubrics/trubrics-sdk) for more information on `Trubrics`.
+
+## Installation and Setup
+
+We need to install the `trubrics` Python package:
+
+```bash
+pip install trubrics
+```
+
+
+## Callbacks
+
+See a [usage example](/docs/integrations/callbacks/trubrics).
+
+```python
+from langchain.callbacks import TrubricsCallbackHandler
+```
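Trubrics' value, per the page above, is pairing prompts and responses with the user feedback collected later. As a toy illustration of that pairing (all names below are invented for the sketch, not the Trubrics SDK):

```python
from dataclasses import dataclass, field


@dataclass
class PromptRecord:
    prompt: str
    response: str
    feedback: list = field(default_factory=list)


class FeedbackLog:
    """Toy store pairing model calls with later user feedback, as an analytics platform would."""

    def __init__(self):
        self.records = []

    def log(self, prompt, response):
        """Record a prompt/response pair and return its record id."""
        self.records.append(PromptRecord(prompt, response))
        return len(self.records) - 1

    def add_feedback(self, record_id, score):
        """Attach a piece of user feedback to an earlier call."""
        self.records[record_id].feedback.append(score)


log = FeedbackLog()
rid = log.log("What is 2+2?", "4")
log.add_feedback(rid, "thumbs_up")
print(log.records[rid].feedback)  # ['thumbs_up']
```

Keeping the record id from the original call is what lets feedback arriving minutes or days later be joined back to the exact prompt that produced the response.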