mirror of https://github.com/hwchase17/langchain.git
synced 2025-09-05 21:12:48 +00:00
docs: ecosystem/integrations update 5 (#5752)

- added missing integrations to `docs/ecosystem/integrations/`
- updated notebooks to a consistent format: changed titles and file names; added descriptions

#### Who can review?
@hwchase17 @dev2049
@@ -1,4 +1,4 @@
-# Bedrock
+# Amazon Bedrock

>[Amazon Bedrock](https://aws.amazon.com/bedrock/) is a fully managed service that makes FMs from leading AI startups and Amazon available via an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case.

@@ -18,7 +18,7 @@ from langchain import Bedrock
## Text Embedding Models

-See a [usage example](../modules/models/text_embedding/examples/bedrock.ipynb).
+See a [usage example](../modules/models/text_embedding/examples/amazon_bedrock.ipynb).

```python
from langchain.embeddings import BedrockEmbeddings
```
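For readers following along, a minimal usage sketch is below; the `model_id` value and the assumption that AWS credentials are already configured are illustrative, not part of this diff:

```python
from langchain.embeddings import BedrockEmbeddings

# Assumes AWS credentials are already configured (e.g. via `aws configure`);
# the model_id shown here is a hypothetical example.
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")

query_vector = embeddings.embed_query("What is Amazon Bedrock?")
doc_vectors = embeddings.embed_documents(["Bedrock offers foundation models via an API."])
```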
26 docs/integrations/anthropic.md Normal file
@@ -0,0 +1,26 @@
+# Anthropic
+
+>[Anthropic](https://en.wikipedia.org/wiki/Anthropic) is an American artificial intelligence (AI) startup and
+> public-benefit corporation, founded by former members of OpenAI. `Anthropic` specializes in developing general AI
+> systems and language models, with a company ethos of responsible AI usage.
+> `Anthropic` develops a chatbot named `Claude`. Similar to `ChatGPT`, `Claude` uses a messaging
+> interface where users can submit questions or requests and receive highly detailed and relevant responses.
+
+## Installation and Setup
+
+```bash
+pip install anthropic
+```
+
+See the [setup documentation](https://console.anthropic.com/docs/access).
+
+## Chat Models
+
+See a [usage example](../modules/models/chat/integrations/anthropic.ipynb)
+
+```python
+from langchain.chat_models import ChatAnthropic
+```
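As a hedged sketch of how the chat model might be invoked (assuming `ANTHROPIC_API_KEY` is set in the environment; the message content is illustrative, not from the diff):

```python
from langchain.chat_models import ChatAnthropic
from langchain.schema import HumanMessage

# Assumes the ANTHROPIC_API_KEY environment variable is set.
chat = ChatAnthropic()
response = chat([HumanMessage(content="What are the main uses of LangChain?")])
print(response.content)
```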
@@ -1,7 +1,8 @@
# Beam

-This page covers how to use Beam within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Beam wrappers.
+>[Beam](https://docs.beam.cloud/introduction) makes it easy to run code on GPUs, deploy scalable web APIs,
+> schedule cron jobs, and run massively parallel workloads — without managing any infrastructure.


## Installation and Setup

@@ -9,19 +10,19 @@ It is broken into two parts: installation and setup, and then references to spec
- Install the Beam CLI with `curl https://raw.githubusercontent.com/slai-labs/get-beam/main/get-beam.sh -sSfL | sh`
- Register API keys with `beam configure`
- Set environment variables (`BEAM_CLIENT_ID`) and (`BEAM_CLIENT_SECRET`)
-- Install the Beam SDK `pip install beam-sdk`
+- Install the Beam SDK:
+```bash
+pip install beam-sdk
+```

-## Wrappers
+## LLM

-### LLM

There exists a Beam LLM wrapper, which you can access with

```python
from langchain.llms.beam import Beam
```

-## Define your Beam app.
+### Example of the Beam app

This is the environment you'll be developing against once you start the app.
It's also used to define the maximum response length from the model.
@@ -44,7 +45,7 @@ llm = Beam(model_name="gpt2",
verbose=False)
```

-## Deploy your Beam app
+### Deploy the Beam app

Once defined, you can deploy your Beam app by calling your model's `_deploy()` method.

@@ -52,9 +53,9 @@ Once defined, you can deploy your Beam app by calling your model's `_deploy()` m
llm._deploy()
```

-## Call your Beam app
+### Call the Beam app

-Once a beam model is deployed, it can be called by callying your model's `_call()` method.
+Once a beam model is deployed, it can be called by calling your model's `_call()` method.
This returns the GPT2 text response to your prompt.

```python
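# The diff hunk is truncated here. As a hedged sketch (not part of the commit),
# calling the deployed app might look like this, reusing the `llm` defined above:
response = llm._call("Running machine learning on a remote GPU")
print(response)
```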
24 docs/integrations/google_vertex_ai.md Normal file
@@ -0,0 +1,24 @@
+# Google Vertex AI
+
+>[Vertex AI](https://cloud.google.com/vertex-ai/docs/start/introduction-unified-platform) is a machine learning (ML)
+> platform that lets you train and deploy ML models and AI applications.
+> `Vertex AI` combines data engineering, data science, and ML engineering workflows, enabling your teams to
+> collaborate using a common toolset.
+
+## Installation and Setup
+
+```bash
+pip install google-cloud-aiplatform
+```
+
+See the [setup instructions](../modules/models/chat/integrations/google_vertex_ai_palm.ipynb)
+
+## Chat Models
+
+See a [usage example](../modules/models/chat/integrations/google_vertex_ai_palm.ipynb)
+
+```python
+from langchain.chat_models import ChatVertexAI
+```
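A minimal usage sketch, assuming Google Cloud application-default credentials and a project with Vertex AI enabled are already configured (illustrative, not from the diff):

```python
from langchain.chat_models import ChatVertexAI
from langchain.schema import HumanMessage

# Assumes `gcloud auth application-default login` has been run and a
# GCP project with the Vertex AI API enabled is available.
chat = ChatVertexAI()
response = chat([HumanMessage(content="Summarize what Vertex AI does.")])
print(response.content)
```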
@@ -47,7 +47,7 @@ To use a the wrapper for a model hosted on Hugging Face Hub:
```python
from langchain.embeddings import HuggingFaceHubEmbeddings
```
-For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/huggingfacehub.ipynb)
+For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/huggingface_hub.ipynb)
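A hedged usage sketch follows; the `repo_id` choice and the token handling are illustrative assumptions:

```python
import os
from langchain.embeddings import HuggingFaceHubEmbeddings

# Assumes a Hugging Face Hub API token is available.
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "<your token>"

# The repo_id below is an illustrative choice of embedding model.
embeddings = HuggingFaceHubEmbeddings(repo_id="sentence-transformers/all-mpnet-base-v2")
vector = embeddings.embed_query("Hello, world!")
```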

### Tokenizer

@@ -35,7 +35,6 @@ from langchain.llms import AzureOpenAI
For a more detailed walkthrough of the `Azure` wrapper, see [this notebook](../modules/models/llms/integrations/azure_openai_example.ipynb)


-
## Text Embedding Model

```python
@@ -44,6 +43,14 @@ from langchain.embeddings import OpenAIEmbeddings
For a more detailed walkthrough of this, see [this notebook](../modules/models/text_embedding/examples/openai.ipynb)


+## Chat Model
+
+```python
+from langchain.chat_models import ChatOpenAI
+```
+For a more detailed walkthrough of this, see [this notebook](../modules/models/chat/integrations/openai.ipynb)
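A hedged usage sketch (assuming `OPENAI_API_KEY` is set; the parameters and prompt are illustrative):

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Assumes the OPENAI_API_KEY environment variable is set.
chat = ChatOpenAI(temperature=0)
response = chat([HumanMessage(content="Translate 'bonjour' to English.")])
print(response.content)
```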
+
+
## Tokenizer

There are several places you can use the `tiktoken` tokenizer. By default, it is used to count tokens
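As a hedged illustration of counting tokens with `tiktoken` directly (the model name is an assumed example, not from the diff):

```python
import tiktoken

# Count tokens the way an OpenAI model would tokenize the text;
# the model name here is an illustrative choice.
encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")
num_tokens = len(encoding.encode("How many tokens is this sentence?"))
print(num_tokens)
```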
@@ -1,19 +1,23 @@
# Prediction Guard

-This page covers how to use the Prediction Guard ecosystem within LangChain.
-It is broken into two parts: installation and setup, and then references to specific Prediction Guard wrappers.
+>[Prediction Guard](https://docs.predictionguard.com/) gives quick and easy access to state-of-the-art open and closed access LLMs, without needing to spend days and weeks figuring out all of the implementation details, managing a bunch of different API specs, and setting up the infrastructure for model deployments.

## Installation and Setup
-- Install the Python SDK with `pip install predictionguard`
+- Install the Python SDK:
+```bash
+pip install predictionguard
+```

- Get a Prediction Guard access token (as described [here](https://docs.predictionguard.com/)) and set it as an environment variable (`PREDICTIONGUARD_TOKEN`)

-## LLM Wrapper
+## LLM

There exists a Prediction Guard LLM wrapper, which you can access with
```python
from langchain.llms import PredictionGuard
```

+### Example
You can provide the name of the Prediction Guard model as an argument when initializing the LLM:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct")
@@ -24,14 +28,12 @@ You can also provide your access token directly as an argument:
pgllm = PredictionGuard(model="MPT-7B-Instruct", token="<your access token>")
```

-Finally, you can provide an "output" argument that is used to structure/ control the output of the LLM:
+Also, you can provide an "output" argument that is used to structure/control the output of the LLM:
```python
pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})
```
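Since the wrapper behaves like any other LangChain LLM, a controlled call might look like the following sketch (the prompt and the exact shape of the constrained output are assumptions):

```python
from langchain.llms import PredictionGuard

# Assumes PREDICTIONGUARD_TOKEN is set, as described above.
pgllm = PredictionGuard(model="MPT-7B-Instruct", output={"type": "boolean"})

# The wrapper is invoked like any other LangChain LLM; with the "boolean"
# output type, the response is constrained to a true/false style answer.
result = pgllm("Is the following review positive? 'I loved this product!'")
print(result)
```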

-## Example usage
-
-Basic usage of the controlled or guarded LLM wrapper:
+#### Basic usage of the controlled or guarded LLM:
```python
import os

@@ -72,7 +74,7 @@ pgllm = PredictionGuard(model="MPT-7B-Instruct",
pgllm(prompt.format(query="What kind of post is this?"))
```

-Basic LLM Chaining with the Prediction Guard wrapper:
+#### Basic LLM Chaining with Prediction Guard:
```python
import os

@@ -1,31 +1,35 @@
# PromptLayer

-This page covers how to use [PromptLayer](https://www.promptlayer.com) within LangChain.
-It is broken into two parts: installation and setup, and then references to specific PromptLayer wrappers.
+>[PromptLayer](https://docs.promptlayer.com/what-is-promptlayer/wxpF9EZkUwvdkwvVE9XEvC/how-promptlayer-works/dvgGSxNe6nB1jj8mUVbG8r)
+> is a devtool that allows you to track, manage, and share your GPT prompt engineering.
+> It acts as a middleware between your code and OpenAI's python library, recording all your API requests
+> and saving relevant metadata for easy exploration and search in the [PromptLayer](https://www.promptlayer.com) dashboard.

## Installation and Setup

If you want to work with PromptLayer:
-- Install the promptlayer python library `pip install promptlayer`
+- Install the `promptlayer` python library
+```bash
+pip install promptlayer
+```
- Create a PromptLayer account
- Create an api token and set it as an environment variable (`PROMPTLAYER_API_KEY`)

-## Wrappers

-### LLM
+## LLM

There exists a PromptLayer OpenAI LLM wrapper, which you can access with
```python
from langchain.llms import PromptLayerOpenAI
```

-To tag your requests, use the argument `pl_tags` when instanializing the LLM
+### Example
+
+To tag your requests, use the argument `pl_tags` when instantiating the LLM
```python
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(pl_tags=["langchain-requests", "chatbot"])
```

-To get the PromptLayer request id, use the argument `return_pl_id` when instanializing the LLM
+To get the PromptLayer request id, use the argument `return_pl_id` when instantiating the LLM
```python
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(return_pl_id=True)
@@ -42,8 +46,14 @@ You can use the PromptLayer request ID to add a prompt, score, or other metadata

This LLM is identical to the [OpenAI LLM](./openai.md), except that
- all your requests will be logged to your PromptLayer account
-- you can add `pl_tags` when instantializing to tag your requests on PromptLayer
-- you can add `return_pl_id` when instantializing to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).
+- you can add `pl_tags` when instantiating to tag your requests on PromptLayer
+- you can add `return_pl_id` when instantiating to return a PromptLayer request id to use [while tracking requests](https://magniv.notion.site/Track-4deee1b1f7a34c1680d085f82567dab9).

+## Chat Model
+
+```python
+from langchain.chat_models import PromptLayerChatOpenAI
+```
+
+See a [usage example](../modules/models/chat/integrations/promptlayer_chatopenai.ipynb).

-PromptLayer also provides native wrappers for [`PromptLayerChatOpenAI`](../modules/models/chat/integrations/promptlayer_chatopenai.ipynb) and `PromptLayerOpenAIChat`
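As a hedged sketch of using the request id mentioned above (the `generation_info` key and the `promptlayer.track.score` call follow PromptLayer's documented pattern of this era, and should be treated as assumptions):

```python
import promptlayer
from langchain.llms import PromptLayerOpenAI

# Assumes PROMPTLAYER_API_KEY and OPENAI_API_KEY are set.
llm = PromptLayerOpenAI(return_pl_id=True)
result = llm.generate(["Tell me a joke"])

# Each generation carries the PromptLayer request id when return_pl_id=True.
for generations in result.generations:
    pl_request_id = generations[0].generation_info["pl_request_id"]
    # Attach a score to the logged request in the PromptLayer dashboard.
    promptlayer.track.score(request_id=pl_request_id, score=100)
```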
22 docs/integrations/tensorflow_hub.md Normal file
@@ -0,0 +1,22 @@
+# TensorFlow Hub
+
+>[TensorFlow Hub](https://www.tensorflow.org/hub) is a repository of trained machine learning models ready for fine-tuning and deployable anywhere.
+
+>[TensorFlow Hub](https://tfhub.dev/) lets you search and discover hundreds of trained, ready-to-deploy machine learning models in one place.
+
+## Installation and Setup
+
+```bash
+pip install tensorflow-hub
+pip install tensorflow_text
+```
+
+## Text Embedding Models
+
+See a [usage example](../modules/models/text_embedding/examples/tensorflowhub.ipynb)
+
+```python
+from langchain.embeddings import TensorflowHubEmbeddings
+```
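A minimal usage sketch follows; the model URL is an assumption about the wrapper's configuration, not part of the diff:

```python
from langchain.embeddings import TensorflowHubEmbeddings

# Uses a TF Hub model for sentence embeddings; the URL below is an
# illustrative example that can be swapped for another embedding model.
embeddings = TensorflowHubEmbeddings(
    model_url="https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
)
vectors = embeddings.embed_documents(["TensorFlow Hub hosts reusable models."])
```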