docs: integrations reference updates 6 (#25188)

Added missing provider pages. Added missing references to the integration
components.
Leonid Ganeline 2024-08-29 09:17:41 -07:00 committed by GitHub
parent a8af396a82
commit 08c9c683a7
8 changed files with 162 additions and 36 deletions

View File

@ -0,0 +1,18 @@
# Dappier AI
> [Dappier](https://platform.dappier.com/) is a platform enabling access to diverse,
> real-time data models. Enhance your AI applications with `Dappier`'s pre-trained,
> LLM-ready data models and ensure accurate, current responses with reduced inaccuracies.
## Installation and Setup
To use one of the `Dappier AI` Data Models, you will need an API key. Visit
[Dappier Platform](https://platform.dappier.com/) to log in and create an API key in your profile.
## Chat models
See a [usage example](/docs/integrations/chat/dappier).
```python
from langchain_community.chat_models import ChatDappierAI
```
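Below is a minimal sketch of calling the chat model. It assumes your key is exported as the `DAPPIER_API_KEY` environment variable; depending on your setup you may also need to pass an endpoint or data-model ID, so treat the linked usage example as canonical.
```python
from langchain_core.messages import HumanMessage

from langchain_community.chat_models import ChatDappierAI

# Assumes DAPPIER_API_KEY is set in the environment; depending on your
# account you may also need to pass an endpoint and data-model ID.
chat = ChatDappierAI()
response = chat.invoke([HumanMessage(content="What is the latest AI news?")])
print(response.content)
```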

View File

@ -0,0 +1,17 @@
# Everly AI
> [Everly AI](https://everlyai.xyz/) allows you to run your ML models at scale in the cloud.
> It also provides API access to [several LLMs](https://everlyai.xyz/).
## Installation and Setup
To use `Everly AI`, you will need an API key. Visit
[Everly AI](https://everlyai.xyz/) to create an API key in your profile.
## Chat models
See a [usage example](/docs/integrations/chat/everlyai).
```python
from langchain_community.chat_models import ChatEverlyAI
```
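A minimal sketch, assuming your key is exported as `EVERLYAI_API_KEY`; the model name below is illustrative, not a required default.
```python
from langchain_core.messages import HumanMessage

from langchain_community.chat_models import ChatEverlyAI

# Assumes EVERLYAI_API_KEY is set; the model name is illustrative.
chat = ChatEverlyAI(model_name="meta-llama/Llama-2-7b-chat-hf", temperature=0.3)
print(chat.invoke([HumanMessage(content="Say hello in one sentence.")]).content)
```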

View File

@ -1,7 +1,9 @@
# Fireworks
# Fireworks AI
>[Fireworks AI](https://fireworks.ai) is a generative AI inference platform to run and
> customize models with industry-leading speed and production-readiness.
This page covers how to use [Fireworks](https://fireworks.ai/) models within
LangChain.
## Installation and setup
@ -14,7 +16,7 @@ Langchain.
- Get a Fireworks API key by signing up at [fireworks.ai](https://fireworks.ai).
- Authenticate by setting the FIREWORKS_API_KEY environment variable.
## Authentication
### Authentication
There are two ways to authenticate using your Fireworks API key:
@ -29,20 +31,26 @@ There are two ways to authenticate using your Fireworks API key:
```python
llm = Fireworks(api_key="<KEY>")
```
## Chat models
## Using the Fireworks LLM module
See a [usage example](/docs/integrations/chat/fireworks).
Fireworks integrates with LangChain through the LLM module. In this example, we
will work with the mixtral-8x7b-instruct model.
```python
from langchain_fireworks import ChatFireworks
```
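A minimal chat sketch, assuming `FIREWORKS_API_KEY` is set in the environment; the model ID mirrors the mixtral-8x7b-instruct example used elsewhere on this page.
```python
from langchain_fireworks import ChatFireworks

# Assumes FIREWORKS_API_KEY is set in the environment.
chat = ChatFireworks(model="accounts/fireworks/models/mixtral-8x7b-instruct")
print(chat.invoke("Name three sports.").content)
```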
## LLMs
See a [usage example](/docs/integrations/llms/fireworks).
```python
from langchain_fireworks import Fireworks
llm = Fireworks(
    api_key="<KEY>",
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    max_tokens=256,
)
llm.invoke("Name 3 sports.")
```
For a more detailed walkthrough, see [here](/docs/integrations/llms/fireworks).
## Embedding models
See a [usage example](/docs/integrations/text_embedding/fireworks).
```python
from langchain_fireworks import FireworksEmbeddings
```
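A short sketch of embedding a query, again assuming `FIREWORKS_API_KEY` is set; the model name is an assumption.
```python
from langchain_fireworks import FireworksEmbeddings

# Assumes FIREWORKS_API_KEY is set; the model name is an assumption.
embeddings = FireworksEmbeddings(model="nomic-ai/nomic-embed-text-v1.5")
vector = embeddings.embed_query("Hello, world!")
print(len(vector))  # dimensionality of the embedding
```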

View File

@ -1,16 +1,19 @@
# ForefrontAI
# Forefront AI
> [Forefront AI](https://forefront.ai/) is a platform enabling you to
> fine-tune and run inference on open-source text generation models.
This page covers how to use the ForefrontAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific ForefrontAI wrappers.
## Installation and Setup
- Get a ForefrontAI API key and set it as an environment variable (`FOREFRONTAI_API_KEY`)
## Wrappers
Get a `ForefrontAI` API key by
visiting [this page](https://accounts.forefront.ai/sign-in?redirect_url=https%3A%2F%2Fforefront.ai%2Fapp%2Fapi-keys)
and set it as an environment variable (`FOREFRONTAI_API_KEY`).
### LLM
## LLMs
See a [usage example](/docs/integrations/llms/forefrontai).
There exists a ForefrontAI LLM wrapper, which you can access with:
```python
from langchain_community.llms import ForefrontAI
```
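A minimal sketch, assuming `FOREFRONTAI_API_KEY` is set and that you substitute the URL of your own deployed model for the placeholder endpoint:
```python
from langchain_community.llms import ForefrontAI

# Assumes FOREFRONTAI_API_KEY is set; replace the placeholder with the
# endpoint URL of your deployed model.
llm = ForefrontAI(endpoint_url="YOUR_ENDPOINT_URL")
print(llm.invoke("Tell me a joke about open-source models."))
```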

View File

@ -0,0 +1,31 @@
# Friendli AI
>[FriendliAI](https://friendli.ai/) enhances AI application performance and optimizes
> cost savings with scalable, efficient deployment options, tailored for high-demand AI workloads.
## Installation and setup
Install the `friendli-client` Python package.
```bash
pip install friendli-client
```
Sign in to [Friendli Suite](https://suite.friendli.ai/) to create a Personal Access Token,
and set it as the `FRIENDLI_TOKEN` environment variable.
## Chat models
See a [usage example](/docs/integrations/chat/friendli).
```python
from langchain_community.chat_models.friendli import ChatFriendli
```
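A minimal chat sketch, assuming the `FRIENDLI_TOKEN` environment variable is set; the model name is illustrative.
```python
from langchain_community.chat_models.friendli import ChatFriendli

# Assumes FRIENDLI_TOKEN is set; the model name is illustrative.
chat = ChatFriendli(model="meta-llama-3-8b-instruct")
print(chat.invoke("What is generative AI?").content)
```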
## LLMs
See a [usage example](/docs/integrations/llms/friendli).
```python
from langchain_community.llms.friendli import Friendli
```
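And the LLM counterpart, under the same `FRIENDLI_TOKEN` assumption and with an illustrative model name:
```python
from langchain_community.llms.friendli import Friendli

# Assumes FRIENDLI_TOKEN is set; the model name is illustrative.
llm = Friendli(model="mixtral-8x7b-instruct-v0-1")
print(llm.invoke("Complete this sentence: The sky is"))
```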

View File

@ -1,9 +1,13 @@
# GooseAI
This page covers how to use the GooseAI ecosystem within LangChain.
It is broken into two parts: installation and setup, and then references to specific GooseAI wrappers.
>[GooseAI](https://goose.ai) makes deploying NLP services easier and more accessible.
> `GooseAI` is a fully managed inference service delivered via API.
> With feature parity to other well-known APIs, `GooseAI` delivers a plug-and-play solution
> for serving open-source language models at the industry's best economics by simply
> changing 2 lines in your code.
## Installation and Setup
- Install the Python SDK with `pip install openai`
- Get your GooseAI API key [here](https://goose.ai/).
- Set the environment variable (`GOOSEAI_API_KEY`).
@ -13,11 +17,11 @@ import os
os.environ["GOOSEAI_API_KEY"] = "YOUR_API_KEY"
```
## Wrappers
### LLM
## LLMs
See a [usage example](/docs/integrations/llms/gooseai).
There exists a GooseAI LLM wrapper, which you can access with:
```python
from langchain_community.llms import GooseAI
```
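A minimal sketch, assuming `GOOSEAI_API_KEY` is set as shown above:
```python
from langchain_community.llms import GooseAI

# Assumes GOOSEAI_API_KEY is set in the environment.
llm = GooseAI()
print(llm.invoke("What is the capital of France?"))
```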

View File

@ -1,17 +1,20 @@
# Groq
Welcome to Groq! 🚀 At Groq, we've developed the world's first Language Processing Unit™, or LPU. The Groq LPU has a deterministic, single core streaming architecture that sets the standard for GenAI inference speed with predictable and repeatable performance for any given workload.
Beyond the architecture, our software is designed to empower developers like you with the tools you need to create innovative, powerful AI applications. With Groq as your engine, you can:
* Achieve uncompromised low latency and performance for real-time AI and HPC inferences 🔥
* Know the exact performance and compute time for any given workload 🔮
* Take advantage of our cutting-edge technology to stay ahead of the competition 💪
Want more Groq? Check out our [website](https://groq.com) for more resources and join our [Discord community](https://discord.gg/JvNsBDKeCG) to connect with our developers!
>[Groq](https://groq.com) developed the world's first Language Processing Unit™, or `LPU`.
> The `Groq LPU` has a deterministic, single core streaming architecture that sets the standard
> for GenAI inference speed with predictable and repeatable performance for any given workload.
>
>Beyond the architecture, `Groq` software is designed to empower developers like you with
> the tools you need to create innovative, powerful AI applications.
>
>With Groq as your engine, you can:
>* Achieve uncompromised low latency and performance for real-time AI and HPC inferences 🔥
>* Know the exact performance and compute time for any given workload 🔮
>* Take advantage of our cutting-edge technology to stay ahead of the competition 💪
## Installation and Setup
Install the integration package:
```bash
@ -24,5 +27,10 @@ Request an [API key](https://wow.groq.com) and set it as an environment variable
export GROQ_API_KEY=gsk_...
```
## Chat Model
## Chat models
See a [usage example](/docs/integrations/chat/groq).
```python
from langchain_groq import ChatGroq
```
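A minimal sketch, assuming `GROQ_API_KEY` is exported as above; the model name is illustrative.
```python
from langchain_groq import ChatGroq

# Assumes GROQ_API_KEY is set; the model name is illustrative.
chat = ChatGroq(model="mixtral-8x7b-32768", temperature=0)
print(chat.invoke("Explain what an LPU is in one sentence.").content)
```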

View File

@ -0,0 +1,37 @@
# LiteLLM
>[LiteLLM](https://docs.litellm.ai/docs/) is a library that simplifies calling LLMs from
> Anthropic, Azure, Hugging Face, Replicate, and other providers in a unified way.
>
>You can use `LiteLLM` through either:
>
>* [LiteLLM Proxy Server](https://docs.litellm.ai/docs/#openai-proxy) - a server to call 100+ LLMs, with load balancing and cost tracking across projects
>* [LiteLLM python SDK](https://docs.litellm.ai/docs/#basic-usage) - a Python client to call 100+ LLMs, with load balancing and cost tracking
## Installation and setup
Install the `litellm` Python package.
```bash
pip install litellm
```
## Chat models
### ChatLiteLLM
See a [usage example](/docs/integrations/chat/litellm).
```python
from langchain_community.chat_models import ChatLiteLLM
```
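A minimal sketch, assuming the underlying provider's key (here, `OPENAI_API_KEY`) is set; the model name is illustrative.
```python
from langchain_community.chat_models import ChatLiteLLM

# Assumes the relevant provider key (e.g., OPENAI_API_KEY) is set;
# the model name is illustrative.
chat = ChatLiteLLM(model="gpt-3.5-turbo")
print(chat.invoke("Summarize LiteLLM in one sentence.").content)
```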
### ChatLiteLLMRouter
You can also use the `ChatLiteLLMRouter` to route requests to different LLMs or LLM providers.
See a [usage example](/docs/integrations/chat/litellm_router).
```python
from langchain_community.chat_models import ChatLiteLLMRouter
```
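A rough sketch of routing through LiteLLM's `Router`; the deployment list, model names, and the `router`/`model_name` parameters are assumptions here, so defer to the linked usage example for the exact wiring.
```python
from litellm import Router

from langchain_community.chat_models import ChatLiteLLMRouter

# Illustrative deployment list; model names and params are assumptions.
model_list = [
    {
        "model_name": "gpt-35-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",  # provider key read from OPENAI_API_KEY
        },
    },
]
router = Router(model_list=model_list)
chat = ChatLiteLLMRouter(router=router, model_name="gpt-35-turbo")
print(chat.invoke("Route this request for me.").content)
```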