Small bug fixes (#23353)

Small bug fixes addressing review comments

---------

Signed-off-by: Joffref <mariusjoffre@gmail.com>
Signed-off-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: Eugene Yurtsev <eyurtsev@gmail.com>
Co-authored-by: Baskar Gopinath <73015364+baskargopinath@users.noreply.github.com>
Co-authored-by: Chester Curme <chester.curme@gmail.com>
Co-authored-by: Mathis Joffre <51022808+Joffref@users.noreply.github.com>
Co-authored-by: Baur <baur.krykpayev@gmail.com>
Co-authored-by: Nuradil <nuradil.maksut@icloud.com>
Co-authored-by: Nuradil <133880216+yaksh0nti@users.noreply.github.com>
Co-authored-by: Jacob Lee <jacoblee93@gmail.com>
Co-authored-by: Rave Harpaz <rave.harpaz@oracle.com>
Co-authored-by: RHARPAZ <RHARPAZ@RHARPAZ-5750.us.oracle.com>
Co-authored-by: Arthur Cheng <arthur.cheng@oracle.com>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: RUO <61719257+comsa33@users.noreply.github.com>
Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
Co-authored-by: Luis Rueda <userlerueda@gmail.com>
Co-authored-by: Jib <Jibzade@gmail.com>
Co-authored-by: Eugene Yurtsev <eugene@langchain.dev>
Co-authored-by: S M Zia Ur Rashid <smziaurrashid@gmail.com>
Co-authored-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
Co-authored-by: yuncliu <lyc1990@qq.com>
Co-authored-by: wenngong <76683249+wenngong@users.noreply.github.com>
Co-authored-by: gongwn1 <gongwn1@lenovo.com>
Co-authored-by: Mirna Wong <89008547+mirnawong1@users.noreply.github.com>
Co-authored-by: Rahul Triptahi <rahul.psit.ec@gmail.com>
Co-authored-by: Rahul Tripathi <rauhl.psit.ec@gmail.com>
Co-authored-by: maang-h <55082429+maang-h@users.noreply.github.com>
Co-authored-by: asafg <asafg@ai21.com>
Co-authored-by: Asaf Joseph Gardin <39553475+Josephasafg@users.noreply.github.com>
Commit 16a293cc3a by joshc-ai21 on 2024-06-27 20:58:22 +03:00, committed via GitHub (parent 9308bf32e5).
5 changed files with 62 additions and 30 deletions

View File

@@ -18,7 +18,9 @@
"# ChatAI21\n",
"\n",
"This notebook covers how to get started with AI21 chat models.\n",
"\n",
"Note that different chat models support different parameters. See the ",
"[AI21 documentation](https://docs.ai21.com/reference) to learn more about the parameters in your chosen model.\n",
"[See all AI21's LangChain components.](https://pypi.org/project/langchain-ai21/) \n",
"## Installation"
]
},
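For orientation, here is a minimal sketch of the getting-started flow this notebook cell describes. The model name is taken from the README change later in this PR, and the temperature value is illustrative, not part of the diff:

```python
import os

from langchain_ai21 import ChatAI21
from langchain_core.messages import HumanMessage

# Assumes AI21_API_KEY is already set; see the environment-setup cell below.
assert "AI21_API_KEY" in os.environ

# Different models support different parameters; check the AI21 reference docs.
chat = ChatAI21(model="jamba-instruct-preview", temperature=0.7)
response = chat.invoke([HumanMessage(content="Hello from AI21!")])
print(response.content)
```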
@@ -44,7 +46,8 @@
"source": [
"## Environment Setup\n",
"\n",
"We'll need to get a [AI21 API key](https://docs.ai21.com/) and set the `AI21_API_KEY` environment variable:\n"
"We'll need to get an [AI21 API key](https://docs.ai21.com/) and set the ",
"`AI21_API_KEY` environment variable:\n"
]
},
{
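A common way to satisfy that setup step in a notebook is to prompt for the key when the environment does not already provide it; this snippet is a sketch of that idiom, not part of the diff:

```python
import getpass
import os

# Set AI21_API_KEY interactively if it isn't already exported.
if "AI21_API_KEY" not in os.environ:
    os.environ["AI21_API_KEY"] = getpass.getpass("AI21 API key: ")
```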

View File

@@ -17,7 +17,9 @@
"source": [
"# AI21LLM\n",
"\n",
"This example goes over how to use LangChain to interact with `AI21` models.\n",
"This example goes over how to use LangChain to interact with `AI21` Jurassic models. To use the Jamba model, use the [ChatAI21 object](https://python.langchain.com/v0.2/docs/integrations/chat/ai21/) instead.\n",
"\n",
"[See a full list of AI21 models and tools on LangChain.](https://pypi.org/project/langchain-ai21/)\n",
"\n",
"## Installation"
]
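For contrast with ChatAI21, a minimal AI21LLM call against a Jurassic model might look like the following; the model name is the one used elsewhere in this PR and should be treated as illustrative:

```python
from langchain_ai21 import AI21LLM

# AI21LLM covers only the older Jurassic completion models;
# for Jamba, use ChatAI21 as the notebook text recommends.
llm = AI21LLM(model="j2-ultra")
print(llm.invoke("Tell me a fun fact about whales."))
```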

View File

@@ -1,6 +1,6 @@
# langchain-ai21
This package contains the LangChain integrations for [AI21](https://docs.ai21.com/) through their [AI21](https://pypi.org/project/ai21/) SDK.
This package contains the LangChain integrations for [AI21](https://docs.ai21.com/) models and tools.
## Installation and Setup
@@ -13,9 +13,10 @@ pip install langchain-ai21
## Chat Models
This package contains the `ChatAI21` class, which is the recommended way to interface with AI21 Chat models.
This package contains the `ChatAI21` class, which is the recommended way to interface with AI21 chat models, including Jamba-Instruct
and any Jurassic chat models.
To use, install the requirements, and configure your environment.
To use, install the requirements and configure your environment.
```bash
export AI21_API_KEY=your-api-key
@@ -27,7 +28,7 @@ Then initialize
from langchain_core.messages import HumanMessage
from langchain_ai21.chat_models import ChatAI21
chat = ChatAI21(model="jamab-instruct")
chat = ChatAI21(model="jamba-instruct-preview")
messages = [HumanMessage(content="Hello from AI21")]
chat.invoke(messages)
```
@@ -35,10 +36,12 @@ chat.invoke(messages)
For a list of the supported models, see [this page](https://docs.ai21.com/reference/python-sdk#chat).
## LLMs
You can use AI21's generative AI models as Langchain LLMs:
You can use AI21's Jurassic generative AI models as LangChain LLMs.
To use the newer Jamba model, use the [ChatAI21 chat model](#chat-models), which
supports single-turn instruction/question answering capabilities.
```python
from langchain.prompts import PromptTemplate
from langchain_core.prompts import PromptTemplate
from langchain_ai21 import AI21LLM
llm = AI21LLM(model="j2-ultra")
@@ -56,7 +59,7 @@ print(chain.invoke({"question": question}))
## Embeddings
You can use AI21's embeddings models as:
You can use AI21's [embeddings model](https://docs.ai21.com/reference/embeddings-ref) as shown here:
### Query
@@ -76,12 +79,12 @@ embeddings = AI21Embeddings()
embeddings.embed_documents(["Hello! This is document 1", "And this is document 2!"])
```
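The Query variant elided by the hunk above follows the standard LangChain embeddings interface; a sketch, assuming the usual `embed_query` method:

```python
from langchain_ai21 import AI21Embeddings

embeddings = AI21Embeddings()

# embed_query returns a single vector, suitable for similarity search
# against vectors produced by embed_documents shown above.
query_vector = embeddings.embed_query("Which document mentions hello?")
print(len(query_vector))
```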
## Task Specific Models
## Task-Specific Models
### Contextual Answers
You can use AI21's contextual answers model to receives text or document, serving as a context,
and a question and returns an answer based entirely on this context.
You can use AI21's [contextual answers model](https://docs.ai21.com/reference/contextual-answers-ref) to parse
given text and answer a question based entirely on the provided information.
This means that if the answer to your question is not in the document,
the model will indicate it (instead of providing a false answer).
@@ -91,7 +94,7 @@ from langchain_ai21 import AI21ContextualAnswers
tsm = AI21ContextualAnswers()
response = tsm.invoke(input={"context": "Your context", "question": "Your question"})
response = tsm.invoke(input={"context": "Lots of information here", "question": "Your question about the context"})
```
You can also use it with chains, output parsers, and vector DBs:
```python
@@ -110,8 +113,8 @@ response = chain.invoke(
### Semantic Text Splitter
You can use AI21's semantic text splitter to split a text into segments.
Instead of merely using punctuation and newlines to divide the text, it identifies distinct topics that will work well together and will form a coherent piece of text.
You can use AI21's semantic [text segmentation model](https://docs.ai21.com/reference/text-segmentation-ref) to split a text into segments by topic.
Text is split at each point where the topic changes.
For a list of examples, see [this page](https://github.com/langchain-ai/langchain/blob/master/docs/docs/modules/data_connection/document_transformers/semantic_text_splitter.ipynb).
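A minimal sketch of that splitter in use, assuming the `AI21SemanticTextSplitter` class name from the langchain-ai21 package; the sample text is illustrative:

```python
from langchain_ai21 import AI21SemanticTextSplitter

splitter = AI21SemanticTextSplitter()

text = (
    "Whales are fully aquatic marine mammals. "
    "On an unrelated note, compilers translate source code into machine code."
)

# Segments are split where the topic changes, not at punctuation or newlines.
for segment in splitter.split_text(text):
    print(segment)
```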

View File

@@ -19,7 +19,10 @@ from langchain_ai21.chat.chat_factory import create_chat_adapter
class ChatAI21(BaseChatModel, AI21Base):
"""ChatAI21 chat model.
"""ChatAI21 chat model. Different model types support different parameters and
different parameter values. Please read the [AI21 reference documentation]
(https://docs.ai21.com/reference) for your model to understand which parameters
are available.
Example:
.. code-block:: python
@@ -27,7 +30,10 @@ class ChatAI21(BaseChatModel, AI21Base):
from langchain_ai21 import ChatAI21
model = ChatAI21()
model = ChatAI21(
# defaults to os.environ.get("AI21_API_KEY")
api_key="my_api_key"
)
"""
model: str
@@ -42,7 +48,8 @@ class ChatAI21(BaseChatModel, AI21Base):
"""The maximum number of tokens to generate for each response."""
min_tokens: int = 0
"""The minimum number of tokens to generate for each response."""
"""The minimum number of tokens to generate for each response.
_Not supported for all models._"""
temperature: float = 0.7
"""A value controlling the "creativity" of the model's responses."""
@@ -51,17 +58,20 @@ class ChatAI21(BaseChatModel, AI21Base):
"""A value controlling the diversity of the model's responses."""
top_k_return: int = 0
"""The number of top-scoring tokens to consider for each generation step."""
"""The number of top-scoring tokens to consider for each generation step.
_Not supported for all models._"""
frequency_penalty: Optional[Any] = None
"""A penalty applied to tokens that are frequently generated."""
"""A penalty applied to tokens that are frequently generated.
_Not supported for all models._"""
presence_penalty: Optional[Any] = None
""" A penalty applied to tokens that are already present in the prompt."""
""" A penalty applied to tokens that are already present in the prompt.
_Not supported for all models._"""
count_penalty: Optional[Any] = None
"""A penalty applied to tokens based on their frequency
in the generated responses."""
in the generated responses. _Not supported for all models._"""
n: int = 1
"""Number of chat completions to generate for each prompt."""

View File

@@ -19,14 +19,24 @@ from langchain_ai21.ai21_base import AI21Base
class AI21LLM(BaseLLM, AI21Base):
"""AI21 large language models.
"""AI21 large language models. Different model types support different parameters
and different parameter values. Please read the [AI21 reference documentation]
(https://docs.ai21.com/reference) for your model to understand which parameters
are available.
AI21LLM supports only the older Jurassic models.
We recommend using ChatAI21 with the newest models for better results and more
features.
Example:
.. code-block:: python
from langchain_ai21 import AI21LLM
model = AI21LLM()
model = AI21LLM(
# defaults to os.environ.get("AI21_API_KEY")
api_key="my_api_key"
)
"""
model: str
@@ -40,7 +50,8 @@ class AI21LLM(BaseLLM, AI21Base):
"""The maximum number of tokens to generate for each response."""
min_tokens: int = 0
"""The minimum number of tokens to generate for each response."""
"""The minimum number of tokens to generate for each response.
_Not supported for all models._"""
temperature: float = 0.7
"""A value controlling the "creativity" of the model's responses."""
@@ -49,17 +60,20 @@ class AI21LLM(BaseLLM, AI21Base):
"""A value controlling the diversity of the model's responses."""
top_k_return: int = 0
"""The number of top-scoring tokens to consider for each generation step."""
"""The number of top-scoring tokens to consider for each generation step.
_Not supported for all models._"""
frequency_penalty: Optional[Any] = None
"""A penalty applied to tokens that are frequently generated."""
"""A penalty applied to tokens that are frequently generated.
_Not supported for all models._"""
presence_penalty: Optional[Any] = None
""" A penalty applied to tokens that are already present in the prompt."""
""" A penalty applied to tokens that are already present in the prompt.
_Not supported for all models._"""
count_penalty: Optional[Any] = None
"""A penalty applied to tokens based on their frequency
in the generated responses."""
in the generated responses. _Not supported for all models._"""
custom_model: Optional[str] = None
epoch: Optional[int] = None
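And the equivalent sketch for AI21LLM using the fields documented above; values are illustrative, `max_tokens` is assumed from the docstring context, and the penalty fields are omitted since they are not supported for all models:

```python
from langchain_ai21 import AI21LLM

# Jurassic-only; prefer ChatAI21 for newer models, per the docstring above.
llm = AI21LLM(
    model="j2-ultra",
    temperature=0.7,  # "creativity" of responses
    max_tokens=128,   # cap on generated tokens per response
    min_tokens=0,     # not supported for all models
)
print(llm.invoke("Write one sentence about the ocean."))
```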