community: update perplexity docstring (#30451)

This pull request makes extensive documentation updates to the
`ChatPerplexity` class in
`libs/community/langchain_community/chat_models/perplexity.py`. The
changes add detailed setup instructions, document the key initialization
arguments, and provide usage examples for the class's main
functionality.

Documentation improvements:

* Added setup instructions for installing the `openai` package and
setting the `PPLX_API_KEY` environment variable.
* Documented key initialization arguments for completion parameters and
client parameters, including `model`, `temperature`, `max_tokens`,
`streaming`, `pplx_api_key`, `request_timeout`, and `max_retries`.
* Provided examples for instantiating the `ChatPerplexity` class,
invoking it with messages, using structured output, invoking with
Perplexity-specific parameters, streaming responses, and accessing token
usage and response metadata (condensed into a single runnable sketch
below).
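
For reviewers, the documented behaviors condense into the following sketch. This is illustrative only, assuming `openai` is installed and `PPLX_API_KEY` is exported; the model name and parameter values mirror the docstring's examples:

```python
from langchain_community.chat_models import ChatPerplexity

# Completion params (model, temperature) and a client param (max_retries),
# as documented in the updated docstring.
llm = ChatPerplexity(
    model="llama-3.1-sonar-small-128k-online",
    temperature=0.7,
    max_retries=2,
)

messages = [
    ("system", "You are a chatbot."),
    ("user", "Hello!"),
]

# Perplexity-specific parameters are passed through extra_body.
response = llm.invoke(messages, extra_body={"search_recency_filter": "week"})
print(response.content)
print(response.usage_metadata)     # token usage
print(response.response_metadata)  # provider metadata

# Streaming variant.
for chunk in llm.stream(messages):
    print(chunk.content, end="")
```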
David Sánchez Sánchez 2025-03-24 20:01:02 +01:00 committed by GitHub
parent 97dec30eea
commit 3ba0d28d8e

@@ -74,21 +74,92 @@ def _create_usage_metadata(token_usage: dict) -> UsageMetadata:
class ChatPerplexity(BaseChatModel):
"""`Perplexity AI` Chat models API.
Setup:
To use, you should have the ``openai`` python package installed, and the
environment variable ``PPLX_API_KEY`` set to your API key.
        Any parameters that are valid to be passed to the openai.create call
        can be passed in, even if not explicitly saved on this class.

    Example:
        .. code-block:: bash

            pip install openai
            export PPLX_API_KEY=your_api_key

    Key init args - completion params:
        model: str
            Name of the model to use, e.g. "llama-3.1-sonar-small-128k-online".
        temperature: float
            Sampling temperature to use. Default is 0.7.
        max_tokens: Optional[int]
            Maximum number of tokens to generate.
        streaming: bool
            Whether to stream the results or not.

    Key init args - client params:
        pplx_api_key: Optional[str]
            API key for PerplexityChat API. Default is None.
        request_timeout: Optional[Union[float, Tuple[float, float]]]
            Timeout for requests to PerplexityChat completion API. Default is None.
        max_retries: int
            Maximum number of retries to make when generating.

    See full list of supported init args and their descriptions in the params section.

    Instantiate:
        .. code-block:: python

            from langchain_community.chat_models import ChatPerplexity

            llm = ChatPerplexity(
model="llama-3.1-sonar-small-128k-online",
temperature=0.7,
)
"""

    Invoke:
        .. code-block:: python

            messages = [
                ("system", "You are a chatbot."),
                ("user", "Hello!"),
            ]
            llm.invoke(messages)

    Invoke with structured output:
        .. code-block:: python

            from pydantic import BaseModel

            class StructuredOutput(BaseModel):
                role: str
                content: str

            structured_llm = llm.with_structured_output(StructuredOutput)
            structured_llm.invoke(messages)

    Invoke with perplexity-specific params:
        .. code-block:: python

            llm.invoke(messages, extra_body={"search_recency_filter": "week"})

    Stream:
        .. code-block:: python

            for chunk in llm.stream(messages):
                print(chunk.content)

    Token usage:
        .. code-block:: python

            response = llm.invoke(messages)
            response.usage_metadata

    Response metadata:
        .. code-block:: python

            response = llm.invoke(messages)
            response.response_metadata

    """  # noqa: E501

    client: Any = None  #: :meta private:
    model: str = "llama-3.1-sonar-small-128k-online"