This PR addresses a common issue where users struggle to pass custom parameters to OpenAI-compatible APIs like LM Studio and vLLM. The problem occurs when users try to use `model_kwargs` for custom parameters, which causes API errors.

## Problem

Users attempting to pass custom parameters (like LM Studio's `ttl` parameter) were getting errors:

```python
# ❌ This approach fails
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    model="mlx-community/QwQ-32B-4bit",
    model_kwargs={"ttl": 5},  # TypeError: unexpected keyword argument 'ttl'
)
```

## Solution

The `extra_body` parameter is the correct way to pass custom parameters to OpenAI-compatible APIs:

```python
# ✅ This approach works correctly
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    model="mlx-community/QwQ-32B-4bit",
    extra_body={"ttl": 5},  # Custom parameters go in extra_body
)
```

## Changes Made

1. **Enhanced Documentation**: Updated the `extra_body` parameter docstring with comprehensive examples for LM Studio, vLLM, and other providers
2. **Added Documentation Section**: Created a new "OpenAI-compatible APIs" section in the main class docstring with practical examples
3. **Unit Tests**: Added tests to verify `extra_body` functionality works correctly (sketched after this description):
   - `test_extra_body_parameter()`: verifies custom parameters are included in the request payload
   - `test_extra_body_with_model_kwargs()`: ensures `extra_body` and `model_kwargs` work together
4. **Clear Guidance**: Documented when to use `extra_body` vs. `model_kwargs`

## Examples Added

**LM Studio with TTL (auto-eviction):**

```python
ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
    model="mlx-community/QwQ-32B-4bit",
    extra_body={"ttl": 300},  # Auto-evict after 5 minutes
)
```

**vLLM with custom sampling:**

```python
ChatOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
    model="meta-llama/Llama-2-7b-chat-hf",
    extra_body={
        "use_beam_search": True,
        "best_of": 4,
    },
)
```

## Why This Works

- `model_kwargs` parameters are passed directly to the OpenAI client's `create()` method, which raises errors for non-standard parameters
- `extra_body` parameters are merged into the HTTP request body, which is exactly where OpenAI-compatible APIs expect custom parameters

Fixes #32115.

---

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
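For reference, a rough sketch of what the added tests might look like (this is not the PR's actual test code; it assumes `extra_body` and `model_kwargs` are exposed as plain fields on `ChatOpenAI`):

```python
from langchain_openai import ChatOpenAI


def test_extra_body_parameter() -> None:
    # Provider-specific params should be captured on extra_body, not
    # forwarded as top-level kwargs to the client's create() call.
    llm = ChatOpenAI(
        base_url="http://localhost:1234/v1",
        api_key="lm-studio",
        model="mlx-community/QwQ-32B-4bit",
        extra_body={"ttl": 5},
    )
    assert llm.extra_body == {"ttl": 5}


def test_extra_body_with_model_kwargs() -> None:
    # extra_body and model_kwargs should coexist: standard OpenAI params
    # stay in model_kwargs while custom ones ride along in extra_body.
    llm = ChatOpenAI(
        base_url="http://localhost:1234/v1",
        api_key="lm-studio",
        model="mlx-community/QwQ-32B-4bit",
        model_kwargs={"logit_bias": {}},
        extra_body={"ttl": 5},
    )
    assert llm.extra_body == {"ttl": 5}
    assert llm.model_kwargs == {"logit_bias": {}}
```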
# langchain-perplexity
This package contains the LangChain integration with Perplexity.
## Installation
```bash
pip install -U langchain-perplexity
```
You should [configure your Perplexity credentials](https://docs.perplexity.ai/guides/getting-started) and then set the `PPLX_API_KEY` environment variable.
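For example, in a POSIX shell (the key value below is a placeholder):

```bash
export PPLX_API_KEY="your-api-key"
```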
## Usage
This package contains the `ChatPerplexity` class, which is the recommended way to interface with Perplexity chat models.
```python
import getpass
import os

if not os.environ.get("PPLX_API_KEY"):
    os.environ["PPLX_API_KEY"] = getpass.getpass("Enter API key for Perplexity: ")

from langchain.chat_models import init_chat_model

llm = init_chat_model("llama-3.1-sonar-small-128k-online", model_provider="perplexity")
llm.invoke("Hello, world!")
```
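You can also construct `ChatPerplexity` directly. A minimal sketch, assuming `PPLX_API_KEY` is already set and using the same model as above:

```python
from langchain_perplexity import ChatPerplexity

# Reads PPLX_API_KEY from the environment
llm = ChatPerplexity(model="llama-3.1-sonar-small-128k-online", temperature=0)
llm.invoke("Hello, world!")
```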