Ensures proper reStructuredText formatting by adding the required blank
line before closing docstring quotes, which resolves the "Block quote
ends without a blank line; unexpected unindent" warning.
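For illustration, a minimal sketch of the docstring pattern this fixes (the function and its contents are hypothetical):
```python
def load_model(ttl: int) -> None:
    """Load a model with an eviction timeout.

    Example:

        load_model(ttl=5)

    """
    # The blank line before the closing quotes is what prevents the
    # "Block quote ends without a blank line" warning from Sphinx.
```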
This PR addresses the common issue where users struggle to pass custom
parameters to OpenAI-compatible APIs like LM Studio, vLLM, and others.
The problem occurs when users reach for `model_kwargs` to pass custom
parameters, which raises a `TypeError` at request time.
## Problem
Users attempting to pass custom parameters (like LM Studio's `ttl`
parameter) were getting errors:
```python
from langchain_openai import ChatOpenAI

# ❌ This approach fails
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    model="mlx-community/QwQ-32B-4bit",
    model_kwargs={"ttl": 5},  # Causes TypeError: unexpected keyword argument 'ttl'
)
```
## Solution
The `extra_body` parameter is the correct way to pass custom parameters
to OpenAI-compatible APIs:
```python
# ✅ This approach works correctly
llm = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    model="mlx-community/QwQ-32B-4bit",
    extra_body={"ttl": 5},  # Custom parameters go in extra_body
)
```
## Changes Made
1. **Enhanced Documentation**: Updated the `extra_body` parameter
docstring with comprehensive examples for LM Studio, vLLM, and other
providers
2. **Added Documentation Section**: Created a new "OpenAI-compatible
APIs" section in the main class docstring with practical examples
3. **Unit Tests**: Added tests to verify `extra_body` functionality
works correctly (a sketch of the first follows this list):
   - `test_extra_body_parameter()`: Verifies custom parameters are included
   in the request payload
   - `test_extra_body_with_model_kwargs()`: Ensures `extra_body` and
   `model_kwargs` work together
4. **Clear Guidance**: Documented when to use `extra_body` vs
`model_kwargs`
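A minimal sketch of what the first test asserts (the real test's internals may differ):
```python
from langchain_openai import ChatOpenAI


def test_extra_body_parameter() -> None:
    llm = ChatOpenAI(
        base_url="http://localhost:1234/v1",
        api_key="lm-studio",
        model="mlx-community/QwQ-32B-4bit",
        extra_body={"ttl": 5},
    )
    # The custom parameter is retained on the model and sent in the
    # request payload rather than passed as a client keyword argument.
    assert llm.extra_body == {"ttl": 5}
```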
## Examples Added
**LM Studio with TTL (auto-eviction):**
```python
ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
    model="mlx-community/QwQ-32B-4bit",
    extra_body={"ttl": 300},  # Auto-evict after 5 minutes
)
```
**vLLM with custom sampling:**
```python
ChatOpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
    model="meta-llama/Llama-2-7b-chat-hf",
    extra_body={
        "use_beam_search": True,
        "best_of": 4,
    },
)
```
## Why This Works
- `model_kwargs` entries are passed as keyword arguments to the OpenAI
client's `create()` method, so non-standard parameters raise a `TypeError`
- `extra_body` entries are serialized into the HTTP request body, which
is exactly where OpenAI-compatible APIs expect custom parameters
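For reference, the same escape hatch exists on the underlying `openai` Python SDK, which is what makes this work end to end. A minimal sketch (the endpoint, key, and `ttl` value are illustrative):
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# extra_body is merged into the JSON request body by the SDK, so
# provider-specific fields reach the server unchanged.
response = client.chat.completions.create(
    model="mlx-community/QwQ-32B-4bit",
    messages=[{"role": "user", "content": "Hello"}],
    extra_body={"ttl": 300},
)
```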
Fixes #32115.
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: mdrxy <61371264+mdrxy@users.noreply.github.com>
Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
OpenAI changed their API to require the `partial_images` parameter when
using image generation + streaming.
As described in https://github.com/langchain-ai/langchain/pull/31424, we
are ignoring partial images. Here, we accept the `partial_images`
parameter (as required by OpenAI), but emit a warning and continue to
ignore partial images.
Partial images are still not surfaced during generation. Before doing
that, I'd like to figure out how to specify the aggregation logic
without requiring changes in core.
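For context, a hedged sketch of the affected streaming path (the model name and tool spec are illustrative, and the exact tool-binding shape is an assumption):
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1", use_responses_api=True)
llm_with_tools = llm.bind_tools(
    [{"type": "image_generation", "partial_images": 1}]
)

# OpenAI now requires partial_images when streaming with the
# image_generation tool; LangChain forwards it, emits a warning,
# and still drops the partial frames.
for chunk in llm_with_tools.stream("Draw a small red square."):
    pass  # only the final image arrives in the aggregated output
```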
---------
Co-authored-by: Chester Curme <chester.curme@gmail.com>