**Description:** An LLM will stop generating text, even in the middle of a sentence, if `finish_reason` is `length` (for OpenAI) or `stop_reason` is `max_tokens` (for Anthropic). To obtain longer outputs, we can call the message generation API multiple times and merge the results, circumventing the API's output token limit. The extra line breaks forced by the `merge_message_runs` function when seamlessly merging messages can be annoying, so this PR adds an option to specify the chunk separator.

**Issue:** No corresponding issues.

**Dependencies:** No dependencies required.

**Twitter handle:** @hanama_chem https://x.com/hanama_chem

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
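To illustrate the problem the separator option solves, here is a minimal, self-contained sketch of the merging behavior. It is *not* the library's implementation: the real function lives in `langchain_core.messages.utils`, and the function and parameter names below are simplified stand-ins for illustration.

```python
# Simplified stand-in for merge_message_runs: merges consecutive
# (role, content) pairs with the same role, joining contents with a
# configurable separator. Illustrative only, not the LangChain API.
def merge_runs(messages, chunk_separator="\n"):
    merged = []
    for role, content in messages:
        if merged and merged[-1][0] == role:
            # Same role as previous message: join the contents.
            merged[-1] = (role, merged[-1][1] + chunk_separator + content)
        else:
            merged.append((role, content))
    return merged

# A continuation produced by a second API call after hitting the
# output token limit mid-sentence:
chunks = [("ai", "The quick brown"), ("ai", " fox jumps over the lazy dog.")]

print(merge_runs(chunks))
# default separator inserts a line break in the middle of the sentence

print(merge_runs(chunks, chunk_separator=""))
# an empty separator joins the continuation seamlessly
```

With the default separator the merged text contains an unwanted line break inside the sentence; passing an empty separator stitches the chunks back into one fluent sentence, which is the use case described above.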