langchain/libs/core/tests/unit_tests/messages
CastaChick 7d13a2f958
core[patch]: add option to specify the chunk separator in merge_message_runs (#24783)
**Description:**
An LLM stops generating text, even mid-sentence, when `finish_reason` is `length` (OpenAI) or `stop_reason` is `max_tokens` (Anthropic). To obtain longer outputs, we can call the message generation API multiple times and merge the results, circumventing the per-call output token limit. When stitching such continuations together, the line breaks that `merge_message_runs` forces between chunks can be unwanted, so this PR adds an option to specify the chunk separator.
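
The merging behavior can be sketched in plain Python. The `merge_runs` helper below is a hypothetical stand-in for `merge_message_runs` (which operates on `BaseMessage` objects); messages are modeled as simple `(role, content)` tuples:

```python
# Minimal sketch of run-merging with a configurable separator.
# `merge_runs` is a hypothetical stand-in for merge_message_runs;
# messages are modeled as (role, content) tuples for illustration.

def merge_runs(messages, chunk_separator="\n"):
    """Merge consecutive messages with the same role into one,
    joining their contents with `chunk_separator`."""
    merged = []
    for role, content in messages:
        if merged and merged[-1][0] == role:
            prev_role, prev_content = merged[-1]
            merged[-1] = (prev_role, prev_content + chunk_separator + content)
        else:
            merged.append((role, content))
    return merged

msgs = [("ai", "first chunk"), ("ai", "second chunk"), ("human", "ok")]
# Default: consecutive same-role chunks are joined with a newline.
print(merge_runs(msgs))
# → [('ai', 'first chunk\nsecond chunk'), ('human', 'ok')]
# An empty separator concatenates the chunks seamlessly, which is
# the use case this PR enables for continuation-style generation.
print(merge_runs(msgs, chunk_separator=""))
# → [('ai', 'first chunksecond chunk'), ('human', 'ok')]
```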

**Issue:**
No corresponding issues.

**Dependencies:**
No dependencies required.

**Twitter handle:**
@hanama_chem
https://x.com/hanama_chem

---------

Co-authored-by: Bagatur <22008038+baskaryan@users.noreply.github.com>
Co-authored-by: Bagatur <baskaryan@gmail.com>
2024-08-22 19:46:25 +00:00
__init__.py BUG: more core fixes (#13665) 2023-11-21 15:15:48 -08:00
test_ai.py core[patch]: fix ToolCall "type" when streaming (#24218) 2024-07-13 08:59:03 -07:00
test_imports.py core: add RemoveMessage (#23636) 2024-06-28 14:40:02 -07:00
test_utils.py core[patch]: add option to specify the chunk separator in merge_message_runs (#24783) 2024-08-22 19:46:25 +00:00