mirror of
https://github.com/hwchase17/langchain.git
synced 2025-07-20 11:31:58 +00:00
Fix output final text for HuggingFaceTextGenInference when streaming (#6211)
The LLM integration [HuggingFaceTextGenInference](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_text_gen_inference.py) already supports streaming. However, when streaming is enabled, it always returns an empty string as the final output once generation finishes, because `text` is initialized to an empty string and never updated. This PR fixes the collection of the final output text by concatenating each new token onto `text`.
This commit is contained in:
parent
b3bccabc66
commit
ea6a5b03e0
@@ -169,4 +169,5 @@ class HuggingFaceTextGenInference(LLM):
             if not token.special:
                 if text_callback:
                     text_callback(token.text)
+                text += token.text
         return text
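The effect of the added line can be illustrated with a minimal, self-contained sketch of the streaming loop. The `Token` dataclass and `stream_text` function below are hypothetical stand-ins for the tokens yielded by the text-generation-inference client and the LLM's streaming branch; they are not part of the langchain API.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional


@dataclass
class Token:
    # Hypothetical stand-in for a streamed token: its text and whether it
    # is a special token (e.g. an end-of-sequence marker) to be skipped.
    text: str
    special: bool = False


def stream_text(
    tokens: Iterable[Token],
    text_callback: Optional[Callable[[str], None]] = None,
) -> str:
    """Accumulate streamed tokens into the final output text."""
    text = ""
    for token in tokens:
        if not token.special:
            if text_callback:
                text_callback(token.text)
            # Before the fix, this line was missing, so `text` stayed ""
            # and the function returned an empty string after streaming.
            text += token.text
    return text
```

For example, `stream_text([Token("Hello"), Token(" world"), Token("</s>", special=True)])` returns `"Hello world"`, while the pre-fix version would have streamed the tokens to the callback but returned `""`.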