Mirror of https://github.com/hwchase17/langchain.git (synced 2025-05-30 19:49:09 +00:00)
update(llms/TGI): Allow None as temperature value (#10212)
Text Generation Inference's client permits the use of a None temperature, as seen
[here](033230ae66/clients/python/text_generation/client.py#L71C9-L71C20).
While I haven't dived into TGI's server code and don't know the implications of
using None as a temperature setting, I think we should grant users the option to
pass None as the temperature parameter to TGI.
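As a rough illustration of what this change enables (not part of the original PR), here is a minimal sketch that constructs the wrapper with `temperature=None`. The `langchain.llms` import path reflects the layout at the time of this commit, and the server URL is a placeholder; both are assumptions.

```python
# Minimal sketch (assumptions: HuggingFaceTextGenInference is importable from
# langchain.llms at the time of this commit; the server URL is a placeholder).
from langchain.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080/",  # placeholder TGI endpoint
    max_new_tokens=128,
    temperature=None,  # accepted once the field is Optional[float]; the old float-typed field rejected None
)

# The None temperature is forwarded to text_generation's client, whose
# generate() signature also declares temperature as Optional[float].
print(llm("What is Deep Learning?"))
```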
parent be152b6a56
commit 8b95dabfe3
```diff
@@ -65,7 +65,7 @@ class HuggingFaceTextGenInference(LLM):
     typical_p: Optional[float] = 0.95
     """Typical Decoding mass. See [Typical Decoding for Natural Language
     Generation](https://arxiv.org/abs/2202.00666) for more information."""
-    temperature: float = 0.8
+    temperature: Optional[float] = 0.8
     """The value used to module the logits distribution."""
     repetition_penalty: Optional[float] = None
     """The parameter for repetition penalty. 1.0 means no penalty.
```