fix(core): fix parse_result in case of self.first_tool_only with multiple keys matching for JsonOutputKeyToolsParser (#32106)

* **Description:** Updated `parse_result` logic to handle cases where `self.first_tool_only` is `True` and multiple matching keys share the same function name. Instead of returning the first match prematurely, the method now prioritizes filtering results by the specified key to ensure correct selection.
* **Issue:** #32100

Co-authored-by: Mason Daugherty <github@mdrxy.com>
Co-authored-by: Mason Daugherty <mason@langchain.dev>
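A minimal sketch of the behavior this patch targets. The `extract` key name, tool-call contents, and values below are illustrative, not taken from the patch or its tests:

```python
from langchain_core.messages import AIMessage
from langchain_core.output_parsers.openai_tools import JsonOutputKeyToolsParser

# Hypothetical model output: the first tool call does not match the requested
# key, but a later one does.
msg = AIMessage(
    content="",
    tool_calls=[
        {"name": "other_tool", "args": {"x": 0}, "id": "call_1"},
        {"name": "extract", "args": {"value": 42}, "id": "call_2"},
    ],
)

parser = JsonOutputKeyToolsParser(key_name="extract", first_tool_only=True)

# With the fix, results are filtered by `key_name` *before* the first one is
# taken, so the first *matching* tool call wins instead of the first overall.
print(parser.invoke(msg))  # -> {'value': 42}
```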
commit 095f4a7c28
parent ddaba21e83
committed by GitHub
@@ -1,24 +1,23 @@
 """Output classes.

-**Output** classes are used to represent the output of a language model call
-and the output of a chat.
+Used to represent the output of a language model call and the output of a chat.

-The top container for information is the `LLMResult` object. `LLMResult` is used by
-both chat models and LLMs. This object contains the output of the language
-model and any additional information that the model provider wants to return.
+The top container for information is the `LLMResult` object. `LLMResult` is used by both
+chat models and LLMs. This object contains the output of the language model and any
+additional information that the model provider wants to return.

 When invoking models via the standard runnable methods (e.g. invoke, batch, etc.):

 - Chat models will return `AIMessage` objects.
 - LLMs will return regular text strings.

 In addition, users can access the raw output of either LLMs or chat models via
-callbacks. The on_chat_model_end and on_llm_end callbacks will return an
+callbacks. The ``on_chat_model_end`` and ``on_llm_end`` callbacks will return an
 LLMResult object containing the generated outputs and any additional information
 returned by the model provider.

-In general, if information is already available
-in the AIMessage object, it is recommended to access it from there rather than
-from the `LLMResult` object.
+In general, if information is already available in the AIMessage object, it is
+recommended to access it from there rather than from the `LLMResult` object.
 """

 from typing import TYPE_CHECKING
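For readers unfamiliar with the callback path described in this docstring, a minimal sketch of a handler that receives the raw `LLMResult` (the handler name and print logic are illustrative):

```python
from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class CaptureLLMResult(BaseCallbackHandler):
    """Illustrative handler that inspects the raw LLMResult on completion."""

    def on_llm_end(self, response: LLMResult, **kwargs) -> None:
        # response.generations[i][j] is candidate j for input prompt i.
        for prompt_generations in response.generations:
            for generation in prompt_generations:
                print(generation.text)
```

A handler like this can be attached to any standard runnable call, e.g. `model.invoke(prompt, config={"callbacks": [CaptureLLMResult()]})`.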
@@ -27,7 +27,11 @@ class ChatGeneration(Generation):
     """

     text: str = ""
-    """*SHOULD NOT BE SET DIRECTLY* The text contents of the output message."""
+    """The text contents of the output message.
+
+    .. warning::
+        SHOULD NOT BE SET DIRECTLY!
+    """
     message: BaseMessage
     """The message output by the chat model."""
     # Override type to be ChatGeneration, ignore mypy error as this is intentional
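As the reworked docstring warns, `text` is derived from the message rather than set by callers. A small sketch:

```python
from langchain_core.messages import AIMessage
from langchain_core.outputs import ChatGeneration

# `text` is populated from the message content; do not set it directly.
gen = ChatGeneration(message=AIMessage(content="Hello!"))
print(gen.text)  # -> Hello!
```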
@@ -11,7 +11,8 @@ from langchain_core.utils._merge import merge_dicts
 class Generation(Serializable):
     """A single text generation output.

-    Generation represents the response from an "old-fashioned" LLM that
+    Generation represents the response from an
+    `"old-fashioned" LLM <https://python.langchain.com/docs/concepts/text_llms/>`__ that
     generates regular text (not chat messages).

     This model is used internally by chat model and will eventually
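A minimal sketch of the class this hunk documents (the `finish_reason` entry is an illustrative provider detail, not a standardized field):

```python
from langchain_core.outputs import Generation

# A plain text completion, as produced by a non-chat LLM.
gen = Generation(text="The answer is 42.", generation_info={"finish_reason": "stop"})
print(gen.text)
```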
@@ -15,9 +15,9 @@ from langchain_core.outputs.run_info import RunInfo
 class LLMResult(BaseModel):
     """A container for results of an LLM call.

-    Both chat models and LLMs generate an LLMResult object. This object contains
-    the generated outputs and any additional information that the model provider
-    wants to return.
+    Both chat models and LLMs generate an LLMResult object. This object contains the
+    generated outputs and any additional information that the model provider wants to
+    return.
     """

     generations: list[
@@ -25,17 +25,16 @@ class LLMResult(BaseModel):
     ]
     """Generated outputs.

-    The first dimension of the list represents completions for different input
-    prompts.
+    The first dimension of the list represents completions for different input prompts.

-    The second dimension of the list represents different candidate generations
-    for a given prompt.
+    The second dimension of the list represents different candidate generations for a
+    given prompt.

-    When returned from an LLM the type is list[list[Generation]].
-    When returned from a chat model the type is list[list[ChatGeneration]].
+    - When returned from **an LLM**, the type is ``list[list[Generation]]``.
+    - When returned from a **chat model**, the type is ``list[list[ChatGeneration]]``.

-    ChatGeneration is a subclass of Generation that has a field for a structured
-    chat message.
+    ChatGeneration is a subclass of Generation that has a field for a structured chat
+    message.
     """
     llm_output: Optional[dict] = None
     """For arbitrary LLM provider specific output.
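The two-dimensional shape described in this hunk can be illustrated with a hand-built result (all values hypothetical):

```python
from langchain_core.outputs import Generation, LLMResult

# generations[prompt_index][candidate_index]
result = LLMResult(
    generations=[
        [Generation(text="Paris"), Generation(text="The capital is Paris.")],  # prompt 0
        [Generation(text="Berlin")],  # prompt 1
    ]
)
assert result.generations[0][1].text == "The capital is Paris."
```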
@@ -43,9 +42,8 @@ class LLMResult(BaseModel):
     This dictionary is a free-form dictionary that can contain any information that the
     provider wants to return. It is not standardized and is provider-specific.

-    Users should generally avoid relying on this field and instead rely on
-    accessing relevant information from standardized fields present in
-    AIMessage.
+    Users should generally avoid relying on this field and instead rely on accessing
+    relevant information from standardized fields present in AIMessage.
     """
     run: Optional[list[RunInfo]] = None
     """List of metadata info for model call for each input."""
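A sketch of reading the free-form `llm_output` field; the `token_usage` key is an assumption about one provider's payload, not a standardized name:

```python
from langchain_core.outputs import Generation, LLMResult

result = LLMResult(
    generations=[[Generation(text="ok")]],
    llm_output={"token_usage": {"total_tokens": 7}},  # provider-specific, illustrative
)

# Prefer standardized fields on AIMessage when available; treat this dict as
# best-effort extras whose keys vary by provider.
if result.llm_output is not None:
    print(result.llm_output.get("token_usage"))
```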