Langchain: json_chat doesn't need stop sequences (#16335)

This is a PR about #16334.
Stop sequences aren't meaningful in `json_chat`, because the agent depends on JSON output to work, not on completion-style stop tokens.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
calvinweb 2024-02-06 06:18:16 +08:00 committed by GitHub
parent 66e45e8ab7
commit dcf973c22c


```diff
@@ -12,7 +12,10 @@ from langchain.tools.render import render_text_description
 def create_json_chat_agent(
-    llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: ChatPromptTemplate
+    llm: BaseLanguageModel,
+    tools: Sequence[BaseTool],
+    prompt: ChatPromptTemplate,
+    stop_sequence: bool = True,
 ) -> Runnable:
     """Create an agent that uses JSON to format its logic, built for Chat Models.
@@ -20,7 +23,9 @@ def create_json_chat_agent(
         llm: LLM to use as the agent.
         tools: Tools this agent has access to.
         prompt: The prompt to use. See Prompt section below for more.
+        stop_sequence: Adds a stop token of "Observation:" to avoid hallucinations.
+            Default is True. You may want to set this to False if the LLM you are
+            using does not support stop sequences.
     Returns:
         A Runnable sequence representing an agent. It takes as input all the same input
         variables as the prompt passed in does. It returns as output either an
@@ -148,7 +153,10 @@ def create_json_chat_agent(
         tools=render_text_description(list(tools)),
         tool_names=", ".join([t.name for t in tools]),
     )
-    llm_with_stop = llm.bind(stop=["\nObservation"])
+    if stop_sequence:
+        llm_to_use = llm.bind(stop=["\nObservation"])
+    else:
+        llm_to_use = llm
     agent = (
         RunnablePassthrough.assign(
@@ -157,7 +165,7 @@ def create_json_chat_agent(
             )
         )
         | prompt
-        | llm_with_stop
+        | llm_to_use
         | JSONAgentOutputParser()
     )
     return agent
```
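The core of the change is the branch that decides whether to bind the `"\nObservation"` stop token. A minimal standalone sketch of that behavior, with no LangChain dependency (`FakeLLM` and `pick_llm` are hypothetical stand-ins, not LangChain APIs):

```python
class FakeLLM:
    """Hypothetical stand-in for a chat model exposing a bind(stop=...) method."""

    def __init__(self, stop=None):
        self.stop = stop

    def bind(self, stop):
        # Returns a new model configured with the given stop sequences,
        # mirroring Runnable.bind in spirit.
        return FakeLLM(stop=stop)


def pick_llm(llm, stop_sequence=True):
    # Mirrors the PR's branch: bind the stop token only when requested,
    # otherwise pass the model through unchanged.
    return llm.bind(stop=["\nObservation"]) if stop_sequence else llm


base = FakeLLM()
assert pick_llm(base).stop == ["\nObservation"]
assert pick_llm(base, stop_sequence=False) is base
```

With `stop_sequence=False` the original model object flows into the agent pipeline untouched, which is what lets `json_chat` work with LLM backends that reject stop sequences.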