Mirror of https://github.com/hwchase17/langchain.git, synced 2026-01-24 13:58:11 +00:00
This PR fixes an issue where `AgentExecutor` with `RunnableAgent` does not let users see individual LLM tokens unless `streaming=True` is set explicitly on the underlying chat model.

The majority of this PR is testing code:

1. Create a test chat model that makes it easier to test streaming and supports `AIMessage`s that include function invocation information.
2. Tests for the chat model.
3. Tests for `RunnableAgent` (previously untested).
4. Tests for the OpenAI agent (previously untested).
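The test chat model described in item 1 can be sketched roughly as follows. This is a minimal, dependency-free illustration of the idea (replaying scripted messages and streaming their content chunk-by-chunk); the class and field names here (`FakeChatMessage`, `FakeStreamingChatModel`) are hypothetical and are not the actual classes added in this PR.

```python
import re
from dataclasses import dataclass, field
from typing import Dict, Iterator, List


@dataclass
class FakeChatMessage:
    """Stand-in for an AIMessage: text content plus optional extras
    such as function-invocation information."""
    content: str
    additional_kwargs: Dict = field(default_factory=dict)


class FakeStreamingChatModel:
    """Replays a scripted sequence of messages.

    `invoke` returns the next message whole; `stream` yields it in
    small chunks so tests can assert on token-level streaming behavior.
    """

    def __init__(self, messages: List[FakeChatMessage]) -> None:
        self._messages = iter(messages)

    def invoke(self) -> FakeChatMessage:
        return next(self._messages)

    def stream(self) -> Iterator[FakeChatMessage]:
        message = next(self._messages)
        if message.additional_kwargs:
            # Function-call payloads are emitted as a single chunk here.
            yield message
            return
        # Split on whitespace but keep the separators as chunks, so that
        # concatenating all chunks reproduces the original content.
        for chunk in re.split(r"(\s)", message.content):
            if chunk:
                yield FakeChatMessage(content=chunk)


model = FakeStreamingChatModel([FakeChatMessage(content="hello world")])
chunks = [c.content for c in model.stream()]
print(chunks)  # ['hello', ' ', 'world']
```

A scripted model like this makes streaming tests deterministic: assertions can check both the exact chunk sequence and that joining the chunks reconstructs the full message.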