Adjusted GPT4All llm to streaming API and added support for GPT4All_J (#4131)

Fix for these issues:
https://github.com/hwchase17/langchain/issues/4126

https://github.com/hwchase17/langchain/issues/3839#issuecomment-1534258559

---------

Co-authored-by: Pawel Faron <ext-pawel.faron@vaisala.com>
Author: PawelFaron
Date: 2023-05-07 00:14:09 +02:00
Committed by: GitHub
Parent: 075d9631f5
Commit: 04b74d0446
2 changed files with 54 additions and 17 deletions

@@ -125,7 +125,9 @@
"# Callbacks support token-wise streaming\n",
"callbacks = [StreamingStdOutCallbackHandler()]\n",
"# Verbose is required to pass to the callback manager\n",
"llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)"
"llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)\n",
"# If you want to use GPT4ALL_J model add the backend parameter\n",
"llm = GPT4All(model=local_path, backend='gptj', callbacks=callbacks, verbose=True)"
]
},
{
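The token-wise streaming pattern the diff wires up (callbacks that receive each generated token as it is produced) can be sketched generically. This is a minimal illustration of the callback mechanism only: the class names `CollectTokensCallback` and `FakeStreamingLLM` are invented stand-ins, not langchain or GPT4All APIs, and the fixed token list replaces real model generation.

```python
class CollectTokensCallback:
    """Collects streamed tokens as they arrive. langchain's
    StreamingStdOutCallbackHandler follows the same shape but
    prints each token instead of storing it."""

    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str) -> None:
        self.tokens.append(token)


class FakeStreamingLLM:
    """Illustrative stand-in for a streaming LLM wrapper: it emits
    tokens one at a time, notifying every registered callback, then
    returns the full completion."""

    def __init__(self, callbacks):
        self.callbacks = callbacks

    def __call__(self, prompt: str) -> str:
        # A real model would generate tokens; here we fake a fixed reply.
        fake_tokens = ["Hello", ", ", "world", "!"]
        for token in fake_tokens:
            for cb in self.callbacks:
                cb.on_llm_new_token(token)
        return "".join(fake_tokens)


cb = CollectTokensCallback()
llm = FakeStreamingLLM(callbacks=[cb])
result = llm("Say hi")
print(result)     # the full completion
print(cb.tokens)  # the same text, seen token by token
```

The point of the commit is that the GPT4All wrapper now feeds tokens through this callback path during generation (and accepts `backend='gptj'` for GPT4All-J models), rather than returning only the finished string.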