mirror of
https://github.com/nomic-ai/gpt4all.git
synced 2025-09-04 18:11:02 +00:00
typescript: async generator and token stream (#1897)
Signed-off-by: Tare Ebelo <75279482+TareHimself@users.noreply.github.com>
Signed-off-by: jacob <jacoobes@sern.dev>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: jacob <jacoobes@sern.dev>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
@@ -159,6 +159,7 @@ This package is in active development, and breaking changes may happen until the
* [mpt](#mpt)
* [replit](#replit)
* [type](#type)
* [TokenCallback](#tokencallback)
* [InferenceModel](#inferencemodel)
* [dispose](#dispose)
* [EmbeddingModel](#embeddingmodel)
@@ -184,16 +185,17 @@ This package is in active development, and breaking changes may happen until the
* [Parameters](#parameters-5)
* [hasGpuDevice](#hasgpudevice)
* [listGpu](#listgpu)
* [Parameters](#parameters-6)
* [dispose](#dispose-2)
* [GpuDevice](#gpudevice)
* [type](#type-2)
* [LoadModelOptions](#loadmodeloptions)
* [loadModel](#loadmodel)
* [Parameters](#parameters-6)
* [createCompletion](#createcompletion)
* [Parameters](#parameters-7)
* [createEmbedding](#createembedding)
* [createCompletion](#createcompletion)
* [Parameters](#parameters-8)
* [createEmbedding](#createembedding)
* [Parameters](#parameters-9)
* [CompletionOptions](#completionoptions)
* [verbose](#verbose)
* [systemPromptTemplate](#systemprompttemplate)
@@ -225,15 +227,15 @@ This package is in active development, and breaking changes may happen until the
* [repeatPenalty](#repeatpenalty)
* [repeatLastN](#repeatlastn)
* [contextErase](#contexterase)
* [createTokenStream](#createtokenstream)
* [Parameters](#parameters-9)
* [generateTokens](#generatetokens)
* [Parameters](#parameters-10)
* [DEFAULT\_DIRECTORY](#default_directory)
* [DEFAULT\_LIBRARIES\_DIRECTORY](#default_libraries_directory)
* [DEFAULT\_MODEL\_CONFIG](#default_model_config)
* [DEFAULT\_PROMPT\_CONTEXT](#default_prompt_context)
* [DEFAULT\_MODEL\_LIST\_URL](#default_model_list_url)
* [downloadModel](#downloadmodel)
* [Parameters](#parameters-10)
* [Parameters](#parameters-11)
* [Examples](#examples)
* [DownloadModelOptions](#downloadmodeloptions)
* [modelPath](#modelpath)
@@ -279,6 +281,12 @@ Model architecture. This argument currently does not have any functionality and

Type: ModelType

#### TokenCallback

Callback for controlling token generation

Type: function (tokenId: [number](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number), token: [string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String), total: [string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)): [boolean](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Boolean)
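
A minimal sketch of how a TokenCallback might be built, assuming only the signature documented above (returning `false` stops generation). The `stopAfter` helper is hypothetical, not part of the bindings:

```typescript
// Matches the documented signature: tokenId, the token text, and the
// accumulated output so far; returning false stops generation.
type TokenCallback = (tokenId: number, token: string, total: string) => boolean;

// Hypothetical helper: keep generating until the accumulated output
// reaches maxChars characters.
function stopAfter(maxChars: number): TokenCallback {
  return (_tokenId, _token, total) => total.length < maxChars;
}

const cb = stopAfter(5);
console.log(cb(0, "He", "He"));       // true  (2 chars, under the limit)
console.log(cb(1, "llo!", "Hello!")); // false (6 chars, limit reached)
```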

#### InferenceModel

InferenceModel represents an LLM which can make chat predictions, similar to GPT transformers.

@@ -362,9 +370,9 @@ Use the prompt function exported for a value

* `q` **[string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)** The prompt input.
* `params` **Partial<[LLModelPromptContext](#llmodelpromptcontext)>** Optional parameters for the prompt context.
* `callback` **function (res: [string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)): void** 
* `callback` **[TokenCallback](#tokencallback)?** optional callback to control token generation.

Returns **void** The result of the model prompt.
Returns **[Promise](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Promise)<[string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)>** The result of the model prompt.
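
A sketch (under stated assumptions, not the real bindings) of how the optional TokenCallback interacts with the `Promise<string>` result: the callback sees each token plus the accumulated text, and returning `false` ends generation early. `mockPrompt` and its token list are hypothetical stand-ins for the native model loop:

```typescript
type TokenCallback = (tokenId: number, token: string, total: string) => boolean;

// Hypothetical stand-in for the model's native token loop.
async function mockPrompt(
  tokens: string[],
  callback?: TokenCallback
): Promise<string> {
  let total = "";
  for (const [i, token] of tokens.entries()) {
    total += token;
    if (callback && !callback(i, token, total)) break; // false stops generation
  }
  return total;
}

// Stop once the accumulated text reaches 10 characters.
mockPrompt(["The", " answer", " is", " 42"], (_id, _tok, total) => total.length < 10)
  .then((result) => console.log(result)); // "The answer"
```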

##### embed

@@ -424,6 +432,12 @@ Returns **[boolean](https://developer.mozilla.org/docs/Web/JavaScript/Reference/

GPUs that are usable for this LLModel

###### Parameters
* `nCtx` **[number](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number)** Maximum size of context window
<!---->
* Throws **any** if hasGpuDevice returns false
Returns **[Array](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array)<[GpuDevice](#gpudevice)>** 
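
Since the docs note that listGpu may throw when no usable GPU device exists, a caller can guard it with hasGpuDevice. A minimal sketch, assuming only the two documented method names; the interfaces and `safeListGpu` helper are hypothetical shapes, not the bindings' own types:

```typescript
// Hypothetical minimal shapes for illustration only.
interface GpuDevice {
  name: string;
}

interface GpuCapable {
  hasGpuDevice(): boolean;
  listGpu(nCtx: number): GpuDevice[];
}

// Fall back to an empty list instead of letting listGpu throw.
function safeListGpu(llmodel: GpuCapable, nCtx: number): GpuDevice[] {
  return llmodel.hasGpuDevice() ? llmodel.listGpu(nCtx) : [];
}

// Mock of a machine with no GPU available:
const noGpu: GpuCapable = {
  hasGpuDevice: () => false,
  listGpu: () => { throw new Error("no GPU"); },
};
console.log(safeListGpu(noGpu, 2048)); // []
```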
@@ -690,17 +704,18 @@ The percentage of context to erase if the context window is exceeded.

Type: [number](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Number)

#### createTokenStream

#### generateTokens

TODO: Help wanted to implement this
Creates an async generator of tokens

##### Parameters
* `llmodel` **[LLModel](#llmodel)** 
* `messages` **[Array](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array)<[PromptMessage](#promptmessage)>** 
* `options` **[CompletionOptions](#completionoptions)** 
* `llmodel` **[InferenceModel](#inferencemodel)** The language model object.
* `messages` **[Array](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/Array)<[PromptMessage](#promptmessage)>** The array of messages for the conversation.
* `options` **[CompletionOptions](#completionoptions)** The options for creating the completion.
* `callback` **[TokenCallback](#tokencallback)** optional callback to control token generation.

Returns **function (ll: [LLModel](#llmodel)): AsyncGenerator<[string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)>** 
Returns **AsyncGenerator<[string](https://developer.mozilla.org/docs/Web/JavaScript/Reference/Global_Objects/String)>** The stream of generated tokens
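
A sketch of consuming the returned `AsyncGenerator<string>` with `for await`. `mockGenerateTokens` is a hypothetical stand-in for generateTokens, which in the real bindings takes a loaded InferenceModel, the messages array, and CompletionOptions:

```typescript
// Hypothetical stand-in: yields tokens one at a time, as the real
// generator does while the model emits them.
async function* mockGenerateTokens(tokens: string[]): AsyncGenerator<string> {
  for (const token of tokens) {
    yield token;
  }
}

// Consume the stream token by token and accumulate the text.
async function collect(): Promise<string> {
  let text = "";
  for await (const token of mockGenerateTokens(["Hello", ",", " world"])) {
    text += token;
  }
  return text;
}

collect().then((text) => console.log(text)); // "Hello, world"
```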
#### DEFAULT\_DIRECTORY