Commit Graph

83 Commits

Author SHA1 Message Date
Jared Van Bortel
d4ce9f4a7c llmodel_c: improve quality of error messages (#1625) 2023-11-07 11:20:14 -05:00
cebtenzzre
e90263c23f make scripts executable (#1555) 2023-10-24 09:28:21 -04:00
cebtenzzre
7e5e84fbb7 python: change default extension to .gguf (#1559) 2023-10-23 22:18:50 -04:00
cebtenzzre
37b007603a bindings: replace references to GGMLv3 models with GGUF (#1547) 2023-10-22 11:58:28 -04:00
cebtenzzre
245c5ce5ea update default model URLs (#1538) 2023-10-19 15:25:37 -04:00
cebtenzzre
5fbeeb1cb4 python: connection resume and MSVC support (#1535) 2023-10-19 12:06:38 -04:00
cebtenzzre
4d4275d1b8 python: replace deprecated pkg_resources with importlib (#1505) 2023-10-12 13:35:27 -04:00
Aaron Miller
f39df0906e fix embed4all filename
https://discordapp.com/channels/1076964370942267462/1093558720690143283/1161778216462192692

Signed-off-by: Aaron Miller <apage43@ninjawhale.com>
2023-10-12 07:52:56 -04:00
cebtenzzre
aed2068342 python: always check status code of HTTP responses (#1502) 2023-10-11 18:11:28 -04:00
Aaron Miller
afaa291eab python bindings should be quiet by default
* disable llama.cpp logging unless the GPT4ALL_VERBOSE_LLAMACPP environment
  variable is nonempty
* make the verbose flag for retrieve_model default to false (but still
  overridable via the GPT4All constructor)

should be able to run a basic test:

```python
import gpt4all
model = gpt4all.GPT4All('/Users/aaron/Downloads/rift-coder-v0-7b-q4_0.gguf')
print(model.generate('def fib(n):'))
```

and see no non-model output when successful
2023-10-11 14:14:36 -07:00
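A minimal sketch of switching the quieter defaults above back on; the `verbose` keyword and the environment variable come from this commit's description, and the exact point at which the variable is read is an assumption:

```python
import os

# Set before importing gpt4all so the backend sees the flag at load time
# (an assumption about when the variable is read); any nonempty value
# re-enables llama.cpp logging per the commit above.
os.environ["GPT4ALL_VERBOSE_LLAMACPP"] = "1"

import gpt4all

# Opt back into verbose model-retrieval output via the constructor flag
# (assumed to be forwarded to retrieve_model).
model = gpt4all.GPT4All("rift-coder-v0-7b-q4_0.gguf", verbose=True)
print(model.generate("def fib(n):"))
```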
cebtenzzre
f81b4b45bf python: support Path in GPT4All.__init__ (#1462) 2023-10-11 14:12:40 -04:00
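Per the commit above, GPT4All.__init__ should now accept a pathlib.Path; a hedged sketch, with the filename and cache location purely illustrative:

```python
from pathlib import Path

from gpt4all import GPT4All

# Assumption: the first positional argument now accepts a Path as well as a str.
model_file = Path.home() / ".cache" / "gpt4all" / "rift-coder-v0-7b-q4_0.gguf"
model = GPT4All(model_file)
```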
Aaron Miller
a10f3aea5e python/embed4all: use gguf model, allow passing kwargs/overriding model 2023-10-05 18:16:19 -04:00
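A hedged sketch of the Embed4All change above: the default embedding model is now a GGUF file, the model can be overridden, and extra keyword arguments are passed through; the filename and forwarded kwarg are illustrative assumptions:

```python
from gpt4all import Embed4All

# Override the default GGUF embedding model and pass an extra keyword argument
# through to the underlying GPT4All instance (both names are illustrative).
embedder = Embed4All("all-MiniLM-L6-v2-f16.gguf", n_threads=4)
vector = embedder.embed("The quick brown fox jumps over the lazy dog")
print(len(vector))
```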
Adam Treat
ea66669cef Switch to new models2.json for new gguf release and bump our version to
2.5.0.
2023-10-05 18:16:19 -04:00
Cebtenzzre
40c78d2f78 python binding: print debug message to stderr 2023-10-05 18:16:19 -04:00
Cebtenzzre
4392bf26e0 pyllmodel: print specific error message 2023-10-05 18:16:19 -04:00
Cebtenzzre
34f2ec2b33 gpt4all.py: GGUF 2023-10-05 18:16:19 -04:00
Adam Treat
987546c63b Nomic vulkan backend licensed under the Software for Open Models License (SOM), version 1.0. 2023-08-31 15:29:54 -04:00
Cosmic Snow
e285ce91da black & isort
2023-07-31 01:34:06 +02:00
385olt
3ed6d176a5 Python bindings: unicode decoding (#1281)
* rewrote the unicode decoding using the structure of multi-byte unicode symbols.
2023-07-30 11:29:51 -07:00
cosmic-snow
6431d46776 Fix models not getting downloaded in Python bindings (#1262)
- custom callbacks & session improvements PR (v1.0.6) had one too many checks
- remove the problematic config['url'] check
- add a crude test
- fixes #1261
2023-07-24 12:57:06 -04:00
385olt
b4dbbd1485 Python bindings: Custom callbacks, chat session improvement, refactoring (#1145)
* Added the following features:
  1) prompt_model now uses the positional callback argument to return the response tokens.
  2) Because of that callback argument, prompt_model_streaming now only manages the queue and threading, which reduces code duplication.
  3) Added an optional verbose argument to prompt_model which prints out the prompt that is passed to the model.
  4) Chat sessions can now have a header, i.e. an instruction before the transcript of the conversation. The header is set when the chat session context is created.
  5) The generate function now accepts an optional callback.
  6) When streaming within a chat session, the user no longer needs to save the assistant's messages manually; this is done automatically.

* added _empty_response_callback so I don't have to check if callback is None

* added docs

* now if the callback stops generation, the last token is ignored

* fixed type hints, reimplemented chat session header as a system prompt, minor refactoring, docs: removed section about manual update of chat session for streaming

* forgot to add some type hints!

* keep the model's config (taken from models.json when downloading is allowed) in the GPT4All class

* During chat sessions, the model-specific systemPrompt and promptTemplate are applied.

* implemented the changes

* Fixed typing. Now the user can set a prompt template that will be applied even outside of a chat session. The template can also have multiple placeholders that can be filled by passing a dictionary to the generate function

* reversed some changes concerning the prompt templates and their functionality

* fixed some type hints, changed list[float] to List[Float]

* fixed type hints, changed List[Float] to List[float]

* fix typo in the comment: Pepare => Prepare

---------

Signed-off-by: 385olt <385olt@gmail.com>
2023-07-19 18:36:49 -04:00
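A minimal sketch of the chat-session behaviour described in this PR (the session header expressed as a system prompt, and assistant messages stored automatically); the keyword name, the transcript attribute, and the model filename are assumptions drawn from the commit text rather than a verified API:

```python
from gpt4all import GPT4All

model = GPT4All("some-chat-model.bin")  # placeholder filename

# The chat-session header is now expressed as a system prompt (assumed kwarg name).
with model.chat_session(system_prompt="You are a terse assistant."):
    model.generate("Name three prime numbers.")
    # Assistant replies are appended to the session automatically, even when
    # streaming, so the transcript can simply be inspected afterwards.
    model.generate("Add one more.")
    print(model.current_chat_session)  # assumed attribute holding the transcript
```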
AMOGUS
5f0aaf8bdb python binding's TopP also needs some love
Changed the Python binding's TopP from 0.1 to 0.4

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
2023-07-19 10:36:23 -04:00
cosmic-snow
2d02c65177 Handle edge cases when generating embeddings (#1215)
* Handle edge cases when generating embeddings
* Improve Python handling & add llmodel_c.h note
- In the Python bindings fail fast with a ValueError when text is empty
- Advise other bindings authors to do likewise in llmodel_c.h
2023-07-17 13:21:03 -07:00
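Per the edge-case handling above, empty input now fails fast in the Python bindings; a minimal sketch, assuming Embed4All.embed surfaces the ValueError directly:

```python
from gpt4all import Embed4All

embedder = Embed4All()

try:
    embedder.embed("")  # empty text is rejected before reaching the backend
except ValueError as err:
    print("refused to embed empty text:", err)
```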
Adam Treat
f543affa9a Add better docs and threading support to bert. 2023-07-14 14:14:22 -04:00
Adam Treat
6656f0f41e Fix the test to work and not do timings. 2023-07-14 09:48:57 -04:00
Adam Treat
bb2b82e1b9 Add docs and bump version since we changed python api again. 2023-07-14 09:48:57 -04:00
Aaron Miller
c77ab849c0 LLModel objects should hold a reference to the library
prevents the llmodel library from being gc'd while model objects that use it are still alive
2023-07-14 09:48:57 -04:00
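A generic sketch of the pattern named above, not the actual binding code: each model object keeps a reference to the loaded shared-library wrapper, so the wrapper (and the ctypes handle it owns) cannot be collected while any model is alive. The class and attribute names are hypothetical.

```python
import ctypes


class LLModelLibrary:
    """Hypothetical wrapper that owns the loaded shared-library handle."""

    def __init__(self, path: str):
        self.lib = ctypes.CDLL(path)


class LLModel:
    def __init__(self, library: LLModelLibrary):
        # Keeping this reference ties the library's lifetime to the model's,
        # so the library cannot be garbage-collected out from under it.
        self._library = library
```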
Aaron Miller
936dcd2bfc use default n_threads 2023-07-14 09:48:57 -04:00
Aaron Miller
15f1fe5445 rename embedder 2023-07-14 09:48:57 -04:00
Adam Treat
ee4186d579 Fixup bert python bindings. 2023-07-14 09:48:57 -04:00
Adam Treat
0efdbfcffe Bert 2023-07-13 14:21:46 -04:00
Aaron Miller
ed470e18b3 python: Only eval latest message in chat sessions (#1149)
* python: Only eval latest message in chat sessions

* python: version bump
2023-07-06 21:02:14 -04:00
Aaron Miller
6987910668 python bindings: typing fixes, misc fixes (#1131)
* python: do not mutate locals()

* python: fix (some) typing complaints

* python: queue sentinel need not be a str

* python: make long inference tests opt in
2023-07-03 21:30:24 -04:00
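The "do not mutate locals()" item refers to a general Python pitfall: inside a function, writes to the locals() dict are not guaranteed to affect the actual local variables, so argument collection should use an explicit dict. A generic illustration, unrelated to the binding's real code:

```python
def build_params(temp: float = 0.7, top_p: float = 0.4) -> dict:
    # Fragile: mutating locals() inside a function has no defined effect.
    # locals()["temp"] = 0.1  # may silently do nothing

    # Explicit and predictable instead:
    return {"temp": temp, "top_p": top_p}
```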
Andriy Mulyar
01bd3d6802 Python chat streaming (#1127)
* Support streaming in chat session

* Uncommented tests
2023-07-03 12:59:39 -04:00
Andriy Mulyar
19412cfa5d Clear chat history between chat sessions (#1116) 2023-06-30 20:50:38 -04:00
Aaron Miller
3599663a22 bindings/python: type assert 2023-06-30 21:07:21 -03:00
Aaron Miller
958c8d4fa5 bindings/python: long input tests 2023-06-30 21:07:21 -03:00
Aaron Miller
ac5c8e964f bindings/python: fix typo (#1111) 2023-06-30 17:00:42 -04:00
Andriy Mulyar
46a0762bd5 Python Bindings: Improved unit tests, documentation and unification of API (#1090)
* Makefiles, black, isort

* Black and isort

* unit tests and generation method

* chat context provider

* context does not reset

* Current state

* Fixup

* Python bindings with unit tests

* GPT4All Python Bindings: chat contexts, tests

* New python bindings and backend fixes

* Black and Isort

* Documentation error

* preserved n_predict for backwards compat with langchain

---------

Co-authored-by: Adam Treat <treat.adam@gmail.com>
2023-06-30 16:02:02 -04:00
Aaron Miller
b19a3e5b2c add requiredMem method to llmodel impls
most of these can just shortcut out of the model loading logic; llama is a bit worse to deal with because we submodule it, so I have to at least parse the hparams, and then I just use the size on disk as an estimate for the mem size (which seems reasonable since we mmap() the llama files anyway)
2023-06-26 18:27:58 -03:00
EKal-aa
aed7b43143 set n_threads in GPT4All python bindings (#1042)
* set n_threads in GPT4All

* changed default n_threads to None
2023-06-23 01:16:35 -07:00
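A hedged example of the n_threads option above, which now defaults to None so the backend picks a value; the model filename is a placeholder:

```python
from gpt4all import GPT4All

# n_threads=None (the default) lets the backend choose; override explicitly if desired.
model = GPT4All("some-model.bin", n_threads=8)
```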
standby24x7
cdea838671 Fix spelling typo in gpt4all.py (#1007)
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
2023-06-18 14:07:46 -04:00
Richard Guo
a99cc34efb fix prompt context so it's preserved in class 2023-06-13 09:07:08 -04:00
Richard Guo
e0a8480c0e Generator in Python Bindings - streaming yields tokens at a time (#895)
* generator method

* cleanup

* bump version number for clarity

* added replace in decode to avoid a UnicodeDecodeError

* revert back to _build_prompt
2023-06-09 10:17:44 -04:00
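A minimal sketch of the generator-style streaming introduced here; the streaming keyword matches later versions of the bindings and is an assumption about the exact interface at this commit:

```python
from gpt4all import GPT4All

model = GPT4All("some-model.bin")  # placeholder filename

# With streaming enabled, generate() returns an iterator that yields tokens
# as they are produced instead of one completed string.
for token in model.generate("Write a haiku about rivers.", streaming=True):
    print(token, end="", flush=True)
print()
```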
Richard Guo
c4706d0c14 Replit Model (#713)
* porting over replit code model to gpt4all

* replaced memory with kv_self struct

* continuing debug

* welp it built but lot of sus things

* working model loading and somewhat working generate.. need to format response?

* revert back to semi working version

* finally got rid of weird formatting

* figured out problem is with python bindings - this is good to go for testing

* addressing PR feedback

* output refactor

* fixed prompt response collection

* cleanup

* addressing PR comments

* building replit backend with new ggmlver code

* chatllm replit and clean python files

* cleanup

* updated replit to match new llmodel api

* match llmodel api and change size_t to Token

* resolve PR comments

* replit model commit comment
2023-06-06 17:09:00 -04:00
Andriy Mulyar
ef35eb496f Supports downloading officially supported models not hosted on gpt4all R2 2023-06-06 16:21:02 -04:00
Richard Guo
9d2b20f6cd small typo fix 2023-06-02 12:32:26 -04:00
Richard Guo
e709e58603 more cleanup 2023-06-02 12:32:26 -04:00
Richard Guo
13fc50f2d3 cleanup 2023-06-02 12:32:26 -04:00
Richard Guo
c54c42e3fb fixed finding model libs 2023-06-02 12:32:26 -04:00