pingpongching
f84dfd0e9c
Change the default values for generation in GUI
2023-06-09 08:51:09 -04:00
Adam Treat
a238def862
Forgot to bump.
2023-06-09 08:45:31 -04:00
Richard Guo
a47d501e83
update models json with replit model
2023-06-09 08:44:46 -04:00
Adam Treat
b961b6c3ef
Always sync for circleci.
2023-06-09 08:42:49 -04:00
Claudius Ellsel
1c74e369fe
Move usage in Python bindings readme to own section (#907)
Give the short usage example its own section, as it is not specific to a local build
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-09 10:13:35 +02:00
niansa
557c82b5ed
Synced llama.cpp.cmake with upstream
2023-06-08 18:21:32 -04:00
Adam Treat
7e38cf6d4b
Circleci builds for Linux, Windows, and macOS for gpt4all-chat.
2023-06-08 18:02:44 -04:00
Aaron Miller
9afbaee94e
non-llama: explicitly greedy sampling for temp<=0 (#901)
copied directly from llama.cpp - without this, temp=0.0 would just
scale all the logits to infinity and give bad output
2023-06-08 11:08:30 -07:00
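The guard this commit describes can be sketched in Python (illustrative only; the actual fix lives in the C++ backend, and `sample_token` is a hypothetical name):

```python
import math
import random

def sample_token(logits, temp):
    """Pick a token index from raw logits (sketch of the fix above).

    Dividing logits by a temperature of 0 (or below) would scale them
    toward +/- infinity, so temp <= 0 falls back to greedy (argmax)
    sampling instead.
    """
    if temp <= 0:
        # Greedy: take the single most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise: softmax over temperature-scaled logits, then draw once.
    scaled = [l / temp for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]
```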
Aaron Miller
6624d7b2dd
sampling: remove incorrect offset for n_vocab (#900)
no effect, but avoids a *potential* bug later if we use
actualVocabSize - which is for when a model has a larger
embedding tensor/# of output logits than actually trained tokens,
to allow room for adding extras in finetuning - presently all of our
models have had "placeholder" tokens in the vocab so this hasn't broken
anything, but if the sizes did differ we want the equivalent of
`logits[:actualVocabSize]` (the start point is unchanged), not
`logits[-actualVocabSize:]` (the latter).
2023-06-08 11:08:10 -07:00
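A tiny Python illustration of the slicing distinction the message describes (the variable names mirror the commit; the padded values are made up):

```python
# Suppose the logits array is padded beyond the actually-trained
# vocabulary, with "placeholder" slots at the end.
actual_vocab_size = 4
padded_logits = [1.0, 2.0, 3.0, 4.0, -9.0, -9.0]  # 2 placeholder slots

# Keep the first actual_vocab_size entries (start point unchanged):
front = padded_logits[:actual_vocab_size]  # the trained tokens

# Counting back from the end would instead include the placeholders:
tail = padded_logits[-actual_vocab_size:]

# With no padding the two slices coincide, which is why the old offset
# had "no effect" on current models:
unpadded = [1.0, 2.0, 3.0, 4.0]
same = unpadded[:actual_vocab_size] == unpadded[-actual_vocab_size:]
```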
Andriy Mulyar
c6249c5664
Update CollectionsDialog.qml (#856)
Phrasing for localdocs
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-08 13:44:17 -04:00
Claudius Ellsel
d5ec6a07c3
Update README.md (#906)
Add PyPI link and add clickable, more specific link to documentation
Signed-off-by: Claudius Ellsel <claudius.ellsel@live.de>
2023-06-08 13:43:31 -04:00
Adam Treat
26c19c042f
Revert "Synced llama.cpp.cmake with upstream (#887)"
This reverts commit 5c5e10c1f5.
2023-06-08 07:23:41 -04:00
Adam Treat
b78f60c7f7
Fix for windows.
2023-06-07 12:58:51 -04:00
niansa/tuxifan
5c5e10c1f5
Synced llama.cpp.cmake with upstream (#887)
2023-06-07 09:18:22 -07:00
Richard Guo
7a472bea88
Replit Model (#713)
* porting over replit code model to gpt4all
* replaced memory with kv_self struct
* continuing debug
* welp it built but lot of sus things
* working model loading and somewhat working generate.. need to format response?
* revert back to semi working version
* finally got rid of weird formatting
* figured out problem is with python bindings - this is good to go for testing
* addressing PR feedback
* output refactor
* fixed prompt response collection
* cleanup
* addressing PR comments
* building replit backend with new ggmlver code
* chatllm replit and clean python files
* cleanup
* updated replit to match new llmodel api
* match llmodel api and change size_t to Token
* resolve PR comments
* replit model commit comment
2023-06-06 17:09:00 -04:00
Andriy Mulyar
a3aab14f61
Supports downloading officially supported models not hosted on gpt4all R2
2023-06-06 16:21:02 -04:00
Andriy Mulyar
555449a03c
Update gpt4all_faq.md (#861)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 15:41:30 -04:00
Ettore Di Giacinto
36dc23d527
Set thread counts after loading model (#836)
2023-06-05 21:35:40 +02:00
Adam Treat
87e4ec4253
New release notes
2023-06-05 14:55:59 -04:00
Adam Treat
318c9e8063
Bump the version.
2023-06-05 14:32:00 -04:00
Adam Treat
fff1194b38
Fix llama models on linux and windows.
2023-06-05 14:31:15 -04:00
Andriy Mulyar
379f1809f9
Revert "Fix bug with resetting context with chatgpt model." (#859)
This reverts commit e0dcf6a14f.
2023-06-05 14:25:37 -04:00
Adam Treat
4a142d41f6
Revert "Speculative fix for windows llama models with installer."
This reverts commit add725d1eb.
2023-06-05 14:25:01 -04:00
Adam Treat
add725d1eb
Speculative fix for windows llama models with installer.
2023-06-05 13:21:08 -04:00
Andriy Mulyar
80598cbde8
Documentation for model sideloading (#851)
* Documentation for model sideloading
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
* Update gpt4all_chat.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-05 12:35:02 -04:00
AT
8989bf4e7c
Update README.md (#854)
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-06-05 12:25:51 -04:00
AT
50234df356
Release notes for version 2.4.5 (#853)
2023-06-05 12:10:17 -04:00
Richard Guo
ceb9cdad2e
updated pypi version
2023-06-05 12:02:25 -04:00
Adam Treat
31b1f966e0
Fix symbol resolution on windows.
2023-06-05 11:19:02 -04:00
Adam Treat
2d2de658c8
Fix installers for windows and linux.
2023-06-05 10:50:16 -04:00
Adam Treat
f79a88585c
These need to be installed for them to be packaged and work for both mac and windows.
2023-06-05 09:57:00 -04:00
Adam Treat
fff2324a14
Fix compile on mac.
2023-06-05 09:31:57 -04:00
Adam Treat
b7d947b01d
Try and fix mac.
2023-06-05 09:30:50 -04:00
Adam Treat
45eca601c7
Need this so the linux installer packages it as a dependency.
2023-06-05 09:23:43 -04:00
Adam Treat
3cd7d2f3c7
Make installers work with mac/windows for big backend change.
2023-06-05 09:23:17 -04:00
Andriy Mulyar
6c4e1f4a2a
Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:48:45 -04:00
Andriy Mulyar
3354dd39f1
Update models.json (#838)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:36:12 -04:00
AT
b217b823d3
Remove older models that are not as popular. (#837)
* Remove older models that are not as popular.
* Update models.json
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
---------
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
Co-authored-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2023-06-04 20:26:43 -04:00
Adam Treat
1a8548b876
Update to latest llama.cpp
2023-06-04 19:57:34 -04:00
Adam Treat
b36ea3dde5
Fix up for newer models on reset context. This prevents the model from failing completely after a context reset.
2023-06-04 19:31:20 -04:00
Adam Treat
825fa64c17
Allow for download of models hosted on third party hosts.
2023-06-04 19:02:43 -04:00
Adam Treat
42d349e1e8
Try again with the url.
2023-06-04 18:39:36 -04:00
Adam Treat
75565fc95d
Trying out a new feature to download directly from huggingface.
2023-06-04 18:34:04 -04:00
AT
54ec3c1259
Update build_and_run.md (#834)
Signed-off-by: AT <manyoso@users.noreply.github.com>
2023-06-04 15:39:32 -04:00
AT
964e2ffc1b
We no longer have an avx_only repository; added better error handling for minimum hardware requirements. (#833)
2023-06-04 15:28:58 -04:00
Adam Treat
a84bcccb3a
Better error handling when the model fails to load.
2023-06-04 14:55:05 -04:00
AT
b5971b0d41
Backend prompt dedup (#822)
* Deduplicated prompt() function code
2023-06-04 08:59:24 -04:00
Ikko Eltociear Ashimine
e53195a002
Update README.md
huggingface -> Hugging Face
Signed-off-by: Ikko Eltociear Ashimine <eltociear@gmail.com>
2023-06-04 08:46:37 -04:00
Adam Treat
1564eecd7c
Add a collection immediately and show a placeholder + busy indicator in localdocs settings.
2023-06-03 10:09:17 -04:00
Peter Gagarinov
21df8a771e
Only default mlock on macOS where swap seems to be a problem
Repeating the change that was once made in https://github.com/nomic-ai/gpt4all/pull/663 but was then overridden by 9c6c09cbd2
Signed-off-by: Peter Gagarinov <pgagarinov@users.noreply.github.com>
2023-06-03 07:51:18 -04:00
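A minimal sketch of the platform-conditional default this commit describes (the function name is hypothetical; the real change lives in the chat client's settings):

```python
import sys

def default_use_mlock(platform: str = sys.platform) -> bool:
    """mlock pins model weights in RAM. Letting them swap out seems to
    hurt badly on macOS in particular, so default mlock on only there
    ("darwin") and leave it off elsewhere."""
    return platform == "darwin"
```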