From b8464073b8f771d7e767c14626c174211ba2ebb0 Mon Sep 17 00:00:00 2001
From: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
Date: Tue, 27 Jun 2023 17:49:45 +0300
Subject: [PATCH] Update gpt4all_chat.md (#1050)

* Update gpt4all_chat.md

Cleaned up and made the sideloading part more readable, also moved the
Replit architecture to the supported ones. (+ renamed all "ggML" to
"GGML" because who calls it "ggML"??)

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Removed the prefixing part

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>

* Bump version

Signed-off-by: Andriy Mulyar

---------

Signed-off-by: AMOGUS <137312610+Amogus8P@users.noreply.github.com>
Signed-off-by: Andriy Mulyar
Co-authored-by: Andriy Mulyar
---
 gpt4all-bindings/python/docs/gpt4all_chat.md | 12 ++++++------
 gpt4all-bindings/python/setup.py             |  2 +-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/gpt4all-bindings/python/docs/gpt4all_chat.md b/gpt4all-bindings/python/docs/gpt4all_chat.md
index 3c961e34..36a6b97e 100644
--- a/gpt4all-bindings/python/docs/gpt4all_chat.md
+++ b/gpt4all-bindings/python/docs/gpt4all_chat.md
@@ -5,17 +5,17 @@ The [GPT4All Chat Client](https://gpt4all.io) lets you easily interact with any
 It is optimized to run 7-13B parameter LLMs on the CPU's of any computer running OSX/Windows/Linux.
 
 ## Running LLMs on CPU
-The GPT4All Chat UI supports models from all newer versions of `ggML`, `llama.cpp` including the `LLaMA`, `MPT` and `GPT-J` architectures. The `falcon` and `replit` architectures will soon also be supported.
+The GPT4All Chat UI supports models from all newer versions of `GGML`, `llama.cpp` including the `LLaMA`, `MPT`, `replit` and `GPT-J` architectures. The `falcon` architecture will soon also be supported.
 
 GPT4All maintains an official list of recommended models located in [models.json](https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-chat/metadata/models.json). You can pull request new models to it and if accepted they will show up in the official download dialog.
 
-#### Sideloading any ggML model
+#### Sideloading any GGML model
 If a model is compatible with the gpt4all-backend, you can sideload it into GPT4All Chat by:
 
-1. Downloading your model in ggML format. It should be a 3-8 GB file similar to the ones [here](https://huggingface.co/TheBloke/Samantha-7B-GGML/tree/main).
-2. Identifying your GPT4All Chat downloads folder. This is the path listed at the bottom of the download dialog.
-3. Prefixing your downloaded model with string `ggml-` and placing it into the GPT4All Chat downloads folder.
-4. Restarting your chat app. Your model should appear in the download dialog.
+1. Downloading your model in GGML format. It should be a 3-8 GB file similar to the ones [here](https://huggingface.co/TheBloke/Samantha-7B-GGML/tree/main).
+2. Identifying your GPT4All model downloads folder. This is the path listed at the bottom of the downloads dialog (three lines in the top left > Downloads).
+3. Placing your downloaded model inside GPT4All's model downloads folder.
+4. Restarting your GPT4All app. Your model should appear in the model selection list.
 
 ## Plugins
 GPT4All Chat Plugins allow you to expand the capabilities of Local LLMs.
diff --git a/gpt4all-bindings/python/setup.py b/gpt4all-bindings/python/setup.py
index 4a99ed86..21658c07 100644
--- a/gpt4all-bindings/python/setup.py
+++ b/gpt4all-bindings/python/setup.py
@@ -61,7 +61,7 @@ copy_prebuilt_C_lib(SRC_CLIB_DIRECtORY,
 
 setup(
     name=package_name,
-    version="0.3.5",
+    version="0.3.6",
     description="Python bindings for GPT4All",
     author="Richard Guo",
     author_email="richard@nomic.ai",
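The sideloading steps this patch documents (download a GGML model, find the downloads folder, place the file there) could be sketched as a small helper. This is a hypothetical `sideload_model` function, not part of the gpt4all package; the actual downloads folder path varies per platform and should be read from the path shown in the app's downloads dialog:

```python
# Hypothetical sketch of the manual sideloading steps: copy a downloaded
# GGML model file into the GPT4All model downloads folder so the chat app
# can pick it up after a restart. Paths here are illustrative examples.
from pathlib import Path
import shutil

def sideload_model(model_path: str, downloads_dir: str) -> Path:
    """Copy a GGML model file into the GPT4All downloads folder.

    model_path:    the 3-8 GB .bin file you downloaded (step 1)
    downloads_dir: the folder shown in the app's downloads dialog (step 2)
    """
    src = Path(model_path)
    dest_dir = Path(downloads_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)  # create it if missing
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # copy rather than move, keeping the original
    return dest
```

After running a helper like this, restarting the app (step 4) should make the model appear in the model selection list.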