Mirror of https://github.com/nomic-ai/gpt4all.git, synced 2025-09-17 08:18:51 +00:00
repo: use the new GPT4All website URL (#2915)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
@@ -1,7 +1,7 @@
 <h1 align="center">GPT4All</h1>
 
 <p align="center">
-<a href="https://gpt4all.io">Website</a> • <a href="https://docs.gpt4all.io">Documentation</a> • <a href="https://discord.gg/mGZE39AS3e">Discord</a>
+<a href="https://www.nomic.ai/gpt4all">Website</a> • <a href="https://docs.gpt4all.io">Documentation</a> • <a href="https://discord.gg/mGZE39AS3e">Discord</a>
 </p>
 
 <p align="center">
@@ -89,7 +89,7 @@ with model.chat_session():
   - Improved user workflow for LocalDocs
   - Expanded access to more model architectures
 - **October 19th, 2023**: GGUF Support Launches with Support for:
-  - Mistral 7b base model, an updated model gallery on [gpt4all.io](https://gpt4all.io), several new local code models including Rift Coder v1.5
+  - Mistral 7b base model, an updated model gallery on our website, several new local code models including Rift Coder v1.5
   - [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) support for Q4\_0 and Q4\_1 quantizations in GGUF.
   - Offline build support for running old versions of the GPT4All Local LLM Chat Client.
 - **September 18th, 2023**: [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) launches supporting local LLM inference on NVIDIA and AMD GPUs.