diff --git a/README.md b/README.md
index a04833a6..d92e50b0 100644
--- a/README.md
+++ b/README.md
@@ -2,17 +2,15 @@
Open-source large language models that run locally on your CPU and nearly any GPU
-GPT4All Website and Models
-
-GPT4All Documentation
-
-Discord
-
+Join the GPT4All 2024 Roadmap Townhall on April 18, 2024 at 12pm EST
+
+GPT4All Website and Models • GPT4All Documentation • Discord
 🦜️🔗 Official Langchain Backend
@@ -35,9 +33,6 @@ Run on an M1 macOS Device (not sped up!)
 ## GPT4All: An ecosystem of open-source on-edge large language models.
 
-> [!IMPORTANT]
-> GPT4All v2.5.0 and newer only supports models in GGUF format (.gguf). Models used with a previous version of GPT4All (.bin extension) will no longer work.
-
 GPT4All is an ecosystem to run **powerful** and **customized** large language models that work locally on consumer grade CPUs and any GPU. Note that your CPU needs to support [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions). Learn more in the [documentation](https://docs.gpt4all.io).
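
For context, not part of the diff itself: the removed GGUF note and the ecosystem paragraph both boil down to loading a local `.gguf` model. Below is a minimal sketch using the `gpt4all` Python bindings; it assumes the package is installed (`pip install gpt4all`), and the model filename is only an example that the bindings will download on first use if it is not already present locally.

```python
from gpt4all import GPT4All

# GPT4All v2.5.0 and newer accept only GGUF-format models (.gguf);
# the filename below is an example, not something mandated by this diff.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

# Runs entirely locally on the CPU (or a supported GPU backend).
with model.chat_session():
    print(model.generate("Explain what the GGUF format is.", max_tokens=128))
```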