diff --git a/README.md b/README.md
index 4bbc7212..663785b4 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,8 @@ Download the CPU quantized gpt4all model checkpoint: [gpt4all-lora-quantized.bin
 Clone this repository down and place the quantized model in the `chat` directory and start chatting by running:
 
 - `cd chat;./gpt4all-lora-quantized-OSX-m1` on M1 Mac/OSX
-- `cd chat;./gpt4all-lora-quantized-linux-x86` on Windows/Linux
+- `cd chat;./gpt4all-lora-quantized-linux-x86` on Linux
+- `cd chat;./gpt4all-lora-quantized-win64.exe` on Windows (PowerShell)
 
 To compile for custom hardware, see our fork of the [Alpaca C++](https://github.com/zanussbaum/gpt4all.cpp) repo.
 
diff --git a/chat/gpt4all-lora-quantized-win64.exe b/chat/gpt4all-lora-quantized-win64.exe
new file mode 100644
index 00000000..d97987c9
Binary files /dev/null and b/chat/gpt4all-lora-quantized-win64.exe differ
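For context, the added README line corresponds to a PowerShell session roughly like the one below. This is a sketch, not part of the diff: it assumes the repository has already been cloned and that the quantized model checkpoint `gpt4all-lora-quantized.bin` has been placed in the `chat` directory as the README instructs. `.\` is the conventional PowerShell path prefix and behaves the same as the `./` written in the diff.

```powershell
# From the repository root, in PowerShell.
# Assumes gpt4all-lora-quantized.bin already sits in the chat directory.
cd chat
.\gpt4all-lora-quantized-win64.exe   # launches the interactive chat session
```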