diff --git a/README.md b/README.md
index b9ee29db..42e91fb4 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,7 @@
 # Try it yourself
 
 You can download pre-compiled LLaMa C++ Interactive Chat binaries here:
 
-- [OSX Executable](https://s3.amazonaws.com/static.nomic.ai/gpt4all/models/gpt4all-lora-quantized-OSX-m1)
+- [OSX](https://s3.amazonaws.com/static.nomic.ai/gpt4all/models/gpt4all-lora-quantized-OSX-m1)
 - [Intel/Windows]()
 
 and the model