diff --git a/README.md b/README.md
index 5bd6255e..097a8e65 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,7 @@
 
 # Try it yourself
 
-Clone this repository down, go the `chat` directory and download the CPU quantized gpt4all model.
+Clone this repository down and download the CPU quantized gpt4all model.
 - [gpt4all-quantized](https://s3.amazonaws.com/static.nomic.ai/gpt4all/models/gpt4all-lora-quantized.bin)
 
 Place the quantized model in the `chat` directory and start chatting by running:
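
For reference, the workflow described by the updated section boils down to the following shell sketch. It is not part of the diff: only the model download link comes from the hunk above, while the repository URL and the per-platform chat binary name are assumptions, so check the README for the exact command on your platform.

```bash
# Sketch of the "Try it yourself" flow, assuming the nomic-ai/gpt4all repository
# layout; the binary name below is illustrative, pick the build for your OS.
git clone https://github.com/nomic-ai/gpt4all.git
cd gpt4all/chat

# Download the CPU quantized model straight into the chat directory
curl -LO https://s3.amazonaws.com/static.nomic.ai/gpt4all/models/gpt4all-lora-quantized.bin

# Start chatting (example binary name; Linux x86 build assumed)
./gpt4all-lora-quantized-linux-x86
```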