From aa4dd0eaefda8cb68e672f09b1d624aa5b524916 Mon Sep 17 00:00:00 2001
From: Andriy Mulyar
Date: Wed, 29 Mar 2023 12:26:47 -0400
Subject: [PATCH] Qualified number of epochs for LoRA weights

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 19120065..770a2806 100644
--- a/README.md
+++ b/README.md
@@ -33,8 +33,8 @@ Note: the full model on GPU (16GB of RAM required) performs much better in our q
 # Reproducibility
 
 Trained LoRa Weights:
-- gpt4all-lora: https://huggingface.co/nomic-ai/gpt4all-lora
-- gpt4all-lora-epoch-2 https://huggingface.co/nomic-ai/gpt4all-lora-epoch-2
+- gpt4all-lora (four full epochs of training): https://huggingface.co/nomic-ai/gpt4all-lora
+- gpt4all-lora-epoch-2 (three full epochs of training): https://huggingface.co/nomic-ai/gpt4all-lora-epoch-2
 
 Raw Data:
 - [Training Data Without P3](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2022_03_27/gpt4all_curated_data_without_p3_2022_03_27.tar.gz)
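For anyone reproducing results from these checkpoints, here is a minimal sketch of how one of the LoRA adapters named in the patch might be loaded with the HuggingFace `peft` library. This is an illustration, not a snippet from the repo: the `BASE_MODEL` path is a placeholder assumption (the adapters were trained against a LLaMA-7B base model that must be obtained separately), and current `transformers`/`peft` releases are assumed.

```python
# Sketch: attach a gpt4all LoRA adapter to a LLaMA-7B base model via peft.
# BASE_MODEL is a placeholder; supply your own LLaMA-7B checkpoint path.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "path/to/llama-7b"  # assumption: local LLaMA-7B weights

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# "nomic-ai/gpt4all-lora" is the four-epoch adapter from the patch above;
# swap in "nomic-ai/gpt4all-lora-epoch-2" for the three-epoch checkpoint.
model = PeftModel.from_pretrained(base, "nomic-ai/gpt4all-lora")
model.eval()

inputs = tokenizer("Explain what a LoRA adapter is.", return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```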