Update TRAINING_LOG.md

This commit is contained in:
Zach Nussbaum 2023-03-28 13:46:51 -07:00 committed by GitHub
parent 1caf76f230
commit 5cfb794e0a


@@ -234,4 +234,4 @@ Taking inspiration from [the Alpaca Repo](https://github.com/tatsu-lab/stanford_
 Comparing our model LoRa to the [Alpaca LoRa](https://huggingface.co/tloen/alpaca-lora-7b), our model has lower perplexity. Qualitatively, training on 3 epochs performed the best on perplexity as well as qualitative examples.
-We tried training a full model using the parameters above, but found that during the second epoch the model overfit.
+We tried training a full model using the parameters above, but found that during the second epoch the model diverged and that samples generated after training were worse than those from the first epoch.