From 7ff11b5537123b50d8b1b3b0fbaca0fa31d9481b Mon Sep 17 00:00:00 2001
From: binmakeswell
Date: Mon, 17 Jul 2023 21:07:44 +0800
Subject: [PATCH] [example] add llama pretraining (#4257)

---
 README.md                         | 11 +++++++++++
 docs/README-zh-Hans.md            | 10 ++++++++++
 examples/language/llama/README.md | 11 +++++++++++
 3 files changed, 32 insertions(+)
 create mode 100644 examples/language/llama/README.md

diff --git a/README.md b/README.md
index 34c8a6b73..21670e1e5 100644
--- a/README.md
+++ b/README.md
@@ -25,6 +25,7 @@
 ## Latest News
+* [2023/07] [65B Model Pretraining Accelerated by 38%, Best Practices for Building LLaMA-Like Base Models Open-Source](https://www.hpc-ai.tech/blog/large-model-pretraining)
 * [2023/03] [ColossalChat: An Open-Source Solution for Cloning ChatGPT With a Complete RLHF Pipeline](https://medium.com/@yangyou_berkeley/colossalchat-an-open-source-solution-for-cloning-chatgpt-with-a-complete-rlhf-pipeline-5edf08fb538b)
 * [2023/03] [Intel and Colossal-AI Partner to Deliver Cost-Efficient Open-Source Solution for Protein Folding Structure Prediction](https://www.hpc-ai.tech/blog/intel-habana)
 * [2023/03] [AWS and Google Fund Colossal-AI with Startup Cloud Programs](https://www.hpc-ai.tech/blog/aws-and-google-fund-colossal-ai-with-startup-cloud-programs)
@@ -49,6 +50,7 @@
  • Parallel Training Demo