Overview

This directory includes two parts: using the Booster API to fine-tune the Hugging Face BERT and ALBERT models, and benchmarking BERT and ALBERT with different Booster plugins.
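
For orientation, the rough shape of the Booster workflow that finetune.py follows is sketched below; the model, optimizer, and hyperparameters are illustrative assumptions, not the script's exact configuration.

```python
# Sketch of the Booster fine-tuning loop (assumed setup, not the exact
# contents of finetune.py).
import colossalai
import torch
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin
from transformers import BertForSequenceClassification

colossalai.launch_from_torch(config={})  # set up the distributed environment

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
criterion = torch.nn.CrossEntropyLoss()

# The plugin decides the parallelism strategy; TorchDDPPlugin is the simplest.
booster = Booster(plugin=TorchDDPPlugin())
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)

for batch in train_dataloader:  # train_dataloader: built elsewhere (e.g. data.py)
    outputs = model(**batch)
    loss = criterion(outputs.logits, batch["labels"])
    booster.backward(loss, optimizer)  # replaces plain loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```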

Finetune

bash test_ci.sh

BERT Fine-tuning Results

Plugin           Accuracy  F1-score  GPU number
torch_ddp        84.4%     88.6%     2
torch_ddp_fp16   84.7%     88.8%     2
gemini           84.0%     88.4%     2
hybrid_parallel  84.5%     88.6%     4
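
The plugin names in the table correspond to Booster plugin classes. A hedged sketch of how they might be constructed follows; the constructor arguments are assumptions, so check finetune.py for the actual values.

```python
# Assumed mapping from the table's plugin names to Booster plugin classes;
# constructor arguments are illustrative, not copied from finetune.py.
from colossalai.booster import Booster
from colossalai.booster.plugin import GeminiPlugin, HybridParallelPlugin, TorchDDPPlugin

def build_booster(name: str) -> Booster:
    if name == "torch_ddp":
        return Booster(plugin=TorchDDPPlugin())
    if name == "torch_ddp_fp16":
        # same plugin, with mixed precision handled by the booster
        return Booster(plugin=TorchDDPPlugin(), mixed_precision="fp16")
    if name == "gemini":
        return Booster(plugin=GeminiPlugin())
    if name == "hybrid_parallel":
        # tp/pp split is an assumption; the 4-GPU run above implies some split
        return Booster(plugin=HybridParallelPlugin(tp_size=1, pp_size=2))
    raise ValueError(f"unknown plugin: {name}")
```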

Benchmark

bash benchmark.sh

The benchmark currently covers these metrics: peak CUDA memory usage, throughput (samples/s), and the number of model parameters. If you need custom metrics, you can add them in benchmark_utils.py.
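
As a rough illustration of how such metrics can be collected with plain PyTorch (an assumed sketch, not the actual implementation in benchmark_utils.py):

```python
# Assumed sketch of collecting the three reported metrics with plain PyTorch;
# benchmark_utils.py may implement them differently.
import time
import torch

def count_params(model: torch.nn.Module) -> int:
    # the "params" column: total number of model parameters
    return sum(p.numel() for p in model.parameters())

def timed_step(model, batch, optimizer) -> tuple[float, float]:
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.time()
    loss = model(**batch).loss  # HF models return a loss when labels are given
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    torch.cuda.synchronize()
    elapsed = time.time() - start
    peak_mem_gb = torch.cuda.max_memory_allocated() / 1024**3  # "max cuda mem"
    samples_per_sec = batch["input_ids"].size(0) / elapsed     # "throughput"
    return peak_mem_gb, samples_per_sec
```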

Results

BERT

Plugin          Max CUDA memory  Throughput (samples/s)  Params
ddp             21.44 GB         3.0                     82M
ddp_fp16        16.26 GB         11.3                    82M
gemini          11.0 GB          12.9                    82M
low_level_zero  11.29 GB         14.7                    82M

ALBERT

Plugin          Max CUDA memory  Throughput (samples/s)  Params
ddp             OOM              -                       -
ddp_fp16        OOM              -                       -
gemini          69.39 GB         1.3                     208M
low_level_zero  56.89 GB         1.4                     208M