ColossalAI/tests/test_shardformer
littsk 1a3315e336
[hotfix] Add layer norm gradients all-reduce for sequence parallel ()
* [hotfix] Add layer norm gradients all-reduce for sequence parallel. ()

* Add layer norm gradients all-reduce for sequence parallel.

* skip pipeline inference test

* [hotfix] fixing policies of sequence parallel ()

* Add layer norm gradients all-reduce for sequence parallel.

* fix parameter passing when calling get_autopolicy

---------

Co-authored-by: littsk <1214689160@qq.com>

* Hotfix/add grad all reduce for sequence parallel ()

* Add layer norm gradients all-reduce for sequence parallel.


* fix parameter passing when calling get_autopolicy

* fix bug using wrong variables

---------

Co-authored-by: littsk <1214689160@qq.com>

* fix policy initialization

* fix bloom and chatglm policies

* polish layernorm handling code

* fix moe module

* polish class initialization code

---------

Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
2023-11-03 13:32:43 +08:00
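The hotfix above targets a correctness issue in sequence parallelism: LayerNorm parameters are replicated on every rank while their activations are split along the sequence dimension, so each rank computes only a partial gradient for those parameters and the partials must be summed across the sequence-parallel group before the optimizer step. The sketch below illustrates that idea with plain PyTorch collectives; the `seq_parallel_group` handle, the function name, and the call placement are assumptions for illustration, not ColossalAI's actual implementation.

```python
import torch
import torch.distributed as dist
from torch import nn


def allreduce_layernorm_grads(model: nn.Module, seq_parallel_group: dist.ProcessGroup) -> None:
    """Sum LayerNorm weight/bias gradients across the sequence-parallel group.

    With sequence parallelism the activations are split along the sequence
    dimension, so each rank holds only a partial gradient for the replicated
    LayerNorm parameters; the all-reduce restores the full gradient.
    """
    for module in model.modules():
        if isinstance(module, nn.LayerNorm):
            for param in module.parameters():
                if param.grad is not None:
                    dist.all_reduce(param.grad, op=dist.ReduceOp.SUM, group=seq_parallel_group)


# Hypothetical call site: after backward, before the optimizer step.
# loss.backward()
# allreduce_layernorm_grads(model, seq_parallel_group)
# optimizer.step()
```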
Name | Last commit | Last updated
test_hybrid_parallel_grad_clip_norm | [hotfix] Add layer norm gradients all-reduce for sequence parallel () | 2023-11-03 13:32:43 +08:00
test_layer | [misc] update pre-commit and run all files () | 2023-09-19 14:20:26 +08:00
test_model | [hotfix] Add layer norm gradients all-reduce for sequence parallel () | 2023-11-03 13:32:43 +08:00
__init__.py | [shardformer] adapted T5 and LLaMa test to use kit () | 2023-07-04 16:05:01 +08:00
test_shard_utils.py | [misc] update pre-commit and run all files () | 2023-09-19 14:20:26 +08:00
test_with_torch_ddp.py | [misc] update pre-commit and run all files () | 2023-09-19 14:20:26 +08:00
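For context on what `test_hybrid_parallel_grad_clip_norm` exercises: when parameters are sharded across ranks, gradient clipping by global norm must accumulate the squared local norms over the parallel group before scaling, otherwise each rank clips against a different, too-small norm. The sketch below shows the general technique with plain `torch.distributed`; the function name and group handle are hypothetical and are not ColossalAI's API.

```python
import torch
import torch.distributed as dist


def clip_grad_norm_sharded(params, max_norm: float, group: dist.ProcessGroup) -> torch.Tensor:
    """Clip gradients by global L2 norm when each rank holds only a parameter shard."""
    grads = [p.grad for p in params if p.grad is not None]
    # Local contribution to the squared global norm.
    local_sq = torch.zeros(1, device=grads[0].device)
    for g in grads:
        local_sq += g.pow(2).sum()
    # Sum the squared norms over all ranks that hold distinct shards.
    dist.all_reduce(local_sq, op=dist.ReduceOp.SUM, group=group)
    total_norm = local_sq.sqrt()
    clip_coef = max_norm / (total_norm + 1e-6)
    if clip_coef < 1.0:
        for g in grads:
            g.mul_(clip_coef)
    return total_norm
```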