mirror of https://github.com/hpcaitech/ColossalAI.git synced 2025-05-08 00:18:25 +00:00
Commit Graph

10 Commits

Author · SHA1 · Date · Message

Hongxin Liu · 079bf3cb26 · 2023-09-19 14:20:26 +08:00
  [misc] update pre-commit and run all files ()
    * [misc] update pre-commit
    * [misc] run pre-commit
    * [misc] remove useless configuration files
    * [misc] ignore cuda for clang-format

Frank Lee · 40d376c566 · 2023-01-06 20:50:26 +08:00
  [setup] support pre-build and jit-build of cuda kernels ()
    * [setup] support pre-build and jit-build of cuda kernels
    * polish code
    * polish code
    * polish code
    * polish code
    * polish code
    * polish code

Jiarui Fang · 16cc8e6aa7 · 2023-01-03 20:29:39 +08:00
  [builder] MOE builder ()

ver217 · f8a7148dec · 2022-11-17 13:42:33 +08:00
  [kernel] move all symlinks of kernel to colossalai._C ()

HELSON · aff9d354f7 · 2022-03-19 15:36:25 +08:00
  [MOE] polish moe_env ()

HELSON · bccbc15861 · 2022-03-19 13:46:29 +08:00
  [MOE] changed parallelmode to dist process group ()

1SAA · 82023779bb · 2022-03-11 15:50:28 +08:00
  Added TPExpert for special situation

HELSON · 36b8477228 · 2022-03-11 15:50:28 +08:00
  Fixed parameter initialization in FFNExpert ()

1SAA · 219df6e685 · 2022-03-11 15:50:28 +08:00
  Optimized MoE layer and fixed some bugs;
  Decreased moe tests;
  Added FFNExperts and ViTMoE model

HELSON · dceae85195 · 2022-01-07 15:08:36 +08:00
  Added MoE parallel ()