github/ColossalAI
Mirror of https://github.com/hpcaitech/ColossalAI.git, synced 2025-09-28 13:05:26 +00:00
Path: ColossalAI/colossalai/nn/layer
Commit: 015af592f805d84b1713a06f5324ae40b05c3e84
Latest commit: 015af592f8 "[shardformer] integrated linear 1D with dtensor (#3996)" by Frank Lee, 2023-07-04 16:05:01 +08:00
* [shardformer] integrated linear 1D with dtensor
* polish code
Name              | Last commit                                                                                | Date
colossalai_layer  | fixed using zero with tp cannot access weight correctly                                    | 2023-02-28 10:52:30 +08:00
moe               | [doc] Fix typo under colossalai and doc(#3618)                                             | 2023-04-26 11:38:43 +08:00
parallel_1d       | [shardformer] add Dropout layer support different dropout pattern (#3856)                  | 2023-07-04 16:05:01 +08:00
parallel_2d       | [utils] refactor parallel layers checkpoint and bcast model on loading checkpoint (#1548)  | 2022-09-06 20:18:35 +08:00
parallel_2p5d     | [utils] refactor parallel layers checkpoint and bcast model on loading checkpoint (#1548)  | 2022-09-06 20:18:35 +08:00
parallel_3d       | improved allgather & reducescatter for 3d                                                  | 2023-01-03 17:46:08 +08:00
parallel_sequence | [nfc] fix typo colossalai/nn (#3887)                                                       | 2023-06-05 16:04:27 +08:00
utils             | [NFC] polish colossalai/nn/layer/utils/common.py code style (#983)                         | 2022-05-17 10:25:06 +08:00
vanilla           | added skip_bias_add for non-tp linear                                                      | 2022-11-09 15:41:08 +08:00
wrapper           | [NFC] polish colossalai/nn/layer/wrapper/pipeline_wrapper.py code style (#1303)            | 2022-07-13 19:01:07 +08:00
__init__.py       | [MOE] changed parallelmode to dist process group (#460)                                    | 2022-03-19 13:46:29 +08:00
base_layer.py     | [shardformer] integrated linear 1D with dtensor (#3996)                                    | 2023-07-04 16:05:01 +08:00