Mirror of https://github.com/hpcaitech/ColossalAI.git, synced 2026-01-13 11:34:37 +00:00
Path: ColossalAI/colossalai/nn
Revision: 367c61581806fbaa7e85f47830dd3f69b3e60bd9
Latest commit: ver217 367c615818 fix nvme docstring (#1450), 2022-08-12 18:01:02 +08:00
Name | Last commit | Date
_ops | [tensor] added linear implementation for the new sharding spec (#1416) | 2022-08-12 11:33:09 +08:00
graph | [refactor] move process group from _DistSpec to ColoTensor. (#1203) | 2022-07-06 16:15:16 +08:00
layer | [NFC] polish colossalai/nn/layer/wrapper/pipeline_wrapper.py code style (#1303) | 2022-07-13 19:01:07 +08:00
loss | [tensor] add unitest for colo_tensor 1DTP cross_entropy (#1230) | 2022-07-07 19:17:23 +08:00
lr_scheduler | [NFC] polish colossalai/nn/lr_scheduler/onecycle.py code style (#1269) | 2022-07-13 12:08:21 +08:00
metric | [hotfix] Raise messages for indivisible batch sizes with tensor parallelism (#622) | 2022-04-02 16:12:04 +08:00
optimizer | fix nvme docstring (#1450) | 2022-08-12 18:01:02 +08:00
parallel | [FAW] reorganize the inheritance struct of FreqCacheEmbedding (#1448) | 2022-08-12 15:55:46 +08:00
__init__.py | [pipeline] refactor the pipeline module (#1087) | 2022-06-10 11:27:38 +08:00
init.py | [NFC] polish colossalai/nn/init.py code style (#1292) | 2022-07-13 10:51:55 +08:00
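
For orientation, these subpackages are what downstream code typically imports from colossalai.nn: optimizers (including the NVMe-offload-capable ones whose docstrings the latest commit fixes), learning-rate schedulers, parallel layers, losses, and metrics. The lines below are a minimal illustrative sketch only; the class names HybridAdam and CosineAnnealingWarmupLR, and their argument names, are assumptions about this revision's public API rather than something confirmed by the listing above.

    # Illustrative sketch only; HybridAdam and CosineAnnealingWarmupLR are assumed
    # names for classes under colossalai/nn/optimizer and colossalai/nn/lr_scheduler.
    import torch
    import torch.nn as nn
    from colossalai.nn.optimizer import HybridAdam                   # assumed export
    from colossalai.nn.lr_scheduler import CosineAnnealingWarmupLR   # assumed export

    model = nn.Linear(1024, 1024)
    # The optimizer subpackage is where the "fix nvme docstring (#1450)" commit lands;
    # HybridAdam is assumed to take the usual params-and-lr constructor arguments.
    optimizer = HybridAdam(model.parameters(), lr=1e-3)
    # Warmup followed by cosine decay over a fixed number of steps (argument names assumed).
    scheduler = CosineAnnealingWarmupLR(optimizer, total_steps=1000, warmup_steps=100)

    for step in range(1000):
        optimizer.zero_grad()
        loss = model(torch.randn(8, 1024)).sum()
        loss.backward()
        optimizer.step()
        scheduler.step()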