Mirror of https://github.com/hpcaitech/ColossalAI.git (synced 2026-01-19 00:55:09 +00:00)
ColossalAI/colossalai/nn at commit 504ff1d101934a897d33c2c3ff33fdcaa90a5a2b
Latest commit: Jiarui Fang 504ff1d101 [embeddings] use cache_ratio instead of cuda_row_num (#1611), 2022-09-20 14:33:04 +08:00
Name          Last commit (date)
_ops          [NFC] polish colossalai/nn/_ops/embedding.py code style (#1561)  (2022-09-08 22:11:04 +08:00)
graph         [NFC] polish doc style for ColoTensor (#1457)  (2022-08-16 09:21:05 +08:00)
layer         [NFC] polish colossalai/nn/layer/colossalai_layer/dropout.py code style (#1568)  (2022-09-08 22:11:04 +08:00)
loss          [NFC] polish colossalai/nn/loss/loss_2p5d.py code style (#1553)  (2022-09-08 22:11:04 +08:00)
lr_scheduler  [NFC] polish colossalai/nn/lr_scheduler/multistep.py code style (#1572)  (2022-09-08 22:11:04 +08:00)
metric        [hotfix] Raise messages for indivisible batch sizes with tensor parallelism (#622)  (2022-04-02 16:12:04 +08:00)
optimizer     fix nvme docstring (#1450)  (2022-08-12 18:01:02 +08:00)
parallel      [embeddings] use cache_ratio instead of cuda_row_num (#1611)  (2022-09-20 14:33:04 +08:00)
__init__.py   [pipeline] refactor the pipeline module (#1087)  (2022-06-10 11:27:38 +08:00)
init.py       [NFC] polish colossalai/nn/init.py code style (#1292)  (2022-07-13 10:51:55 +08:00)