ColossalAI/tests/test_shardformer
duanjunwen 7795d4c50d
[Feature] Support Distributed LogProb for GRPO Training (#6247)
* [fix] fix qwen VocabParallelLMHead1D and gather output

* fix tp bug

* fix consumer

* [feat] Support Distributed LogProb for GRPO Training

* [fix] fix loss func

* [fix] fix log prob plugin

* [fix] fix qwen modeling param

* [fix] rm comments

* [fix] remove hard-coded values; fix non-dist version

* [fix] fix test file param name and benchmark tp gather output=True/False

* [fix] rm non-dist version in dist log prob

* [fix] fix comments

* [fix] fix dist log prob plugin

* [fix] fix test case

* [fix] fix qwen VocabParallelLMHead1D and gather output

* [fix] fix DistLogProb comments

* [fix] restore tp size

* [fix] fix comments

* [fix] fix comment; fix LogSoftmax usage

---------

Co-authored-by: Tong Li <tong.li35271158@gmail.com>
2025-03-18 17:47:55 +08:00
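
The commit above adds distributed log-prob support for GRPO training on top of a vocab-parallel LM head (VocabParallelLMHead1D without gathering the full logits). As a rough illustration of the general technique only, and not the ColossalAI DistLogProb implementation, the following minimal PyTorch sketch computes per-token log-probs when each tensor-parallel rank holds one contiguous vocabulary shard; the function name, shapes, and the even vocab split are assumptions for the example.

# Hypothetical sketch: per-token log-probs over vocab-parallel logits.
# shard_logits: [B, T, V_local] local vocab shard; target: [B, T] global token ids (int64).
import torch
import torch.distributed as dist

def dist_log_prob(shard_logits: torch.Tensor, target: torch.Tensor,
                  tp_group: dist.ProcessGroup) -> torch.Tensor:
    rank = dist.get_rank(tp_group)
    v_local = shard_logits.size(-1)
    vocab_start = rank * v_local          # assumes an even vocab split per rank
    vocab_end = vocab_start + v_local

    # 1) Numerically stable log-softmax denominator across all vocab shards.
    local_max = shard_logits.max(dim=-1, keepdim=True).values
    dist.all_reduce(local_max, op=dist.ReduceOp.MAX, group=tp_group)   # global max
    sum_exp = (shard_logits - local_max).exp().sum(dim=-1, keepdim=True)
    dist.all_reduce(sum_exp, op=dist.ReduceOp.SUM, group=tp_group)     # global sum of exp

    # 2) Pick out the target-token logit; only the rank owning that vocab slice contributes.
    in_shard = (target >= vocab_start) & (target < vocab_end)
    local_idx = (target - vocab_start).clamp(0, v_local - 1)
    target_logit = shard_logits.gather(-1, local_idx.unsqueeze(-1)).squeeze(-1)
    target_logit = target_logit * in_shard
    dist.all_reduce(target_logit, op=dist.ReduceOp.SUM, group=tp_group)

    # 3) log p(target) = target_logit - logsumexp(all logits over the full vocab)
    return target_logit - local_max.squeeze(-1) - sum_exp.squeeze(-1).log()

Computing the log-prob this way needs only two scalar-per-token all-reduces plus one for the target logit, instead of gathering the full [B, T, V] logits on every rank, which is the motivation for keeping gather_output=False on the parallel LM head.
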
test_hybrid_parallel_grad_clip_norm [MoE/ZeRO] Moe refactor with zero refactor (#5821) 2024-06-28 14:00:08 +08:00
test_layer [Feature] Support Distributed LogProb for GRPO Training (#6247) 2025-03-18 17:47:55 +08:00
test_model [release] update version (#6195) 2025-02-20 11:36:18 +08:00
__init__.py
test_flash_attention.py [Feature] Zigzag Ring attention (#5905) 2024-08-16 13:56:38 +08:00
test_shard_utils.py
test_with_torch_ddp.py