ColossalAI/colossalai/shardformer/policies
Latest commit: 19e1a5cf16 by Hongxin Liu, 2024-03-27 11:19:32 +08:00
[shardformer] update colo attention to support custom mask (#5510)

Squashed sub-commits:
* [feature] refactor colo attention (#5462)
* [extension] update api
* [feature] add colo attention
* [feature] update sdpa
* [feature] update npu attention
* [feature] update flash-attn
* [test] add flash attn test
* [test] update flash attn test
* [shardformer] update modeling to fit colo attention (#5465)
* [misc] refactor folder structure
* [shardformer] update llama flash-attn
* [shardformer] fix llama policy
* [devops] update tensornvme install
* [test] update llama test
* [shardformer] update colo attn kernel dispatch
* [shardformer] update blip2
* [shardformer] update chatglm
* [shardformer] update gpt2
* [shardformer] update gptj
* [shardformer] update opt
* [shardformer] update vit
* [shardformer] update colo attention mask prep
* [shardformer] update whisper
* [test] fix shardformer tests (#5514)
* [test] fix shardformer tests
* [test] fix shardformer tests
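For context on what the headline change amounts to, here is a minimal sketch of an attention wrapper that accepts an optional custom mask and otherwise keeps the fused causal fast path, assuming PyTorch's `F.scaled_dot_product_attention`. The names `colo_attention_sketch` and `attention_mask` are hypothetical stand-ins for illustration, not ColossalAI's actual ColoAttention interface.

```python
# Illustrative sketch only; not the ColossalAI ColoAttention API.
from typing import Optional

import torch
import torch.nn.functional as F


def colo_attention_sketch(
    q: torch.Tensor,  # (batch, heads, seq_q, head_dim)
    k: torch.Tensor,  # (batch, heads, seq_kv, head_dim)
    v: torch.Tensor,  # (batch, heads, seq_kv, head_dim)
    # Additive float mask, broadcastable to (batch, heads, seq_q, seq_kv);
    # -inf at a position masks it out. Hypothetical parameter name.
    attention_mask: Optional[torch.Tensor] = None,
    is_causal: bool = False,
) -> torch.Tensor:
    # SDPA takes either an explicit mask or the is_causal fast path, not both,
    # so a custom mask forces the explicit-mask code path.
    if attention_mask is not None:
        return F.scaled_dot_product_attention(q, k, v, attn_mask=attention_mask)
    return F.scaled_dot_product_attention(q, k, v, is_causal=is_causal)


# Usage: a causal mask with the last two key positions of sample 0 padded out.
b, h, s, d = 2, 4, 8, 16
q = k = v = torch.randn(b, h, s, d)
mask = torch.zeros(b, 1, s, s)
causal = torch.triu(torch.ones(s, s, dtype=torch.bool), diagonal=1)
mask = mask.masked_fill(causal, float("-inf"))
mask[0, :, :, -2:] = float("-inf")
out = colo_attention_sketch(q, k, v, attention_mask=mask)
print(out.shape)  # torch.Size([2, 4, 8, 16])
```

A custom additive mask subsumes both causal and padding masks, which is why the sub-commits above touch mask prep and kernel dispatch together: a kernel that only supports a built-in causal pattern cannot serve models that need padding or cross-attention masks.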
Name           | Last commit                                                                                  | Date
__init__.py    | [shardformer] init shardformer code structure (#3731)                                        | 2023-07-04 16:05:01 +08:00
auto_policy.py | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088)  | 2023-11-28 16:54:42 +08:00
base_policy.py | [shardformer] fix gathering output when using tensor parallelism (#5431)                     | 2024-03-18 15:55:11 +08:00
bert.py        | [pipeline]: fix p2p comm, add metadata cache and support llama interleaved pp (#5134)        | 2023-12-22 10:44:00 +08:00
blip2.py       | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926)                   | 2023-11-03 13:32:43 +08:00
bloom.py       | [shardformer]: support gpt-j, falcon, Mistral and add interleaved pipeline for bert (#5088)  | 2023-11-28 16:54:42 +08:00
chatglm2.py    | [Inference] Fix bug in ChatGLM2 Tensor Parallelism (#5014)                                   | 2023-11-07 15:01:50 +08:00
falcon.py      | fix typo change dosen't to doesn't (#5308)                                                   | 2024-01-30 09:57:38 +08:00
gpt2.py        | [shardformer] update colo attention to support custom mask (#5510)                           | 2024-03-27 11:19:32 +08:00
gptj.py        | [shardformer] update colo attention to support custom mask (#5510)                           | 2024-03-27 11:19:32 +08:00
llama.py       | [shardformer] update colo attention to support custom mask (#5510)                           | 2024-03-27 11:19:32 +08:00
mistral.py     | fix typo change dosen't to doesn't (#5308)                                                   | 2024-01-30 09:57:38 +08:00
opt.py         | [shardformer] update colo attention to support custom mask (#5510)                           | 2024-03-27 11:19:32 +08:00
sam.py         | [hotfix] Add layer norm gradients all-reduce for sequence parallel (#4926)                   | 2023-11-03 13:32:43 +08:00
t5.py          | fix typo change dosen't to doesn't (#5308)                                                   | 2024-01-30 09:57:38 +08:00
vit.py         | fix typo change dosen't to doesn't (#5308)                                                   | 2024-01-30 09:57:38 +08:00
whisper.py     | [shardformer] update colo attention to support custom mask (#5510)                           | 2024-03-27 11:19:32 +08:00
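Each model file above (bert.py, llama.py, whisper.py, ...) supplies one policy describing how to shard that architecture, with base_policy.py defining the shared interface and auto_policy.py selecting the right policy for a given model. The sketch below shows that general shape under stated assumptions: the class and method names are simplified placeholders modeled on this layout, not the exact ColossalAI interfaces.

```python
# Simplified sketch of the policy pattern in this directory; placeholder names.
from abc import ABC, abstractmethod

import torch.nn as nn


class Policy(ABC):
    """Per-architecture sharding description (cf. base_policy.py)."""

    def set_model(self, model: nn.Module) -> None:
        self.model = model

    @abstractmethod
    def module_policy(self) -> dict:
        """Describe which submodules get replaced or sharded, and how."""


class LlamaPolicySketch(Policy):
    """Toy stand-in for the policy in llama.py."""

    def module_policy(self) -> dict:
        # The real policy returns structured substitution descriptions that
        # tell shardformer to swap linears for tensor-parallel layers and to
        # route attention through the colo attention forward; plain strings
        # here just illustrate the mapping.
        return {
            "self_attn.q_proj": "column-parallel",
            "self_attn.o_proj": "row-parallel",
        }


# Toy version of the name-to-policy dispatch that auto_policy.py performs.
_POLICY_REGISTRY: dict = {"LlamaForCausalLM": LlamaPolicySketch}


def get_auto_policy(model: nn.Module) -> Policy:
    policy = _POLICY_REGISTRY[type(model).__name__]()
    policy.set_model(model)
    return policy
```

Keeping one policy file per architecture is why a single change like #5510 touches many files here: every model whose attention forward moved onto the shared colo attention path needed its policy updated in lockstep.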