Mirror of https://github.com/hpcaitech/ColossalAI.git, synced 2025-09-01 17:17:05 +00:00
[FP8] rebase main (#5963)
* add SimPO
* fix dataloader
* remove debug code
* add orpo
* fix style
* fix colossalai, transformers version
* fix colossalai, transformers version
* fix colossalai, transformers version
* fix torch colossalai version
* update transformers version
* [shardformer] DeepseekMoE support (#5871)
* [Feature] deepseek moe expert parallel implement
* [misc] fix typo, remove redundant file (#5867)
* [misc] fix typo
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] deepseek support & unit test
* [misc] remove debug code & useless print
* [misc] fix typos (#5872)
* [Feature] remove modeling file, use auto config. (#5884)
* [misc] fix typos
* [Feature] deepseek support via auto model, remove modeling file
* [misc] delete useless file
* [misc] fix typos
* [Deepseek] remove redundant code (#5888)
* [misc] fix typos
* [Feature] deepseek support via auto model, remove modeling file
* [misc] delete useless file
* [misc] fix typos
* [misc] remove redundant code
* [Feature/deepseek] resolve comment. (#5889)
* [misc] fix typos
* [Feature] deepseek support via auto model, remove modeling file
* [misc] delete useless file
* [misc] fix typos
* [misc] remove redundant code
* [misc] mv module replacement into if branch
* [misc] add some warning message and modify some code in unit test
* [misc] fix typos
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Hotfix] Fix CUDA_DEVICE_MAX_CONNECTIONS for comm overlap
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [Feat] Diffusion Model(PixArtAlpha/StableDiffusion3) Support (#5838)
* Diffusion Model Inference support
* Stable Diffusion 3 Support
* pixartalpha support
* [HotFix] CI,import,requirements-test for #5838 (#5892)
* [Hot Fix] CI,import,requirements-test
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [Feature] Enable PP + SP for llama (#5868)
* fix cross-PP-stage position id length diff bug
* fix typo
* fix typo
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* use one cross entropy func for all shardformer models
---------
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [ShardFormer] Add Ulysses Sequence Parallelism support for Command-R, Qwen2 and ChatGLM (#5897)
* add benchmark for sft, dpo, simpo, orpo. Add benchmarking result. Support lora with gradient checkpoint
* fix style
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix eval
* hotfix citation
* [zero] support all-gather overlap (#5898)
* [zero] support all-gather overlap
* [zero] add overlap all-gather flag
* [misc] fix typo
* [zero] update api
* fix orpo cross entropy loss
* [Auto Parallel]: Speed up intra-op plan generation by 44% (#5446)
* Remove unnecessary calls to deepcopy
* Build DimSpec's difference dict only once
This change considerably speeds up the construction of DimSpec objects: difference_dict is identical for every DimSpec object, so a single shared copy is enough.
* Fix documentation of DimSpec's difference method
* [ShardFormer] fix qwen2 sp (#5903)
* [compatibility] support torch 2.2 (#5875)
* Support Pytorch 2.2.2
* keep build_on_pr file and update .compatibility
* fix object_to_tensor usage when torch>=2.3.0 (#5820)
* [misc] support torch2.3 (#5893)
* [misc] support torch2.3
* [devops] update compatibility ci
* [devops] update compatibility ci
* [devops] add debug
* [devops] add debug
* [devops] add debug
* [devops] add debug
* [devops] remove debug
* [devops] remove debug
* [release] update version (#5912)
* [plugin] support all-gather overlap for hybrid parallel (#5919)
* [plugin] fixed all-gather overlap support for hybrid parallel
* add kto
* fix style, add kto data sample
* [Examples] Add lazy init to OPT and GPT examples (#5924)
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [ColossalChat] Hotfix for ColossalChat (#5910)
* add ignore and tiny llama
* fix path issue
* run style
* fix issue
* update bash
* add ignore and tiny llama
* fix path issue
* run style
* fix issue
* update bash
* fix ddp issue
* add Qwen 1.5 32B
* refactor tokenization
* [FIX BUG] UnboundLocalError: cannot access local variable 'default_conversation' where it is not associated with a value (#5931)
* set a default value for 'default_conversation' so it is always bound
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* fix test data
* refactor evaluation
* remove real data path
* remove real data path
* Add n_fused as an input from native_module (#5894)
* [FIX BUG] convert env param to int in (#5934)
* [Hotfix] Fix ZeRO typo #5936
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
* [Feature] Add a switch to control whether the model checkpoint needs to be saved after each epoch ends (#5941)
* Add a switch to control whether the model checkpoint needs to be saved after each epoch ends
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* fix style
* fix style
* fix style
* [shardformer] hotfix attn mask (#5945)
* [shardformer] hotfix attn mask (#5947)
* [Feat] Distrifusion Acceleration Support for Diffusion Inference (#5895)
* Distrifusion Support source
* comp comm overlap optimization
* sd3 benchmark
* pixart distrifusion bug fix
* sd3 bug fix and benchmark
* generation bug fix
* naming fix
* add docstring, fix counter and shape error
* add reference
* readme and requirement
* [zero] hotfix update master params (#5951)
* [release] update version (#5952)
* [Chat] Fix lora (#5946)
* fix merging
* remove filepath
* fix style
* Update README.md (#5958)
* [hotfix] Remove unused plan section (#5957)
* remove readme
* fix readme
* update
* [test] add mixtral for sequence classification
* [test] add mixtral transformer test
* [moe] fix plugin
* [test] mixtral pp shard test
* [chore] handle non member group
* [zero] solve hang
* [test] pass mixtral shardformer test
* [moe] implement transition between non-moe tp and ep
* [zero] solve hang
* [misc] solve booster hang by rename the variable
* solve hang when parallel mode = pp + dp
* [moe] implement submesh initialization
* [moe] add mixtral dp grad scaling when not all experts are activated
* [chore] manually revert unintended commit
* [chore] trivial fix
* [chore] arg pass & remove drop token
* [test] add mixtral modelling test
* [moe] implement tp
* [moe] test deepseek
* [moe] clean legacy code
* [Feature] MoE Ulysses Support (#5918)
* moe sp support
* moe sp bug solve
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
---------
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [chore] minor fix
* [moe] init moe plugin comm setting with sp
* moe sp + ep bug fix
* [moe] finalize test (no pp)
* [moe] full test for deepseek and mixtral (pp + sp to fix)
* [chore] minor fix after rebase
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* [chore] solve moe ckpt test failure and some other arg pass failure
* [moe] remove ops
* [test] fix test: test_zero1_2
* [bug] fix: somehow logger hangs the program
* [moe] deepseek moe sp support
* [test] add check
* [deepseek] replace attn (a workaround for bug in transformers)
* [misc] skip redundant test
* [misc] remove debug/print code
* [moe] refactor mesh assignment
* Revert "[moe] implement submesh initialization"
This reverts commit 2f9bce6686.
* [chore] change moe_pg_mesh to private
* [misc] remove incompatible test config
* [misc] fix ci failure: change default value to false in moe plugin
* [misc] remove useless condition
* [chore] docstring
* [moe] remove force_overlap_comm flag and add warning instead
* [doc] add MoeHybridParallelPlugin docstring
* [moe] solve dp axis issue
* [chore] remove redundant test case, print string & reduce test tokens
* [feat] Dist Loader for Eval (#5950)
* support auto distributed data loader
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* support auto distributed data loader
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix tp error
* remove unused parameters
* remove unused
* update inference
* update docs
* update inference
---------
Co-authored-by: Michelle <qianranma8@gmail.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* [lora] lora support hybrid parallel plugin (#5956)
* lora support hybrid plugin
* fix
* fix
* fix
* fix
* fp8 operators for compressed communication:
cast_to_fp8, cast_from_fp8, all_reduce_fp8 (an illustrative sketch follows this commit message)
* fix scaling algorithm in FP8 casting
* support fp8 communication in pipeline parallelism
* add fp8_communication flag in the script
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix typo
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* shardformer fp8
* fix rebase
* remove all to all
* fix shardformer fp8 communication training degradation
* [fp8] support all-gather flat tensor (#5932)
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* fix
* Update low_level_optim.py
---------
Co-authored-by: YeAnbang <anbangy2@outlook.com>
Co-authored-by: Haze188 <haze188@qq.com>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Edenzzzz <wenxuan.tan@wisc.edu>
Co-authored-by: Edenzzzz <wtan45@wisc.edu>
Co-authored-by: Runyu Lu <77330637+LRY89757@users.noreply.github.com>
Co-authored-by: Guangyao Zhang <xjtu521@qq.com>
Co-authored-by: YeAnbang <44796419+YeAnbang@users.noreply.github.com>
Co-authored-by: Hongxin Liu <lhx0217@gmail.com>
Co-authored-by: Stephan Kö <stephankoe@users.noreply.github.com>
Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
Co-authored-by: Tong Li <tong.li352711588@gmail.com>
Co-authored-by: zhurunhua <1281592874@qq.com>
Co-authored-by: Insu Jang <insujang@umich.edu>
Co-authored-by: Gao, Ruiyuan <905370712@qq.com>
Co-authored-by: hxwang <wang1570@e.ntu.edu.sg>
Co-authored-by: Michelle <qianranma8@gmail.com>
Co-authored-by: Wang Binluo <32676639+wangbluo@users.noreply.github.com>
Co-authored-by: HangXu <hangxu0304@gmail.com>
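Note on the FP8 commits above: cast_to_fp8, cast_from_fp8 and all_reduce_fp8 are real operators in colossalai.quantization.fp8 (see the import hunk below), but their bodies are not shown on this page. The following is a minimal, illustrative sketch of the cast-and-rescale idea with per-tensor scaling; it is not the library implementation:

    import torch
    import torch.distributed as dist

    def cast_to_fp8(x: torch.Tensor, fp8_format: str = "e4m3"):
        # Per-tensor scale: map the largest magnitude onto the FP8 dynamic range.
        dtype = torch.float8_e4m3fn if fp8_format == "e4m3" else torch.float8_e5m2
        scale = x.abs().max().clamp(min=1e-12) / torch.finfo(dtype).max
        return (x / scale).to(dtype), scale.reshape(1)

    def cast_from_fp8(x_fp8: torch.Tensor, scale: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
        return x_fp8.to(dtype) * scale

    def all_reduce_fp8(x: torch.Tensor, group=None) -> None:
        # NCCL cannot sum FP8 directly, so gather the compressed payloads and
        # their scales, then decompress and accumulate locally.
        world_size = dist.get_world_size(group)
        x_fp8, scale = cast_to_fp8(x)
        payload = x_fp8.view(torch.uint8)  # ship raw bytes
        payloads = [torch.empty_like(payload) for _ in range(world_size)]
        scales = [torch.empty_like(scale) for _ in range(world_size)]
        dist.all_gather(payloads, payload, group=group)
        dist.all_gather(scales, scale, group=group)
        x.zero_()
        for p, s in zip(payloads, scales):
            x += cast_from_fp8(p.view(x_fp8.dtype), s, x.dtype)

The "fix scaling algorithm in FP8 casting" commit suggests the scale bookkeeping is the delicate part: the scale must travel with the payload so peers can decompress.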
colossalai/zero/low_level/bookkeeping/gradient_store.py

@@ -19,7 +19,6 @@ class GradientStore(BaseStore):
         """
         self._grads_of_params = dict()
         # stage 2
         self._partition_grads = partition_grad
         self._working_index = 0 if partition_grad else self._local_rank
         # for zero2, it's `param_id: [grad_local_rank]`
         self.grad_to_param_mapping = dict()
@@ -91,7 +90,7 @@ class GradientStore(BaseStore):
 
         return grad_list
 
-    def get_working_grad_by_param_id(self, param_id) -> Tensor:
+    def get_working_grad_by_param_id(self, param_id) -> Optional[Tensor]:
         """
         Return the working gradient for the specified parameter.
 
@@ -112,6 +111,7 @@ class GradientStore(BaseStore):
 
     def reset_all_gradients(self):
         self._grads_of_params = dict()
         self.grad_to_param_mapping = dict()
 
     def get_param_id_for_grad(self, grad: Tensor) -> Optional[int]:
         """Return the id of a parameter which the gradient slice belongs to
 
colossalai/zero/low_level/low_level_optim.py

@@ -21,9 +21,11 @@ from colossalai.amp.naive_amp.mixed_precision_mixin import (
 from colossalai.interface import OptimizerWrapper
 from colossalai.logging import get_dist_logger
 from colossalai.quantization.fp8 import all_gather_into_tensor_flat_fp8, all_reduce_fp8, reduce_scatter_fp8
+from colossalai.tensor.moe_tensor.api import is_moe_tensor
 
 from ._utils import calculate_global_norm_from_list, has_inf_or_nan, release_param_grad, sync_tensor
 from .bookkeeping import BucketStore, GradientStore, TensorBucket
+from .zero_hook import set_all_gather_handle, wait_all_gather_handle
 
 
 class LowLevelZeroFP16MixedPrecisionMixin(FP16MixedPrecisionMixin):
@@ -66,7 +68,7 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
     def __init__(
         self,
         optimizer: Optimizer,
-        pg_to_param_list: Dict[ProcessGroup, List[nn.Parameter]] = None,
+        pg_to_param_list: Optional[Dict[ProcessGroup, List[nn.Parameter]]] = None,
         initial_scale: int = 2**16,  # grad scaler config
         min_scale: int = 1,
         growth_factor: float = 2.0,
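A note on the annotation change above: a parameter whose default is None is not a Dict, so PEP 484 requires Optional. A minimal sketch with placeholder types:

    from typing import Dict, List, Optional

    # `x: Dict[...] = None` misleads type checkers; Optional makes None explicit.
    def build_groups(pg_to_param_list: Optional[Dict[str, List[int]]] = None) -> Dict[str, List[int]]:
        return {} if pg_to_param_list is None else pg_to_param_list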
@@ -84,6 +86,7 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
         dp_process_group: Optional[ProcessGroup] = None,
         forced_dtype: Optional[torch.dtype] = None,
         master_weights: bool = True,  # master weights
+        overlap_allgather: bool = False,
         fp8_communication: bool = False,
     ):
         super(LowLevelZeroOptimizer, self).__init__(optim=optimizer)
@@ -92,7 +95,7 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
         self._logger = get_dist_logger()
         self._verbose = verbose
 
-        if dp_process_group is not None and pg_to_param_list is not None:
+        if (dp_process_group is not None) and (pg_to_param_list is not None):
             raise ValueError("dp_process_group and pg_to_param_list should not be provided at the same time.")
 
         if pg_to_param_list is None:
@@ -123,6 +126,7 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
 
         # communication params
         self._overlap_communication = overlap_communication
+        self._overlap_allgather = overlap_allgather
         self._reduce_bucket_size = reduce_bucket_size
         self._communication_dtype = communication_dtype
         self._fp8_communication = fp8_communication
@@ -148,6 +152,8 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
 
         # record the padding size of each param
         self._padding_map = dict()
+        # padded working param is all-gather buffer and it shares the same memory with working param
+        self._working_param_to_padded_working_param = dict()
 
         # mapping working param and master param
         self.master_to_working_param = dict()
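How the padded buffer shares memory with the working param, as the new comment says; a minimal sketch with assumed sizes (numel 10, world size 4):

    import torch

    param = torch.nn.Parameter(torch.randn(10))
    world_size = 4
    padding = (-param.numel()) % world_size                  # 2 trailing zeros
    padded = torch.nn.functional.pad(param.data.view(-1), [0, padding])
    param.data = padded[: param.numel()].view(param.shape)   # alias the prefix
    assert param.data_ptr() == padded.data_ptr()             # same storage

Because they alias, an all-gather that writes into the padded buffer updates the working param in place, which is what makes it usable as the gather target in step() below.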
@@ -248,11 +254,12 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
             with torch.no_grad():
                 if padding_size > 0:
                     padding_param = torch.nn.functional.pad(param.data.view(-1), [0, padding_size])
-                    # reset working params' ptr when no master weights
-                    if self._master_weights == False:
-                        param.data = padding_param[: param.numel()].view(param.shape)
+                    # # reset working params' ptr when no master weights
+                    # if self._master_weights == False:
+                    param.data = padding_param[: param.numel()].view(param.shape)
                 else:
                     padding_param = param.data.view(-1)
+                self._working_param_to_padded_working_param[param] = padding_param
 
                 splited_params = padding_param.split(
                     padding_param.numel() // self.pid_to_bucket_store[id(param)].world_size
@@ -261,7 +268,7 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
 
                 # use fp32 when master_weights is True
                 if self._master_weights is True:
-                    splited_param_current_rank = splited_params.detach().float().to(device)
+                    splited_param_current_rank = splited_params.detach().clone().float().to(device)
                 else:
                     splited_param_current_rank = splited_params
 
@@ -338,21 +345,21 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
             self._update_unpartitoned_grad(bucket_store, grad_in_bucket.values(), flat_grads_per_rank, group_id)
         else:
             flat_grads_list = list(flat_grads.split(len(flat_grads) // bucket_store.world_size))
-            recieved_grad = torch.zeros_like(flat_grads_list[0])
+            received_grad = torch.zeros_like(flat_grads_list[0])
             if self._fp8_communication:
                 reduce_scatter_fp8(
-                    recieved_grad,
+                    received_grad,
                     flat_grads_list,
                     group=bucket_store.torch_pg,
                 )
             else:
-                dist.reduce_scatter(recieved_grad, flat_grads_list, group=bucket_store.torch_pg)
+                dist.reduce_scatter(received_grad, flat_grads_list, group=bucket_store.torch_pg)
 
-            if recieved_grad.dtype != grad_dtype:
-                recieved_grad = recieved_grad.to(grad_dtype)
+            if received_grad.dtype != grad_dtype:
+                received_grad = received_grad.to(grad_dtype)
 
             grad_in_bucket_current_rank = bucket_store.get_grad()[bucket_store.local_rank]
-            self._update_partitoned_grad(bucket_store, grad_in_bucket_current_rank, recieved_grad, group_id, 1)
+            self._update_partitoned_grad(bucket_store, grad_in_bucket_current_rank, received_grad, group_id, 1)
 
         bucket_store.reset()
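For readers new to reduce-scatter, a sketch of the bookkeeping in the hunk above (illustrative, not the library code): each rank ends up holding only the fully reduced 1/world_size slice of the flat gradient.

    import torch
    import torch.distributed as dist

    def reduce_scatter_flat(flat_grads: torch.Tensor, pg) -> torch.Tensor:
        world_size = dist.get_world_size(pg)
        # Split into world_size equal chunks; after the collective, this rank
        # holds the element-wise sum over all ranks of its own chunk.
        chunks = list(flat_grads.split(flat_grads.numel() // world_size))
        received = torch.zeros_like(chunks[0])
        dist.reduce_scatter(received, chunks, group=pg)
        return received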
@@ -562,25 +569,29 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
                 working_param = real_working_params[group_id][idx]
                 param_to_gather = master_param.to(device).to(self._dtype)
                 pg = self.param_to_pg[working_param]
-                if param_to_gather.numel() > self.pg_to_tensor_bucket[pg].max_size:
-                    buffer_tensor = torch.empty_like(
-                        torch.cat([param_to_gather for _ in range(dist.get_world_size(pg))])
-                    )
-                    if self._fp8_communication:
-                        all_gather_into_tensor_flat_fp8(buffer_tensor, param_to_gather, pg, fp8_format="e4m3")
-                    else:
-                        dist.all_gather_into_tensor(buffer_tensor, param_to_gather, pg)
-                    working_param.data.copy_(buffer_tensor[: working_param.numel()].reshape_as(working_param))
-                    continue
-                try:
-                    self.pg_to_tensor_bucket[pg].add_to_bucket(param_to_gather, write_back_tensor=working_param)
-                except RuntimeError:
-                    self.pg_to_tensor_bucket[pg].all_gather(pg, fp8_communication=self._fp8_communication)
-                    self.pg_to_tensor_bucket[pg].add_to_bucket(param_to_gather, write_back_tensor=working_param)
+                padded_working_param = self._working_param_to_padded_working_param[working_param]
+                if self._overlap_allgather:
+                    handle = dist.all_gather_into_tensor(padded_working_param, param_to_gather, pg, async_op=True)
+                    set_all_gather_handle(working_param, handle)
+                else:
+                    if param_to_gather.numel() > self.pg_to_tensor_bucket[pg].max_size:
+                        if self._fp8_communication:
+                            all_gather_into_tensor_flat_fp8(
+                                padded_working_param, param_to_gather, pg, fp8_format="e4m3"
+                            )
+                        else:
+                            dist.all_gather_into_tensor(padded_working_param, param_to_gather, pg)
+                        continue
+                    try:
+                        self.pg_to_tensor_bucket[pg].add_to_bucket(param_to_gather, write_back_tensor=working_param)
+                    except RuntimeError:
+                        self.pg_to_tensor_bucket[pg].all_gather(pg, fp8_communication=self._fp8_communication)
+                        self.pg_to_tensor_bucket[pg].add_to_bucket(param_to_gather, write_back_tensor=working_param)
         self.optim.param_groups[group_id]["params"] = self._master_param_groups_of_current_rank[group_id]
-        for pg, tensor_bucket in self.pg_to_tensor_bucket.items():
-            if not tensor_bucket.is_empty():
-                tensor_bucket.all_gather(pg, fp8_communication=self._fp8_communication)
+        if not self._overlap_allgather:
+            for pg, tensor_bucket in self.pg_to_tensor_bucket.items():
+                if not tensor_bucket.is_empty():
+                    tensor_bucket.all_gather(pg, fp8_communication=self._fp8_communication)
 
     def _compute_grad_norm(self, dp_pg: ProcessGroup, gradients: List[Tensor], norm_type: int = 2) -> float:
         r"""
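The overlap branch above parks the async handle on the parameter instead of waiting; a sketch of the pattern (set_all_gather_handle is the real helper from the new zero_hook.py shown at the bottom of this diff):

    import torch.distributed as dist

    from colossalai.zero.low_level.zero_hook import set_all_gather_handle

    def gather_param_async(working_param, padded_working_param, shard, pg):
        # Launch without blocking the optimizer loop; ZeroOpHook.pre_forward
        # calls wait_all_gather_handle(working_param) right before the param
        # is next read, so compute up to that point overlaps the collective.
        handle = dist.all_gather_into_tensor(padded_working_param, shard, pg, async_op=True)
        set_all_gather_handle(working_param, handle)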
@@ -657,6 +668,11 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
         for group_id in range(self.num_param_groups):
             param_group = self._working_param_groups[group_id]
             for param in param_group:
+                if is_moe_tensor(param) and param.requires_grad and param.grad is None:
+                    # TODO better way of doing this
+                    # assign zero grad to unrouted expert to avoid hang during grad reduction
+                    param.grad = torch.zeros_like(param)
+
                 if param.requires_grad and param.grad is not None:
                     self._add_to_bucket(param, group_id)
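Why the MoE guard above matters, as a small sketch (the real is_moe_tensor comes from colossalai.tensor.moe_tensor.api; here it is passed in as a predicate): gradient reduction is a collective, so every rank must contribute a tensor for every expert, including experts that routed no tokens locally.

    import torch

    def ensure_moe_grads(params, is_moe_tensor) -> None:
        for param in params:
            if is_moe_tensor(param) and param.requires_grad and param.grad is None:
                # Unrouted expert: contribute zeros so no peer blocks in reduce.
                param.grad = torch.zeros_like(param)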
@@ -815,8 +831,8 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
         """
         for p in model.parameters():
             p_id = id(p)
+            pg = self.param_to_pg[p]
             if p_id in self.working_to_master_param:
-                pg = self.param_to_pg[p]
                 master_param = self.working_to_master_param[p_id]
                 padding_size = self.get_param_padding_size(p)
                 working_param = p.data.view(-1)
@@ -877,13 +893,12 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
 
     def get_param_grad(self, working_param: nn.Parameter) -> Tensor:
         grad_store = self.pid_to_grad_store[id(working_param)]
-        partial_grad = grad_store.get_working_grad_by_param_id(id(working_param))
-        if partial_grad is None:
+        grad = grad_store.get_working_grad_by_param_id(id(working_param))
+        if grad is None:
             return None
-        tensor_list = [torch.empty_like(partial_grad) for _ in range(grad_store.world_size)]
-        dist.all_gather(tensor_list, partial_grad, group=grad_store.torch_pg)
-        grad_flat = torch.cat(tensor_list, dim=0)
-        return grad_flat[: working_param.numel()].reshape_as(working_param)
+        grad_flat = torch.empty((grad_store.world_size, *grad.shape), dtype=grad.dtype, device=grad.device)
+        dist.all_gather_into_tensor(grad_flat, grad, group=grad_store.torch_pg)
+        return grad_flat.view(-1)[: working_param.numel()].view_as(working_param)
 
     def get_working_grads_by_group_id(self, group_id: int) -> List[Tensor]:
         working_grads = []
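The get_param_grad rewrite above replaces list-gather plus torch.cat with one preallocated buffer; a standalone sketch of the equivalent shapes (hypothetical helper name):

    import torch
    import torch.distributed as dist

    def gather_full_grad(shard: torch.Tensor, working_numel: int, shape, pg) -> torch.Tensor:
        world_size = dist.get_world_size(pg)
        # One (world_size, *shard.shape) buffer replaces world_size temporaries
        # plus the extra copy that torch.cat performed in the old version.
        flat = torch.empty((world_size, *shard.shape), dtype=shard.dtype, device=shard.device)
        dist.all_gather_into_tensor(flat, shard, group=pg)
        return flat.view(-1)[:working_numel].view(shape)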
@@ -908,3 +923,7 @@ class LowLevelZeroOptimizer(OptimizerWrapper):
     def get_partitioned_gradients_by_param_id(self, group_id: int, param_id: int) -> List:
         grad_store = self.pid_to_grad_store[param_id]
         return grad_store.get_partitioned_gradients_by_param_id(group_id, param_id)
+
+    def _force_wait_all_gather(self):
+        for param in self._working_param_to_padded_working_param.keys():
+            wait_all_gather_handle(param)
colossalai/zero/low_level/zero_hook.py (new file, 33 lines)

@@ -0,0 +1,33 @@
+from typing import List
+
+from torch._tensor import Tensor
+
+from colossalai.tensor.param_op_hook import ColoParamOpHook
+
+_ALL_GATHER_HANDLE = "_all_gather_handle"
+
+
+def wait_all_gather_handle(p):
+    if hasattr(p, _ALL_GATHER_HANDLE):
+        handle = getattr(p, _ALL_GATHER_HANDLE)
+        handle.wait()
+        delattr(p, _ALL_GATHER_HANDLE)
+
+
+def set_all_gather_handle(p, handle):
+    setattr(p, _ALL_GATHER_HANDLE, handle)
+
+
+class ZeroOpHook(ColoParamOpHook):
+    def pre_forward(self, params: List[Tensor]) -> None:
+        for p in params:
+            wait_all_gather_handle(p)
+
+    def post_forward(self, params: List[Tensor]) -> None:
+        pass
+
+    def pre_backward(self, params: List[Tensor]) -> None:
+        pass
+
+    def post_backward(self, params: List[Tensor]) -> None:
+        pass
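A short usage sketch for the new hook module (single-parameter illustration; assumes an initialized process group):

    import torch
    import torch.distributed as dist

    from colossalai.zero.low_level.zero_hook import set_all_gather_handle, wait_all_gather_handle

    p = torch.nn.Parameter(torch.zeros(8))
    buf = torch.empty(8 * dist.get_world_size())
    handle = dist.all_gather_into_tensor(buf, p.data, async_op=True)
    set_all_gather_handle(p, handle)   # stash the pending work on `p`
    # ... unrelated compute overlaps with the gather here ...
    wait_all_gather_handle(p)          # first touch: wait once, drop the attribute
    assert not hasattr(p, "_all_gather_handle")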