mirror of
https://github.com/hpcaitech/ColossalAI.git
synced 2025-09-06 19:40:28 +00:00
[Device]Support npu (#6159)
* support npu
* support pretrain
* support lora
* support chatglm
* Update train.py
* assorted fixes, including [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
This commit is contained in:
@@ -62,7 +62,7 @@ class GeminiZeROHook(ColoParamOpHook):
         #
         # Other than that, self._gemini_manager.wait_chunks will have synced with default stream
         # by calling dist.Work.wait() and this line makes no diff.
-        self._gemini_manager.chunk_manager._prefetch_stream.wait_stream(torch.cuda.current_stream())
+        self._gemini_manager.chunk_manager._prefetch_stream.wait_stream(get_accelerator().current_stream())

         with get_accelerator().stream(self._gemini_manager.chunk_manager._prefetch_stream):
             for chunk in chunks_fetch_async: