from contextlib import contextmanager

import torch


def _noop(*args, **kwargs):
    pass


@contextmanager
def low_resource_init():
    """This context manager disables weight initialization and sets the default float dtype to half."""
    # Save the original initializers and the current default dtype so they
    # can be restored when the context exits.
    old_kaiming_uniform_ = torch.nn.init.kaiming_uniform_
    old_uniform_ = torch.nn.init.uniform_
    old_normal_ = torch.nn.init.normal_
    dtype = torch.get_default_dtype()
    try:
        # Replace the common in-place initializers with no-ops so that module
        # constructors skip the (expensive) random weight initialization.
        torch.nn.init.kaiming_uniform_ = _noop
        torch.nn.init.uniform_ = _noop
        torch.nn.init.normal_ = _noop
        # Allocate newly created parameters in half precision to halve memory use.
        torch.set_default_dtype(torch.half)
        yield
    finally:
        # Restore the original initializers and default dtype.
        torch.nn.init.kaiming_uniform_ = old_kaiming_uniform_
        torch.nn.init.uniform_ = old_uniform_
        torch.nn.init.normal_ = old_normal_
        torch.set_default_dtype(dtype)
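For context, a minimal usage sketch (the import path and layer sizes below are illustrative assumptions, not taken from the repository): constructing a module inside the context allocates its parameters in fp16 and skips random initialization, so the weights hold arbitrary memory contents until real values are loaded.

import torch

# Hypothetical import path; the module's actual location in the repo is not shown here.
from low_resource_init import low_resource_init

with low_resource_init():
    # nn.Linear normally calls kaiming_uniform_/uniform_ in reset_parameters;
    # inside the context those are no-ops, so construction is near-instant.
    layer = torch.nn.Linear(4096, 4096)

print(layer.weight.dtype)  # torch.float16
# The weights are uninitialized memory at this point; load real values before use, e.g.:
# layer.load_state_dict(torch.load("linear.pt"))

This is the usual trick for instantiating very large models cheaply before loading pretrained weights: skipping initialization avoids random-number generation proportional to the parameter count, and fp16 allocation roughly halves memory usage during construction.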