Mirror of https://github.com/hpcaitech/ColossalAI.git (synced 2025-09-16 22:52:25 +00:00)
[pipeline/pytree] add pytree to process args and kwargs | provide data_process_func to process args and kwargs after forward (#1642)

* [pipeline/tuning] improve dispatch performance in both time and space cost
* [pipeline/converge] add interface for testing convergence
* [NFC] polish colossalai/utils/multi_tensor_apply/multi_tensor_apply.py code style
* Update PipelineBase.py
* [pipeline/chimera] reconstruct PipelineBase and Worker to support more flexible custom schedules | finish Chimera
* [pipeline/chimera] test chimera | fix bug of initializing
* [pipeline/pytree] add pytree to process args and kwargs | provide data_process_func to process args and kwargs after forward
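The commit message above describes flattening pipeline `args` and `kwargs` with a pytree so each leaf can be processed (e.g. moved between stages) and the original nesting restored afterwards. The following is a minimal pure-Python sketch of that flatten/unflatten idea, written for illustration only; it is not the ColossalAI implementation (which builds on PyTorch's pytree utilities) and handles only lists, tuples, and dicts.

```python
# Sketch of the pytree pattern: flatten nested args/kwargs into a flat
# list of leaves plus a structure spec, transform the leaves, then
# rebuild the original nesting. Hypothetical helper names.

def tree_flatten(obj):
    """Flatten nested lists/tuples/dicts into (leaves, spec)."""
    if isinstance(obj, (list, tuple)):
        leaves, specs = [], []
        for item in obj:
            sub_leaves, sub_spec = tree_flatten(item)
            leaves.extend(sub_leaves)
            specs.append(sub_spec)
        return leaves, (type(obj), specs)
    if isinstance(obj, dict):
        leaves, specs = [], []
        for key in sorted(obj):  # deterministic key order
            sub_leaves, sub_spec = tree_flatten(obj[key])
            leaves.extend(sub_leaves)
            specs.append((key, sub_spec))
        return leaves, (dict, specs)
    return [obj], None  # a leaf: spec None marks "take next leaf"

def tree_unflatten(leaves, spec):
    """Rebuild the nested structure from a flat list of leaves."""
    it = iter(leaves)

    def build(node_spec):
        if node_spec is None:
            return next(it)
        container, child_specs = node_spec
        if container is dict:
            return {key: build(s) for key, s in child_specs}
        return container(build(s) for s in child_specs)

    return build(spec)
```

With this shape, a data-processing function only has to map over the flat leaf list; structure handling is factored out entirely.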
@@ -22,10 +22,9 @@ def run_master(args):
     epoch = args.epoch
     device = args.device
-    stage_num = 4
+    stage_num = args.world_size
     chunk = 1
-    num_microbatches = 4
-    actual_stage_num = 4
+    num_microbatches = args.num_microbatches
     use_checkpoint = False

     sample_num = 1024
@@ -78,6 +77,4 @@ def run_master(args):

 if __name__ == "__main__":
     args = parse_args()
-    args.world_size = 4
-    args.num_microbatches = 4
     rpc_run(args, run_master)
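The diff replaces hardcoded values (`stage_num = 4`, `num_microbatches = 4`) with fields read from the parsed arguments, and drops the manual overrides in `__main__`. A hypothetical `parse_args` consistent with that change might look like the sketch below; the flag names and defaults are assumptions for illustration, not the actual ColossalAI test harness.

```python
# Hypothetical parse_args matching the diff: run_master now reads
# world_size and num_microbatches from the parsed arguments instead of
# hardcoding them. Flag names/defaults are assumed, not from the repo.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="pipeline demo args (sketch)")
    parser.add_argument("--epoch", type=int, default=1)
    parser.add_argument("--device", type=str, default="cuda")
    parser.add_argument("--world_size", type=int, default=4)
    parser.add_argument("--num_microbatches", type=int, default=4)
    return parser.parse_args(argv)
```

Centralizing these values in `parse_args` means the demo can be reconfigured from the command line without editing `run_master`.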