Mirror of https://github.com/hpcaitech/ColossalAI.git, synced 2025-06-13 17:36:52 +00:00
76 lines · 2.6 KiB · Python
import asyncio
import copy
import random
from threading import Lock
from typing import Any, List

import ray
import torch
from coati.experience_maker.base import Experience
from coati.replay_buffer import ReplayBuffer
from coati.replay_buffer.utils import BufferItem, make_experience_batch, split_experience_batch

# Ray's Queue is backed by a dedicated actor, so it can be shared across Ray workers,
# unlike torch.multiprocessing.Queue.
# from torch.multiprocessing import Queue
from ray.util.queue import Queue

class DetachedReplayBuffer:
    '''
    Detached replay buffer that shares Experience objects across workers on the same node;
    a trainer node is therefore expected to hold only one instance.
    It is the ExperienceMakerHolder's duty to call the append(exp) method remotely.
    (A usage sketch follows the class definition at the end of this file.)

    Args:
        sample_batch_size: Batch size used when sampling. Incoming experiences are not
            enqueued until they form a full batch.
        limit: Maximum number of experience batches the queue may hold. A value <= 0
            means unlimited. Defaults to 0.
    '''

    def __init__(self, sample_batch_size: int, limit: int = 0) -> None:
        self.sample_batch_size = sample_batch_size
        self.limit = limit
        # The queue lives in its own Ray actor; reserve one CPU for it.
        self.items = Queue(self.limit, actor_options={"num_cpus": 1})
        # Items accumulate here until a full batch can be enqueued.
        self.batch_collector: List[BufferItem] = []

    @torch.no_grad()
    def append(self, experience: Experience) -> None:
        '''
        Expected to be called remotely.
        '''
        items = split_experience_batch(experience)
        self.extend(items)

    @torch.no_grad()
    def extend(self, items: List[BufferItem]) -> None:
        '''
        Expected to be called remotely.
        '''
        self.batch_collector.extend(items)
        # Re-batch collected items into batches of sample_batch_size and enqueue them;
        # any leftover items stay in the collector until enough new items arrive.
        while len(self.batch_collector) >= self.sample_batch_size:
            items = self.batch_collector[:self.sample_batch_size]
            experience = make_experience_batch(items)
            self.items.put(experience, block=True)
            self.batch_collector = self.batch_collector[self.sample_batch_size:]

    def clear(self) -> None:
        # Tear down the backing queue actor and start from an empty state.
        self.items.shutdown()
        self.items = Queue(self.limit, actor_options={"num_cpus": 1})
        self.batch_collector = []

    @torch.no_grad()
    def sample(self, worker_rank=0, to_device="cpu") -> Experience:
        # NOTE: worker_rank is currently unused.
        # Blocks until a full batch is available in the queue.
        ret = self._sample_and_erase()
        ret.to_device(to_device)
        return ret

    @torch.no_grad()
    def _sample_and_erase(self) -> Experience:
        return self.items.get(block=True)

    def get_length(self) -> int:
        return self.items.qsize()
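
The sketch below illustrates how this buffer is meant to be used: a trainer-side Ray actor owns the single DetachedReplayBuffer instance, an experience-maker actor appends to it remotely, and the trainer blocks on sample() until a full batch has been enqueued. The actor names (TrainerNode, MakerNode), the train_step placeholder, and the module path in the import are illustrative assumptions, not part of this file.

# --- Usage sketch (illustrative only, not part of the original module) ---
import ray
from coati.experience_maker.base import Experience

# Assumed import path for the class defined above.
from coati.ray.detached_replay_buffer import DetachedReplayBuffer


@ray.remote
class TrainerNode:
    '''Hypothetical trainer actor that owns the single buffer instance on its node.'''

    def __init__(self, sample_batch_size: int) -> None:
        self.buffer = DetachedReplayBuffer(sample_batch_size, limit=16)

    def buffer_append(self, experience: Experience) -> None:
        # Called remotely by the experience maker; items are re-batched internally.
        self.buffer.append(experience)

    def buffer_length(self) -> int:
        return self.buffer.get_length()

    def train_step(self) -> None:
        # Blocks until a full batch of sample_batch_size items is available.
        batch = self.buffer.sample(to_device="cpu")
        # ... run a PPO update on `batch` here ...


@ray.remote
class MakerNode:
    '''Hypothetical experience-maker actor holding a handle to the trainer.'''

    def __init__(self, trainer: "ray.actor.ActorHandle") -> None:
        self.trainer = trainer

    def produce(self, experience: Experience) -> None:
        # Fire-and-forget remote append; no need to wait for the result.
        self.trainer.buffer_append.remote(experience)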