[pipeline] Llama pipeline (#4205)

* bloom policy

* llama pipeline forward and tests

* fix the output and attention_mask

* fix name

* bind argument to policy

* Revert "bloom policy"

This reverts commit 8dee68a0a2.

This policy should be reverted and copied to feature/bloom

* revert the bloom changes

* remove unneeded inputs

* gpt

Author: Jianghai
Date: 2023-07-11 11:37:26 +08:00
Committed by: Hongxin Liu
Parent: 1094e0f0d3
Commit: 1622031058

6 changed files with 516 additions and 4 deletions


@@ -193,7 +193,7 @@ class BertModelPolicy(BertPolicy):
         module = self.model
         stage_manager = self.pipeline_stage_manager
         held_layers = []
-        layers_per_stage = self.distribute_layers(len(self.model.encoder.layer), stage_manager.num_stages)
+        layers_per_stage = self.distribute_layers(len(module.encoder.layer), stage_manager.num_stages)
         if stage_manager.is_first_stage():
             held_layers.append(module.embeddings)
         start_idx, end_idx = self.get_stage_index(layers_per_stage, stage_manager.stage)
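
For context, the held-layers logic splits the encoder's transformer blocks across pipeline stages and keeps only the slice owned by the current stage (plus the embeddings on the first stage). The snippet below is a minimal standalone sketch of that idea, not ColossalAI's actual distribute_layers / get_stage_index implementation; the helper signatures and the even-split-with-remainder strategy shown here are assumptions for illustration.

    # Sketch only (assumed behavior, not the library code): split num_layers
    # blocks as evenly as possible across num_stages pipeline stages and
    # return the [start, end) slice held by one stage.

    def distribute_layers(num_layers: int, num_stages: int) -> list:
        # Each stage gets num_layers // num_stages blocks; any remainder is
        # assumed to be spread over the last stages, one extra block each.
        base = num_layers // num_stages
        remainder = num_layers % num_stages
        layers_per_stage = [base] * num_stages
        for i in range(num_stages - remainder, num_stages):
            layers_per_stage[i] += 1
        return layers_per_stage

    def get_stage_index(layers_per_stage: list, stage: int) -> tuple:
        # The start index is the total block count of all earlier stages.
        start_idx = sum(layers_per_stage[:stage])
        end_idx = start_idx + layers_per_stage[stage]
        return start_idx, end_idx

    # Example: 12 encoder layers over 4 stages -> [3, 3, 3, 3];
    # stage 2 holds layers [6, 9).
    layers_per_stage = distribute_layers(12, 4)
    print(get_stage_index(layers_per_stage, 2))  # (6, 9)

In the diff above, the resulting start/end indices are then used to slice module.encoder.layer, so each pipeline stage materializes only its own blocks; the change simply reuses the local module alias instead of self.model when counting the encoder layers.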