[Inference] Fix bug in ChatGLM2 Tensor Parallelism (#5014)

* fix bug

* fix

* fix multiquery

* fix multiquery

---------

Co-authored-by: CjhHa1 <cjh18671720497@outlook.com>
This commit is contained in:
Jianghai
2023-11-07 15:01:50 +08:00
committed by GitHub
parent c36e782d80
commit ef4c14a5e2
8 changed files with 21 additions and 19 deletions

@@ -400,7 +400,6 @@ class SelfAttention(torch.nn.Module):
)
self.core_attention = CoreAttention(config, self.layer_number)
# Output.
self.dense = nn.Linear(
self.projection_size,
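The diff above touches the query/key/value projection sizing in ChatGLM2's `SelfAttention` when multi-query attention runs under tensor parallelism. As background, here is a minimal sketch of how per-rank projection sizes might be computed in that setting; the function and parameter names are illustrative assumptions, not ColossalAI's or ChatGLM2's actual API:

```python
# Hedged sketch (assumed names, not the real implementation):
# under tensor parallelism, the query heads are split evenly across
# ranks, while the few shared KV heads of multi-query attention are
# replicated once there are fewer KV heads than ranks.

def partition_attention(num_attention_heads: int,
                        num_kv_heads: int,
                        head_dim: int,
                        tp_size: int) -> tuple[int, int]:
    assert num_attention_heads % tp_size == 0, "query heads must divide evenly"
    heads_per_rank = num_attention_heads // tp_size
    # Replicate KV heads when tp_size exceeds their count; split otherwise.
    kv_heads_per_rank = max(num_kv_heads // tp_size, 1)
    q_proj_size = heads_per_rank * head_dim       # per-rank query projection
    kv_proj_size = kv_heads_per_rank * head_dim   # per-rank key/value projection
    return q_proj_size, kv_proj_size
```

A bug of the kind this commit fixes typically comes from sizing the KV projection with the full query-head count (or forgetting the replication case), so the per-rank `nn.Linear` shapes no longer match the sharded weights.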