Mirror of https://github.com/hpcaitech/ColossalAI.git, synced 2025-09-15 22:19:38 +00:00

[nfc] fix some typo with colossalai/ docs/ etc. (#4920)
```diff
@@ -94,7 +94,7 @@ For various models, experiments were conducted using multiple batch sizes under
 ### Single GPU Performance:

-Currently the stats below are calculated based on A100 (single GPU), and we calculate token latency based on average values of context-forward and decoding forward process, which means we combine both of processes to calculate token generation times. We are actively developing new features and methods to furthur optimize the performance of LLM models. Please stay tuned.
+Currently the stats below are calculated based on A100 (single GPU), and we calculate token latency based on average values of context-forward and decoding forward process, which means we combine both of processes to calculate token generation times. We are actively developing new features and methods to further optimize the performance of LLM models. Please stay tuned.

 #### Llama
```