[Inference] User Experience: update the logic of default tokenizer and generation config. (#5337)

* add

* fix

* fix

* pause

* fix

* fix pytest

* align

* fix

* license

* fix

* fix

* fix readme

* fix some bugs

* remove tokenizer config
Author: Jianghai
Date: 2024-02-07 17:55:48 +08:00
Committed by: GitHub
Parent: 6fb4bcbb24
Commit: 1f8c7e7046
7 changed files with 62 additions and 23 deletions

@@ -72,7 +72,6 @@ def llama_model_forward(
"""
input_ids = batch.get_1D_inputs()
block_tables = batch.get_block_table_tensor()
sequence_lengths = batch.get_sequence_lengths()
batch_size = len(sequence_lengths)
kv_seq_len = sequence_lengths.max().item()
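The hunk above derives the batch size and the KV-cache length from the per-sequence lengths returned by the batch object. The bookkeeping can be sketched in plain Python (the `batch` accessor is stood in for by a hypothetical list, since the real object is a tensor-backed inference batch not shown here):

```python
# Minimal sketch of the length bookkeeping in llama_model_forward.
# sequence_lengths stands in for batch.get_sequence_lengths(); in the
# real code it is a tensor and the max is taken via .max().item().
sequence_lengths = [5, 9, 3]          # hypothetical per-sequence lengths

batch_size = len(sequence_lengths)    # number of sequences in the batch
kv_seq_len = max(sequence_lengths)    # KV cache must cover the longest sequence

print(batch_size, kv_seq_len)         # 3 9
```

The key point is that `kv_seq_len` is a single scalar covering the longest sequence in the batch, so shorter sequences are handled by the block tables rather than by per-sequence cache sizes.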