ColossalAI/colossalai/legacy/inference/quant/smoothquant/models/__init__.py
Xu Kai fd6482ad8c
[inference] Refactor inference architecture ()
* [inference] support only TP ()

* support only tp

* enable tp

* add support for bloom ()

* [refactor] refactor gptq and smoothquant llama ()

* refactor gptq and smoothquant llama

* fix import error

* fix linear import torch-int

* fix smoothquant llama import error

* fix import accelerate error

* fix bug

* fix import smooth cuda

* fix smoothcuda

* [Inference Refactor] Merge chatglm2 with pp and tp ()

merge chatglm with pp and tp

* [Refactor] remove useless inference code ()

* remove useless code

* fix quant model

* fix test import bug

* mv original inference legacy

* fix chatglm2

* [Refactor] refactor policy search and quant type controlling in inference ()

* [Refactor] refactor policy search and quant type controlling in inference

* [inference] update readme ()

* update readme

* update readme

* fix architecture

* fix table

* fix table

* [inference] update example ()

* update example

* fix run.sh

* fix rebase bug

* fix some errors

* update readme

* add some features

* update interface

* update readme

* update benchmark

* add requirements-infer

---------

Co-authored-by: Bin Jia <45593998+FoolPlayer@users.noreply.github.com>
Co-authored-by: Zhongkai Zhao <kanezz620@gmail.com>
2023-11-19 21:05:05 +08:00


try:
    import torch_int  # int8 GEMM kernels required by the SmoothQuant layers

    HAS_TORCH_INT = True
except ImportError:
    HAS_TORCH_INT = False
    raise ImportError(
        "torch_int is not installed. Please install it from https://github.com/Guangxuan-Xiao/torch-int"
    )

# Only expose the SmoothQuant Llama layers when torch_int is available.
if HAS_TORCH_INT:
    from .llama import LLamaSmoothquantAttention, LlamaSmoothquantMLP
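
For context, a minimal sketch of how downstream code might treat this module as an optional dependency. Because the module raises ImportError when torch_int is missing, a consumer that wants a soft failure must wrap the import itself; the consuming snippet and the SMOOTHQUANT_AVAILABLE flag below are hypothetical, not part of the repository.

# Hypothetical consumer: importing this module raises ImportError when
# torch_int is absent, so catch it if SmoothQuant support is optional.
try:
    from colossalai.legacy.inference.quant.smoothquant.models import (
        LLamaSmoothquantAttention,
        LlamaSmoothquantMLP,
    )

    SMOOTHQUANT_AVAILABLE = True
except ImportError:
    # Fall back to non-quantized inference paths.
    SMOOTHQUANT_AVAILABLE = False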