Very slow: GLM-4v-9B takes only 5 seconds to process, while GLM-4v-9B-gptq-4bit takes 90 seconds

#2 · opened by dafen

table:
test1.png

Please provide your GPU and CUDA version. Additionally, have you tried loading the model using a different method? There might be differences between loading with AutoGPTQ and with Hugging Face Transformers.
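
For reference, here is a minimal sketch of the Transformers loading path (an assumption on my side: it needs optimum and auto-gptq installed and relies on the quantization_config shipped with the checkpoint; it is not taken from the script below):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch: load the same GPTQ checkpoint through Transformers' GPTQ integration
# instead of AutoGPTQForCausalLM.from_quantized (assumes optimum + auto-gptq).
model_id = 'alexwww94/glm-4v-9b-gptq-4bit'
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map='cuda:0',
    torch_dtype=torch.float16,
    trust_remote_code=True,
).eval()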


Python: 3.10
GPU: A100 80GB
CUDA: 12.1
AutoGPTQ: 0.7.1
Transformers: 4.44.2

import os
import json
import random
import time
from PIL import Image
import torch
import datasets
from transformers import AutoTokenizer, AutoModelForCausalLM
from auto_gptq import AutoGPTQForCausalLM
from auto_gptq.modeling._base import BaseGPTQForCausalLM
from auto_gptq.modeling._const import SUPPORTED_MODELS
from auto_gptq.modeling.auto import GPTQ_CAUSAL_LM_MODEL_MAP

# Custom AutoGPTQ wrapper describing GLM-4v's quantized layer structure
# (language-model blocks, vision transformer blocks, and the vision projection).
class ChatGLMGPTQForCausalLM(BaseGPTQForCausalLM):
    layer_type = ["GLMBlock", "TransformerLayer", "GLU"]

    layers_block_names = ["transformer.encoder.layers", 
                            "transformer.vision.transformer.layers", 
                            "transformer.vision.linear_proj"]
        
    outside_layer_modules = ["transformer.output_layer"]
    
    inside_layer_modules = [
        ["self_attention.query_key_value", "self_attention.dense", "mlp.dense_h_to_4h", "mlp.dense_4h_to_h"],
        ["attention.query_key_value", "attention.dense", "mlp.fc1", "mlp.fc2"],
        ["linear_proj", "dense_h_to_4h", "gate_proj", "dense_4h_to_h"],
    ]

# Register the custom wrapper so AutoGPTQ recognizes the 'chatglm' model type.
GPTQ_CAUSAL_LM_MODEL_MAP['chatglm'] = ChatGLMGPTQForCausalLM
SUPPORTED_MODELS.append('chatglm')  # append mutates the list in place; don't reassign its return value

device = 'cuda:0'
quantized_model_dir = 'alexwww94/glm-4v-9b-gptq-4bit'
trust_remote_code = True

tokenizer = AutoTokenizer.from_pretrained(
    quantized_model_dir,
    trust_remote_code=trust_remote_code,
)

# Load the 4-bit GPTQ checkpoint with fused attention/MLP kernels enabled.
model = AutoGPTQForCausalLM.from_quantized(
    quantized_model_dir,
    device=device,
    trust_remote_code=trust_remote_code,
    torch_dtype=torch.float16,
    use_cache=True,
    inject_fused_mlp=True,
    inject_fused_attention=True,
)


image = Image.open('table.png').convert('RGB')
msgs = [
    {"role": "user", "image": image, "content": 'Convert image content to markdown:'}
]

inputs = tokenizer.apply_chat_template(msgs,
                                add_generation_prompt=True, tokenize=True, return_tensors="pt",
                                return_dict=True, dtype=torch.bfloat16)  # chat mode
inputs = inputs.to(device)
inputs['images'] = inputs['images'].half()  # cast image tensor to float16 to match the weights

gen_kwargs = {"max_length": 2500}
with torch.inference_mode():
    outputs = model.generate(**inputs, **gen_kwargs)
    outputs = outputs[:, inputs['input_ids'].shape[1]:]  # drop the prompt tokens
    generated_text = tokenizer.decode(outputs[0]).split('<|endoftext|>')[0]
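
To put numbers on the 5-second vs. 90-second gap, one simple option (not part of the script above, just a suggestion) is to time the generate call directly:

import time

# Rough timing of the generate call; new tokens = total length minus prompt length.
start = time.perf_counter()
with torch.inference_mode():
    outputs = model.generate(**inputs, **gen_kwargs)
elapsed = time.perf_counter() - start
new_tokens = outputs.shape[1] - inputs['input_ids'].shape[1]
print(f"generation took {elapsed:.1f} s for {new_tokens} new tokens")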

The issue might not lie with the quantized weight files; it is more likely the inference library. If you see warnings related to CUDA and auto-gptq during inference, auto-gptq's CUDA extension probably wasn't compiled correctly. It's recommended to install it from source; for more information, refer to this issue: https://github.com/AutoGPTQ/AutoGPTQ/issues/694.
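
As a quick diagnostic (a sketch; the extension module names below are an assumption based on how auto-gptq usually packages its compiled kernels), you can check whether the CUDA kernels are importable at all:

# Sketch: if these compiled modules are missing, auto-gptq typically warns
# "CUDA extension not installed." and falls back to a much slower path.
try:
    import autogptq_cuda_256  # noqa: F401
    import autogptq_cuda_64   # noqa: F401
    print("auto-gptq CUDA kernels found")
except ImportError:
    print("auto-gptq CUDA kernels missing; consider rebuilding from source")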

Could you share your CUDA version and AutoGPTQ version? I saw in the issues that it may be a problem with the AutoGPTQ version. Thank you very much.

My CUDA version is the same as yours.

Compiling AutoGPTQ from source and reinstalling it might help:

git clone https://github.com/AutoGPTQ/AutoGPTQ.git &&\
    cd AutoGPTQ &&\
    pip install -vvv --no-build-isolation -e .
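
If the build succeeds, the CUDA-related auto-gptq warnings should no longer appear when the quantized model is loaded, and the 4-bit kernels should be used instead of the slow fallback path.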

Thanks a lot! It works, and it runs faster now.

dafen changed discussion status to closed
