---
tags:
- awq
- llm
- quantization
---

# yujiepan/awq-model-zoo

Here is some pre-computed AWQ information (scales & clips) used in [llm-awq](https://github.com/mit-han-lab/llm-awq).

## Scripts

- Install the forked `llm-awq` at [https://github.com/yujiepan-work/llm-awq/tree/a41a08e79d8eb3d6335485b3625410af22a74426](https://github.com/yujiepan-work/llm-awq/tree/a41a08e79d8eb3d6335485b3625410af22a74426). Note: works with transformers==4.35.2.
- Generate `awq-info.pt`:

```bash
python do_awq.py --model_id mistralai/Mistral-7B-v0.1 --w_bit 8 --q_group_size 128 --dump_awq ./awq-info.pt
```

- Load a quantized model: you can use the official repo to get a fake/real quantized model. Alternatively, you can load a fake-quantized model (see also the sketch at the end of this README):

```python
from do_awq import FakeAWQModel

FakeAWQModel.from_pretrained(
    'mistralai/Mistral-7B-v0.1',
    awq_meta_path='./awq-info.pt',
    output_folder='./tmp/',
)
```

Note: the code is not in good shape yet.

## Related links

-
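
## Appendix: inspecting and applying `awq-info.pt`

A minimal sketch of how the dumped file could be inspected and applied with the upstream `llm-awq` code. It assumes `awq-info.pt` follows the upstream format (a dict with `scale` and `clip` entries, as produced by `run_awq`/`--dump_awq`) and that `apply_awq` is importable from `awq.quantize.pre_quant` in the installed fork; treat the exact keys and import path as assumptions rather than a guaranteed API.

```python
# Sketch: inspect the pre-computed AWQ metadata and fold it into a HF model.
# Assumption: the file is a dict with "scale" and "clip" lists, as dumped by
# llm-awq's run_awq; the forked llm-awq must be installed.
import torch
from transformers import AutoModelForCausalLM

from awq.quantize.pre_quant import apply_awq  # assumed import path from llm-awq

awq_results = torch.load('./awq-info.pt', map_location='cpu')
print(awq_results.keys())  # expected: dict_keys(['scale', 'clip'])

model = AutoModelForCausalLM.from_pretrained(
    'mistralai/Mistral-7B-v0.1', torch_dtype=torch.float16
)
apply_awq(model, awq_results)  # applies the pre-computed scales and clips in place
```

After the scales/clips are applied, the official repo's pseudo- or real-quantization utilities can be used to actually quantize the weights.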