Update README.md
README.md
---
tags:
- awq
- llm
- quantization
---

# yujiepan/awq-model-zoo

Here is some pre-computed AWQ information (scales & clips) used in [llm-awq](https://github.com/mit-han-lab/llm-awq).

## Scripts

- Install the forked `llm-awq` from [yujiepan-work/llm-awq@a41a08e7](https://github.com/yujiepan-work/llm-awq/tree/a41a08e79d8eb3d6335485b3625410af22a74426). Note: it works with `transformers==4.35.2`.

- Generate `awq-info.pt` (a sketch for inspecting the dumped file follows this list):

```bash
python do_awq.py --model_id mistralai/Mistral-7B-v0.1 --w_bit 8 --q_group_size 128 --dump_awq ./awq-info.pt
```

- Load a quantized model: you can use the official repo to get a fake/real quantized model. Alternatively, load a fake-quantized model (a follow-up loading sketch also appears after this list):

```python
from do_awq import FakeAWQModel
FakeAWQModel.from_pretrained('mistralai/Mistral-7B-v0.1', awq_meta_path='./awq-info.pt', output_folder='./tmp/')
```

Note: the code is not in good shape.
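
To sanity-check a dumped `awq-info.pt`, here is a minimal inspection sketch. It assumes the file is an ordinary `torch.save` pickle holding the pre-computed scales & clips; the exact key layout is an assumption, not something documented here.

```python
# Minimal sketch: peek inside a dumped awq-info.pt.
# Assumption: the file is a plain torch.save pickle holding the
# pre-computed scales & clips; the key names are not verified.
import torch

awq_results = torch.load('./awq-info.pt', map_location='cpu')
print(type(awq_results))
if isinstance(awq_results, dict):
    for key, value in awq_results.items():
        print(key, type(value))  # e.g. per-layer scale / clip entries
```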
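
If `output_folder` ends up containing a standard transformers checkpoint (an assumption; the fork's output format is not documented here), the fake-quantized model could then be loaded the usual way:

```python
# Hypothetical follow-up: load the folder written by
# FakeAWQModel.from_pretrained, assuming it is a regular
# transformers checkpoint (not verified against the fork).
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('./tmp/')
print(model.config.model_type)
```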

## Related links

- <https://huggingface.co/datasets/mit-han-lab/awq-model-zoo>