---
license: apache-2.0
---
> # Cloned from https://github.com/amazon-science/mm-cot

# Multimodal Chain-of-Thought Reasoning in Language Models

<h5 align="center"><i>"Imagine learning a textbook without figures or tables."</i></h5>

Multimodal-CoT incorporates vision features in a decoupled training framework. The framework consists of two training stages: (i) rationale generation and (ii) answer inference. Both stages share the same model architecture but differ in their inputs and outputs.

![](vision_features/mm-cot.png)

## Requirements

Install all required Python dependencies:

```
pip install -r requirements.txt
```
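
If you want to keep the dependencies isolated, a standard virtual environment works as well (a minimal sketch; the environment name `.venv` is just a convention):

```
# optional: install the dependencies into an isolated virtual environment
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```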

## Datasets

Download the dataset from the following repository:

```
https://github.com/lupantech/ScienceQA/tree/main/data
```
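
For example, you can clone the ScienceQA repository and copy its `data` directory into this project (a sketch under the assumption that the training scripts look for the dataset under `data/`; check the paths in `main.py` if your layout differs):

```
# fetch ScienceQA and copy the dataset into this repo (target path is an assumption)
git clone https://github.com/lupantech/ScienceQA.git
cp -r ScienceQA/data ./data
```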

Download the extracted vision features from [vision_features](https://drive.google.com/file/d/13B0hc_F_45-UlqPLKSgRz-ALtFQ8kIJr/view?usp=share_link) and unzip the files under `vision_features`.
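
For example (the archive name `vision_features.zip` is an assumption; use whatever filename Google Drive gives you):

```
# unpack the downloaded archive; adjust -d if the archive
# already contains a top-level vision_features/ directory
mkdir -p vision_features
unzip vision_features.zip -d vision_features
```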

## Instructions

### Training

```
# rationale generation
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg rationale --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 512 \
    --final_eval --prompt_format QCM-LE

# answer inference
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg answer --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 64 \
    --final_eval --prompt_format QCMG-A \
    --eval_le experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_lr5e-05_bs16_op512_ep20/predictions_ans_eval.json \
    --test_le experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_lr5e-05_bs16_op512_ep20/predictions_ans_test.json
```
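
The answer-inference stage consumes the rationale stage's predictions via `--eval_le` and `--test_le`, so run the two commands in order; the experiment directory name is generated from the hyperparameters. Before starting answer inference you can confirm that the rationale predictions were written:

```
# sanity check: the rationale predictions must exist before answer inference
ls experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_lr5e-05_bs16_op512_ep20/predictions_ans_eval.json
ls experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_lr5e-05_bs16_op512_ep20/predictions_ans_test.json
```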

### Inference

Our trained models are available at [models](https://drive.google.com/file/d/1FtTYOJPHnWnFfCxNC6M3gar4RAX5E21b/view?usp=share_link). To use them, put them under the `models` folder.
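
If you prefer the command line, the Google Drive file can be fetched with `gdown` (a sketch; the output name `models.zip` and the archive layout are assumptions):

```
# download the released checkpoints from Google Drive (file id taken from the link above)
pip install gdown
gdown 1FtTYOJPHnWnFfCxNC6M3gar4RAX5E21b -O models.zip
# adjust -d if the archive already contains a top-level models/ directory
unzip models.zip -d models
```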

```
# rationale generation
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg rationale --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 512 \
    --final_eval --prompt_format QCM-LE \
    --evaluate_dir models/MM-CoT-UnifiedQA-base-Rationale

# answer inference
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg answer --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 64 \
    --final_eval --prompt_format QCMG-A \
    --eval_le models/rationale/predictions_ans_eval.json \
    --test_le models/rationale/predictions_ans_test.json \
    --evaluate_dir models/MM-CoT-UnifiedQA-base-Answer
```

## Citing MM-CoT

```
@article{zhang2023multicot,
  title={Multimodal Chain-of-Thought Reasoning in Language Models},
  author={Zhang, Zhuosheng and Zhang, Aston and Li, Mu and Zhao, Hai and Karypis, George and Smola, Alex},
  journal={arXiv preprint arXiv:2302.00923},
  year={2023}
}
```

## License

This project is licensed under the Apache-2.0 License.

## Acknowledgement

Part of our code is adapted from [ScienceQA](https://github.com/lupantech/ScienceQA) and [Transformers](https://github.com/huggingface/transformers).

We thank Pan Lu for providing the parameter sizes of the ScienceQA baselines.