Antonio Cheong committed
Commit 2f60e94
Parent(s): d17239a

structure

Files changed:
- .gitmodules +3 -0
- README.md +95 -0
- amazon-cot +1 -0
.gitmodules
ADDED
@@ -0,0 +1,3 @@
[submodule "amazon-cot"]
    path = amazon-cot
    url = https://github.com/amazon-science/mm-cot
README.md
ADDED
@@ -0,0 +1,95 @@
> # Cloned from https://github.com/amazon-science/mm-cot

# Multimodal Chain-of-Thought Reasoning in Language Models

<h5 align="center"><i>"Imagine learning a textbook without figures or tables."</i></h5>

Multimodal-CoT incorporates vision features in a decoupled training framework. The framework consists of two training stages: (i) rationale generation and (ii) answer inference. Both stages share the same model architecture but differ in their inputs and outputs.

![](vision_features/mm-cot.png)
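The decoupled design can be pictured as a simple pipeline: stage (i) turns the question, context, options, and vision features into a rationale, and stage (ii) re-reads the same inputs plus that generated rationale to produce the answer. The sketch below is only an illustration of that flow; the prompt layout and the `rationale_model` / `answer_model` callables are hypothetical stand-ins for the two fine-tuned UnifiedQA/T5 checkpoints, not the repository's actual API.

```python
# Illustrative only: the two-stage Multimodal-CoT flow described above.
# `rationale_model` and `answer_model` are hypothetical stand-ins for the
# two fine-tuned seq2seq checkpoints; this is not the repository's API.
from typing import Callable, Sequence


def multimodal_cot(
    question: str,
    context: str,
    options: Sequence[str],
    vision_features: object,  # e.g. DETR image features
    rationale_model: Callable[[str, object], str],
    answer_model: Callable[[str, object], str],
) -> str:
    # Stage (i): rationale generation -- QCM (question, context, options) in,
    # lecture/explanation (the rationale) out.
    qcm = (
        f"Question: {question}\n"
        f"Context: {context}\n"
        f"Options: {', '.join(options)}"
    )
    rationale = rationale_model(qcm, vision_features)

    # Stage (ii): answer inference -- QCM plus the generated rationale (QCMG)
    # in, answer out. Same architecture, different input and output.
    qcmg = f"{qcm}\nRationale: {rationale}"
    return answer_model(qcmg, vision_features)
```

Training simply fine-tunes the same architecture once for each of these two mappings, which is what the two commands under Instructions below do.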
## Requirements

Install all required Python dependencies:

```
pip install -r requirements.txt
```

## Datasets

Download the dataset from the following repository:

```
https://github.com/lupantech/ScienceQA/tree/main/data
```

Download the extracted vision features from [vision_features](https://drive.google.com/file/d/13B0hc_F_45-UlqPLKSgRz-ALtFQ8kIJr/view?usp=share_link) and unzip the files under `vision_features`.
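If you prefer to unpack the archive from Python rather than a shell, a minimal sketch using only the standard library is below. The archive name `vision_features.zip` is an assumption about the Google Drive download; rename it to match the file you actually receive.

```python
# Minimal sketch: unpack the downloaded archive into ./vision_features.
# "vision_features.zip" is a hypothetical name for the Google Drive download.
import zipfile
from pathlib import Path

archive = Path("vision_features.zip")  # adjust to the actual file name
target = Path("vision_features")
target.mkdir(exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    zf.extractall(target)
```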
## Instructions

### Training

```
# rationale generation
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg rationale --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 512 \
    --final_eval --prompt_format QCM-LE

# answer inference
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg answer --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 64 \
    --final_eval --prompt_format QCMG-A \
    --eval_le experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_lr5e-05_bs16_op512_ep20/predictions_ans_eval.json \
    --test_le experiments/rationale_allenai-unifiedqa-t5-base_detr_QCM-LE_lr5e-05_bs16_op512_ep20/predictions_ans_test.json
```
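Note how the two commands are chained: the rationale stage writes prediction files under `experiments/`, and the answer stage consumes them via `--eval_le` / `--test_le` so that its QCMG-A prompts can include the generated rationales. If you want to drive both stages from a single script, a sketch is below; it reuses the flags above verbatim, while the `run` helper itself is illustrative and not part of the repository.

```python
# Illustrative driver that chains the two training stages, reusing the exact
# flags from the commands above. The `run` helper is not part of the repo.
import os
import subprocess

RATIONALE_DIR = ("experiments/rationale_allenai-unifiedqa-t5-base_detr_"
                 "QCM-LE_lr5e-05_bs16_op512_ep20")


def run(extra_args):
    env = {**os.environ, "CUDA_VISIBLE_DEVICES": "0,1"}
    subprocess.run(["python", "main.py", *extra_args], env=env, check=True)


# Stage (i): rationale generation (QCM-LE).
run(["--model", "allenai/unifiedqa-t5-base",
     "--user_msg", "rationale", "--img_type", "detr",
     "--bs", "8", "--eval_bs", "4", "--eval_acc", "10", "--output_len", "512",
     "--final_eval", "--prompt_format", "QCM-LE"])

# Stage (ii): answer inference (QCMG-A), fed by the stage (i) prediction files.
run(["--model", "allenai/unifiedqa-t5-base",
     "--user_msg", "answer", "--img_type", "detr",
     "--bs", "8", "--eval_bs", "4", "--eval_acc", "10", "--output_len", "64",
     "--final_eval", "--prompt_format", "QCMG-A",
     "--eval_le", f"{RATIONALE_DIR}/predictions_ans_eval.json",
     "--test_le", f"{RATIONALE_DIR}/predictions_ans_test.json"])
```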
### Inference

Our trained models are available at [models](https://drive.google.com/file/d/1FtTYOJPHnWnFfCxNC6M3gar4RAX5E21b/view?usp=share_link). To use our trained models, please put them under the `models` folder.

```
# rationale generation
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg rationale --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 512 \
    --final_eval --prompt_format QCM-LE \
    --evaluate_dir models/MM-CoT-UnifiedQA-base-Rationale

# answer inference
CUDA_VISIBLE_DEVICES=0,1 python main.py \
    --model allenai/unifiedqa-t5-base \
    --user_msg answer --img_type detr \
    --bs 8 --eval_bs 4 --eval_acc 10 --output_len 64 \
    --final_eval --prompt_format QCMG-A \
    --eval_le models/rationale/predictions_ans_eval.json \
    --test_le models/rationale/predictions_ans_test.json \
    --evaluate_dir models/MM-CoT-UnifiedQA-base-Answer
```
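A quick sanity check before running the commands above can save a failed run. The paths below are exactly the ones the `--evaluate_dir`, `--eval_le`, and `--test_le` flags expect; the check itself is just an illustrative snippet.

```python
# Sanity check (illustrative): verify the downloaded checkpoints and the
# rationale prediction files sit where the inference flags above expect them.
from pathlib import Path

expected = [
    "models/MM-CoT-UnifiedQA-base-Rationale",
    "models/MM-CoT-UnifiedQA-base-Answer",
    "models/rationale/predictions_ans_eval.json",
    "models/rationale/predictions_ans_test.json",
]
for path in expected:
    print(f"{path}: {'ok' if Path(path).exists() else 'MISSING'}")
```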
## Citing MM-CoT

```
@article{zhang2023multicot,
  title={Multimodal Chain-of-Thought Reasoning in Language Models},
  author={Zhang, Zhuosheng and Zhang, Aston and Li, Mu and Zhao, Hai and Karypis, George and Smola, Alex},
  journal={arXiv preprint arXiv:2302.00923},
  year={2023}
}
```

## License

This project is licensed under the Apache-2.0 License.

## Acknowledgement

Part of our code is adapted from [ScienceQA](https://github.com/lupantech/ScienceQA) and [Transformers](https://github.com/huggingface/transformers).

We thank Pan Lu for providing the parameter sizes of the ScienceQA baselines.
amazon-cot
ADDED
@@ -0,0 +1 @@
Subproject commit 9f6106d12c2f4d07c49f5524b332b89704df1277