czczup committed
Commit 096d3da • Parent: c1987c5

Update README.md

Files changed (1): README.md (+15 −31)
README.md CHANGED
@@ -17,7 +17,7 @@ pipeline_tag: visual-question-answering
 
 > _Two interns holding hands, symbolizing the integration of InternViT and InternLM._
 
-\[[InternVL 1.5 Technical Report](https://arxiv.org/abs/2404.16821)\] \[[Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\] \[[Chat Demo](https://internvl.opengvlab.com/)\] \[[中文解读](https://zhuanlan.zhihu.com/p/675877376)]
+\[[InternVL 1.5 Technical Report](https://arxiv.org/abs/2404.16821)\] \[[CVPR Paper](https://arxiv.org/abs/2312.14238)\] \[[GitHub](https://github.com/OpenGVLab/InternVL)\] \[[Chat Demo](https://internvl.opengvlab.com/)\] \[[中文解读](https://zhuanlan.zhihu.com/p/675877376)]
 
 We introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding.
 We introduce three simple designs:
@@ -34,13 +34,10 @@ We introduce three simple designs:
   - Params: 25.5B
 
 - **Training Strategy:**
-  - Pretraining Stage
-    - Learnable Component: ViT + MLP
-    - Data: Please see our technical report.
-  - SFT Stage
-    - Learnable Component: ViT + MLP + LLM
-    - Data: Please see our technical report.
-
+  - Learnable component in the pretraining stage: ViT + MLP
+  - Learnable component in the finetuning stage: ViT + MLP + LLM
+  - For more details on training hyperparameters, take a look at our code: [pretrain](https://github.com/OpenGVLab/InternVL/blob/main/internvl_chat/shell/internlm2_20b_dynamic/internvl_chat_v1_5_internlm2_20b_dynamic_res_pretrain.sh) | [finetune](https://github.com/OpenGVLab/InternVL/blob/main/internvl_chat/shell/internlm2_20b_dynamic/internvl_chat_v1_5_internlm2_20b_dynamic_res_finetune.sh)
+
 ## Released Models
 
 | Model | Vision Foundation Model | Release Date | Note |
@@ -50,6 +47,10 @@ We introduce three simple designs:
 | InternVL-Chat-V1.2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-2)) | InternViT-6B-448px-V1-2 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-2)) | 2024.02.11 | scaling up LLM to 34B |
 | InternVL-Chat-V1.1 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternVL-Chat-V1-1)) | InternViT-6B-448px-V1-0 (🤗 [HF link](https://huggingface.co/OpenGVLab/InternViT-6B-448px-V1-0)) | 2024.01.24 | support Chinese and stronger OCR |
 
+## Architecture
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/YLvX3V-L0kwsyRn3Lhciw.png)
+
 ## Performance
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/4b85G7txoJ_LpT19SZJ4A.png)
@@ -58,29 +59,12 @@ We introduce three simple designs:
 
 ## Examples
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/R34jISP4K1U17m9yNP38O.png)
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/ChkU9XtlsjH0l2EqlO_is.png)
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/1TFxIcf96ANRPLoy4-rbh.png)
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/Wpjo1Sdwf7XcEDevqwcr-.png)
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/kO4-J38sN8TFtmQ5mIBMS.png)
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/qPnTe3Q9UBy8wbclOsmWk.png)
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/l_BILRi13CbZNzbZYn6o6.png)
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/2782y7RnvGBogYEIG__7S.png)
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/RyO35PTH14OFiwyxtAZM2.png)
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/xiLZXWL-JiCTVPnV_VxS2.png)
-
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/gqX46Tt5jvrcVqb0vcf06.png)
-
-
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/YVr-93mvVMR6UFpGezns7.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/ivhj4QqcO2NHUa28DTDkK.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/18GeOW10QVcSt5g--TgDY.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/tGM_TwdV297H1fCxQ0PZU.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/FwlSRBpKgURAVkXNOLoSp.png)
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/64119264f0f81eb569e0d569/to3nOaAnyv-fGLEoNPLzz.png)
 
 ## Model Usage
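For context on the training-strategy change above: the two stages differ only in which modules receive gradients (ViT + MLP during pretraining; ViT + MLP + LLM during finetuning). Below is a minimal PyTorch sketch of that stage-wise freezing. The attribute names `vision_model`, `mlp1`, and `language_model` are illustrative assumptions, not verified against the repo; the authoritative configuration lives in the pretrain/finetune shell scripts linked in the diff.

```python
import torch.nn as nn

def set_stage_trainable(model: nn.Module, stage: str) -> None:
    """Stage-wise freezing: pretraining trains the ViT + MLP projector only;
    finetuning (SFT) additionally unfreezes the LLM.

    Module names are hypothetical placeholders for this sketch.
    """
    # Freeze everything first, then selectively unfreeze per stage.
    for p in model.parameters():
        p.requires_grad = False

    trainable = [model.vision_model, model.mlp1]   # ViT + MLP projector
    if stage == "finetune":
        trainable.append(model.language_model)     # + LLM in the SFT stage
    for module in trainable:
        for p in module.parameters():
            p.requires_grad = True
```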
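The final hunk only touches the Examples section; the Model Usage heading appears as trailing context, and its body is not part of this commit. As a hedged pointer, here is a minimal loading sketch with 🤗 Transformers, assuming the usual OpenGVLab pattern of shipping the model class as remote code; the commented-out `model.chat(...)` call and the image preprocessing are placeholders documented on the model card itself, not verified here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/InternVL-Chat-V1-5"

# The model class ships as remote code on the Hub, hence trust_remote_code=True.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,  # 25.5B parameters; bf16 keeps memory manageable
    trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)

question = "Describe this image in detail."
# pixel_values = ...  # image tiling/preprocessing per the model card
# response = model.chat(tokenizer, pixel_values, question, dict(max_new_tokens=512))
```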