Update README.md
README.md CHANGED
@@ -22,7 +22,7 @@ pipeline_tag: visual-question-answering
We introduce InternVL 1.5, an open-source multimodal large language model (MLLM) to bridge the capability gap between open-source and proprietary commercial models in multimodal understanding.
We introduce three simple designs:
1. Strong Vision Encoder: we explored a continuous learning strategy for the large-scale vision foundation model, InternViT-6B, boosting its visual understanding capabilities so that it can be transferred and reused across different LLMs.
- 2. Dynamic High-Resolution: we divide images into tiles ranging from 1 to 32 of 448
+ 2. Dynamic High-Resolution: we divide images into 1 to 32 tiles of 448 × 448 pixels according to the aspect ratio and resolution of the input images, supporting inputs up to 4K resolution (see the sketch below).
3. High-Quality Bilingual Dataset: we carefully collected a high-quality bilingual dataset covering common scenes and document images, annotated with English and Chinese question-answer pairs, which significantly enhances performance in OCR- and Chinese-related tasks.
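
The Dynamic High-Resolution item added above states the tiling rule only in outline. Below is a minimal sketch of one way such a tiler could work, assuming the grid is chosen by matching the image's aspect ratio under the stated 32-tile cap, with ties broken toward more tiles; the names `choose_grid` and `tile_image` are hypothetical, not InternVL's actual preprocessing API.

```python
from PIL import Image

TILE = 448       # tile side length, per the README
MAX_TILES = 32   # upper bound on the number of tiles, per the README

def choose_grid(width, height, max_tiles=MAX_TILES):
    """Pick a (cols, rows) grid whose aspect ratio best matches the image.

    Assumed selection rule: minimize the gap between the grid's aspect
    ratio and the image's; among near-ties, prefer more tiles (finer detail).
    """
    target = width / height
    candidates = [
        (c, r)
        for c in range(1, max_tiles + 1)
        for r in range(1, max_tiles + 1)
        if c * r <= max_tiles
    ]
    return min(candidates, key=lambda g: (abs(g[0] / g[1] - target), -g[0] * g[1]))

def tile_image(img):
    """Resize to an exact multiple of the tile size, then crop into tiles."""
    cols, rows = choose_grid(*img.size)
    resized = img.resize((cols * TILE, rows * TILE))
    return [
        resized.crop((c * TILE, r * TILE, (c + 1) * TILE, (r + 1) * TILE))
        for r in range(rows)
        for c in range(cols)
    ]

# Example: a 4K frame (3840 x 2160, 16:9).
tiles = tile_image(Image.new("RGB", (3840, 2160)))
print(len(tiles))  # 28 under this rule: a 7 x 4 grid of 448 x 448 tiles
```

Under this assumed rule, a 3840 × 2160 input lands on a 7 × 4 grid (28 tiles), since 7/4 is the closest ratio to 16:9 reachable within 32 tiles. InternVL's released preprocessing may differ, for example in its tie-breaking or in appending a global thumbnail view.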