---
license: cc-by-nc-4.0
task_categories:
  - visual-question-answering
language:
  - en
pretty_name: SEED-Bench-2-Plus
size_categories:
  - 1K<n<10K
---

# SEED-Bench-2-Plus Card

## Benchmark details

**Benchmark type:** SEED-Bench-2-Plus is a large-scale benchmark for evaluating Multimodal Large Language Models (MLLMs). It consists of 2.3K multiple-choice questions with precise human annotations, spanning three broad categories: Charts, Maps, and Webs. Each category covers a wide spectrum of text-rich scenarios in the real world.
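
As a rough illustration of how the multiple-choice format can be consumed, the sketch below loads the questions with the Hugging Face `datasets` library and prints one record. The repository id (`AILab-CVC/SEED-Bench-2-plus`), the split name, and the field names (`question`, `choice_a` through `choice_d`, `answer`) are assumptions based on the schema of earlier SEED-Bench releases, not guarantees from this card; check the files in this repository for the actual layout.

```python
# Minimal sketch, assuming a SEED-Bench-style schema
# (question / choice_a..choice_d / answer) and that the repo loads directly
# with load_dataset. Adjust the repo id and split to the actual files.
from datasets import load_dataset

ds = load_dataset("AILab-CVC/SEED-Bench-2-plus", split="test")  # split name is an assumption

sample = ds[0]
print(sample["question"])                      # question about a chart, map, or web page
for letter in ("a", "b", "c", "d"):
    print(letter.upper() + ":", sample["choice_" + letter])
print("ground truth:", sample["answer"])       # e.g. "A"
```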

**Benchmark date:** SEED-Bench-2-Plus was collected in April 2024.

**Paper or resources for more information:** https://github.com/AILab-CVC/SEED-Bench

**License:** Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). Use of the benchmark should also abide by OpenAI's terms of use: https://openai.com/policies/terms-of-use

The images in SEED-Bench-2-Plus are collected from the internet under CC-BY licenses. Please contact us if you believe any data infringes upon your rights, and we will remove it.

**Where to send questions or comments about the benchmark:** https://github.com/AILab-CVC/SEED-Bench/issues

## Intended use

**Primary intended uses:** The primary use of SEED-Bench-2-Plus is to evaluate Multimodal Large Language Models on text-rich visual understanding.
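
A common protocol for multiple-choice benchmarks such as this one is to have the evaluated MLLM pick one option letter per question and then report accuracy, overall and per category. The snippet below is a minimal scoring sketch under that assumption; `model_answers` is a hypothetical mapping from question ids to predicted letters, and the record keys (`question_id`, `answer`, `category`) follow the same assumed schema as the loading example above.

```python
# Hypothetical scoring sketch: compare predicted option letters against
# ground-truth answers and report accuracy per category (e.g. Charts, Maps, Webs).
from collections import defaultdict

def score(records, model_answers):
    """records: iterable of dicts with assumed keys question_id / answer / category.
    model_answers: dict mapping question_id -> predicted letter, e.g. {"123": "A"}."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        cat = rec.get("category", "all")
        total[cat] += 1
        if model_answers.get(rec["question_id"], "").upper() == rec["answer"].upper():
            correct[cat] += 1
    return {cat: correct[cat] / total[cat] for cat in total}
```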

**Primary intended users:** The primary intended users of the benchmark are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.