[Dataset viewer row preview omitted; each preview row contains an image and a class label.]
SEED-Bench Card
Benchmark details
Benchmark type: SEED-Bench is a large-scale benchmark for evaluating Multimodal Large Language Models (MLLMs). It consists of 19K multiple-choice questions with accurate human annotations, covering 12 evaluation dimensions that span comprehension of both the image and video modalities.
Benchmark date: SEED-Bench was collected in July 2023.
Paper or resources for more information: https://github.com/AILab-CVC/SEED-Bench
License: Attribution-NonCommercial 4.0 International. Use of the benchmark should also abide by OpenAI's terms of use: https://openai.com/policies/terms-of-use.
For the images of SEED-Bench, we use data from the Conceptual Captions Dataset (https://ai.google.com/research/ConceptualCaptions/) following its license (https://github.com/google-research-datasets/conceptual-captions/blob/master/LICENSE). Tencent does not hold the copyright for these images; the copyright belongs to the original owners of the Conceptual Captions Dataset images.
For the videos of SEED-Bench, we use data from Something-Something v2 (https://developer.qualcomm.com/software/ai-datasets/something-something), Epic-kitchen 100 (https://epic-kitchens.github.io/2023) and Breakfast (https://serre-lab.clps.brown.edu/resource/breakfast-actions-dataset/). We only provide the video names; please download the videos from their official websites (see the sketch after this section for matching names to local files).
Where to send questions or comments about the benchmark: https://github.com/AILab-CVC/SEED-Bench/issues
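Because only video names are distributed, video-based questions have to be joined with files fetched from the official sources. Below is a minimal sketch of how that join might look; it is not an official script, and the file layout and the data_id field name are assumptions rather than details taken from the released files.

```python
# Minimal sketch (not an official script): match SEED-Bench question records that
# reference videos by name against locally downloaded video files.
# Assumptions: the question file is a JSON object with a "questions" list whose
# entries carry a "data_id" field holding the video file name; the actual field
# names in the released files may differ.
import json
from pathlib import Path

QUESTION_FILE = Path("SEED-Bench.json")   # assumed local copy of the question file
VIDEO_DIR = Path("videos/")               # videos downloaded from the official websites

def index_local_videos(video_dir: Path) -> dict:
    """Map bare file names (without extension) to local paths."""
    return {p.stem: p for p in video_dir.rglob("*") if p.is_file()}

def resolve_videos(question_file: Path, video_dir: Path):
    """Yield (question, local_video_path) pairs; the path is None if the video is missing."""
    questions = json.loads(question_file.read_text())["questions"]
    local = index_local_videos(video_dir)
    for q in questions:
        name = Path(str(q["data_id"])).stem   # "data_id" is an assumed field name
        yield q, local.get(name)

if __name__ == "__main__":
    missing = [q for q, path in resolve_videos(QUESTION_FILE, VIDEO_DIR) if path is None]
    print(f"{len(missing)} video questions still need their source files downloaded")
```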
Intended use
Primary intended uses: The primary use of SEED-Bench is to evaluate Multimodal Large Language Models on spatial and temporal understanding (a toy scoring sketch follows this section).
Primary intended users: The primary intended users of the Benchmark are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.
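For concreteness, here is a minimal sketch of how multiple-choice predictions could be scored against ground-truth answers, overall and per evaluation dimension. It is not the official SEED-Bench evaluation code, and the record fields (question_id, answer, question_type_id) are assumptions about the data format rather than confirmed names.

```python
# Minimal sketch (not the official SEED-Bench evaluation): compute overall and
# per-dimension accuracy for multiple-choice predictions.
# Assumed record format: each ground-truth entry has "question_id", "answer"
# (a choice letter such as "A") and "question_type_id" (the evaluation dimension);
# predictions map "question_id" to a predicted choice letter.
from collections import defaultdict

def score(ground_truth: list[dict], predictions: dict[str, str]) -> dict:
    """Return overall accuracy plus accuracy per evaluation dimension."""
    per_dim_correct = defaultdict(int)
    per_dim_total = defaultdict(int)
    for item in ground_truth:
        dim = item["question_type_id"]
        per_dim_total[dim] += 1
        if predictions.get(item["question_id"]) == item["answer"]:
            per_dim_correct[dim] += 1
    total = sum(per_dim_total.values())
    correct = sum(per_dim_correct.values())
    return {
        "overall": correct / total if total else 0.0,
        "per_dimension": {d: per_dim_correct[d] / per_dim_total[d] for d in per_dim_total},
    }

if __name__ == "__main__":
    gt = [
        {"question_id": "q1", "answer": "A", "question_type_id": 1},
        {"question_id": "q2", "answer": "C", "question_type_id": 2},
    ]
    preds = {"q1": "A", "q2": "B"}
    print(score(gt, preds))  # overall 0.5; dimension 1 -> 1.0, dimension 2 -> 0.0
```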