The MERIT Dataset: Modelling and Efficiently Rendering Interpretable Transcripts
Abstract
This paper introduces the MERIT Dataset, a multimodal (text + image + layout), fully labeled dataset in the context of school reports. Comprising over 400 labels and 33k samples, the MERIT Dataset is a valuable resource for training models on demanding Visually-rich Document Understanding (VrDU) tasks. By its nature (student grade reports), the MERIT Dataset can include biases in a controlled way, making it a valuable tool for benchmarking biases induced in Large Language Models (LLMs). The paper outlines the dataset's generation pipeline and highlights its main features in the textual, visual, layout, and bias domains. To demonstrate the dataset's utility, we present a benchmark with token-classification models, showing that the dataset poses a significant challenge even for SOTA models and that they would benefit considerably from including MERIT samples in their pretraining phase.
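To make the token-classification setting concrete, below is a minimal sketch of a layout-aware setup with the Hugging Face transformers library. The choice of LayoutLMv3 and the label count are illustrative assumptions, not the paper's exact benchmark configuration.

```python
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

# Placeholder: the paper reports over 400 labels; use the label list
# shipped with the dataset rather than this assumed value.
NUM_MERIT_LABELS = 400

# LayoutLMv3 is one representative layout-aware model; the paper's
# benchmark may use a different selection of models and hyperparameters.
processor = AutoProcessor.from_pretrained(
    "microsoft/layoutlmv3-base", apply_ocr=False
)
model = LayoutLMv3ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv3-base", num_labels=NUM_MERIT_LABELS
)

# Words, bounding boxes, and word-level labels would come from a MERIT
# sample; the processor aligns them with the page image so the model can
# classify each token into one of the dataset's label classes.
```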
Community
You can explore more here:
- Synthetic Generation Pipeline: https://github.com/nachoDRT/MERIT-Dataset
- MERIT Dataset on Hugging Face: https://huggingface.co/datasets/de-Rodrigo/merit (see the loading sketch below)
- Wandb Token Classification Benchmark: https://wandb.ai/iderodrigo/MERIT-Dataset?nw=nwuseriderodrigo
Thanks!
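As a quick start with the Hugging Face dataset linked above, here is a minimal loading sketch; the configuration and split names are assumptions and should be checked against the dataset card.

```python
from datasets import load_dataset

# Load the MERIT Dataset from the Hugging Face Hub.
# NOTE: configuration and split names are assumptions here; check the
# dataset card at https://huggingface.co/datasets/de-Rodrigo/merit.
dataset = load_dataset("de-Rodrigo/merit", split="train")

# Inspect one sample; VrDU token-classification datasets typically expose
# the page image, words, bounding boxes, and per-token labels.
sample = dataset[0]
print(sample.keys())
```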
This is an automated message from the Librarian Bot. The following papers, similar to this one, were recommended by the Semantic Scholar API:
- SynthDoc: Bilingual Documents Synthesis for Visual Document Understanding (2024)
- Openstory++: A Large-scale Dataset and Benchmark for Instance-aware Open-domain Visual Storytelling (2024)
- Deep Learning based Visually Rich Document Content Understanding: A Survey (2024)
- Img-Diff: Contrastive Data Synthesis for Multimodal Large Language Models (2024)
- Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning (2024)