# DocMath-Eval
🏠 Homepage | 🤗 Dataset | 📄 arXiv | GitHub
The data for the paper DocMath-Eval: Evaluating Math Reasoning Capabilities of LLMs in Understanding Long and Specialized Documents. DocMath-Eval is a comprehensive benchmark focused on numerical reasoning within specialized domains. It requires the model to comprehend long and specialized documents and perform numerical reasoning to answer the given question.
## DocMath-Eval Dataset
All data examples are divided into four subsets:
- simpshort, reannotated from TAT-QA and FinQA, requires simple numerical reasoning over a short document with one table;
- simplong, reannotated from MultiHiertt, requires simple numerical reasoning over a long document with multiple tables;
- compshort, reannotated from TAT-HQA, requires complex numerical reasoning over a short document with one table;
- complong, annotated from scratch by our team, requires complex numerical reasoning over a long document with multiple tables.
For each subset, we provide the testmini and test splits.
You can download this dataset with the following commands:

```python
from datasets import load_dataset

dataset = load_dataset("yale-nlp/DocMath-Eval")

# print the first example from the complong testmini split
print(dataset["complong-testmini"][0])
```
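Assuming the dataset dict keys follow the `{subset}-{split}` pattern shown in the example above, a minimal sketch to enumerate every split:

```python
SUBSETS = ["simpshort", "simplong", "compshort", "complong"]
SPLITS = ["testmini", "test"]

def split_names():
    """Enumerate the '{subset}-{split}' keys of the dataset dict
    (assumed naming pattern, based on the example above)."""
    return [f"{s}-{sp}" for s in SUBSETS for sp in SPLITS]

# Example: report the size of every split (requires network access):
# from datasets import load_dataset
# dataset = load_dataset("yale-nlp/DocMath-Eval")
# for name in split_names():
#     print(name, len(dataset[name]))
```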
The dataset is provided in JSON format and contains the following attributes:

```
{
    "question_id": [string] The question id,
    "source": [string] The original source of the example (for the simpshort, simplong, and compshort sets),
    "original_question_id": [string] The original question id (for the simpshort, simplong, and compshort sets),
    "question": [string] The question text,
    "paragraphs": [list] List of paragraphs and tables within the document,
    "table_evidence": [list] List of indices in 'paragraphs' that are used as table evidence for the question,
    "paragraph_evidence": [list] List of indices in 'paragraphs' that are used as text evidence for the question,
    "python_solution": [string] Python-format and executable solution. This feature is hidden for the test set,
    "ground_truth": [float] Executed result of 'python_solution'. This feature is hidden for the test set
}
```
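The evidence index lists make it easy to slice the relevant context out of a long document, and `ground_truth` supports a simple numeric check. Below is a minimal sketch; the helpers and the numeric tolerance are illustrative, not part of the official evaluation code:

```python
import math

def gather_evidence(example):
    """Pull the table and text evidence out of 'paragraphs'
    using the index lists described above."""
    paras = example["paragraphs"]
    tables = [paras[i] for i in example["table_evidence"]]
    texts = [paras[i] for i in example["paragraph_evidence"]]
    return tables, texts

def check_answer(predicted, ground_truth, rel_tol=1e-3):
    """Loose numeric match against 'ground_truth'; the tolerance
    used by the official evaluation may differ."""
    try:
        return math.isclose(float(predicted), float(ground_truth), rel_tol=rel_tol)
    except (TypeError, ValueError):
        return False
```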
## Contact
For any issues or questions, kindly email us at: Yilun Zhao ([email protected]).
## Citation
If you use the DocMath-Eval benchmark in your work, please kindly cite the paper:
```bibtex
@misc{zhao2024docmatheval,
    title={DocMath-Eval: Evaluating Math Reasoning Capabilities of LLMs in Understanding Long and Specialized Documents},
    author={Yilun Zhao and Yitao Long and Hongjun Liu and Ryo Kamoi and Linyong Nan and Lyuhao Chen and Yixin Liu and Xiangru Tang and Rui Zhang and Arman Cohan},
    year={2024},
    eprint={2311.09805},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2311.09805},
}
```