MathFish
This dataset is introduced by "Evaluating Language Model Math Reasoning via Grounding in Educational Curricula", and includes problems drawn from two open educational resources (OER): Illustrative Mathematics and Fishtank Learning. Problems are labeled with mathematical standards: K-12 skills and concepts that the problems enable students to learn. These standards are defined and organized by the Common Core State Standards.
Additional components of MathFish can be found at:
- allenai/achieve-the-core: Common Core mathematical standards and their descriptions
- allenai/mathfish_tasks: MathFish's dev set problems inserted into verification and tagging prompts for language models
Code to support MathFish can be found in this GitHub repository.
Dataset Details
Dataset Description
Common Core State Standards (CCSS) offer fine-grained and comprehensive coverage of K-12 math skills/concepts. We scrape labeled problems from two reputable OER that span a wide range of grade levels and standards: Illustrative Mathematics and Fishtank Learning. Each problem is a segment of these materials demarcated by standards labels, and a problem may be labeled with multiple standards.
Number of problems: 4356 in `dev.jsonl`, 4355 in `test.jsonl`, and 13065 in `train.jsonl`. In total, 21776 K-12 math problems.
Number of images: 1848 in `fl_problem`, 11736 in `im_lesson`, 27 in `im_modelingprompt`, 3497 in `im_practice`, and 860 in `im_task`. In total, 17968 math images.
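Note that each split is a JSON Lines file (one JSON object per line), not a single JSON array, so it should be parsed line by line. A minimal sketch of loading a split; `load_problems` is a hypothetical helper name, not part of the released code:

```python
import json

def load_problems(path):
    """Read a JSON Lines file (e.g. dev.jsonl): one problem per line."""
    problems = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                problems.append(json.loads(line))
    return problems
```

For example, `load_problems("dev.jsonl")` should return a list of 4356 problem dicts.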
- Curated by: Lucy Li, Tal August, Rose E Wang, Luca Soldaini, Courtney Allison, Kyle Lo
- Funded by: The Gates Foundation
- Language(s) (NLP): English
- License: ODC-By 1.0
Uses
Direct Use
This dataset was originally created to evaluate models' abilities to identify math skills and concepts using publisher-labeled data pulled from curricular websites. This data may support investigations into the use of language models to support K-12 education.
Illustrative Mathematics is licensed under CC BY 4.0, while the Fishtank Learning component is licensed under Creative Commons BY-NC-SA 4.0. Both sources are intended to be OER, defined as teaching, learning, and research materials that grant users free and perpetual permission to "retain, reuse, revise, remix, and redistribute" them for educational purposes.
Out-of-Scope Use
Note that Fishtank Learning's original license prohibits commercial use.
Dataset Structure
Each `*.jsonl` file contains one problem or activity per line:
```
{
  id: '',  # globally unique
  text: 'string representing activity or problem',
  metadata: {source id, unit, lesson, other location data, url if possible, html version},  # source-specific
  acquisition_date: '',  # YYYY-MM-DD
  elements: {identifier: name of image file or html of table},  # table, img, figure interwoven with text
  standards: [list of (relation, standard)],  # relation could be addressing, alignment, building towards, etc.
  source: '',
}
```
Note: Among standard relation types, `Addressing` == `Alignment`, and we evaluate on these in our paper. Future work may investigate other types of relations between problems and math skills/concepts. Not all problems in each file contain standards.
Images are in the `images` folder, in zipped files named after image filenames' prefixes: `fl_problem`, `im_lesson`, `im_modelingprompt`, `im_practice`, and `im_task`.
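Because archives are keyed by filename prefix, an image filename referenced in a problem's `elements` field can be mapped back to its archive by prefix matching. A minimal sketch, assuming each archive is named `<prefix>.zip`; `zip_for_image` is a hypothetical helper:

```python
def zip_for_image(filename):
    """Map an image filename to the zip archive that should contain it,
    based on the filename prefixes listed above."""
    prefixes = ("fl_problem", "im_lesson", "im_modelingprompt",
                "im_practice", "im_task")
    for prefix in prefixes:
        if filename.startswith(prefix):
            return prefix + ".zip"
    return None  # filename does not match any known prefix
```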
Dataset Creation
Curation Rationale
Math standards are informed by human learning progressions and are commonly used in real-world reviews of math content. In education, materials have focused alignment with a standard if they enable students to learn the full intent of the concepts/skills described by that standard. Identifying alignment can thus inform educators whether a set of materials adequately targets core learning goals for students.
Data Collection and Processing
We pull problems from several parts of the Illustrative Mathematics curriculum: tasks, centers, practice problems, lessons, and modeling prompts. For Fishtank Learning, we pull problems from the lessons section of their website. What is considered a "lesson" and what is considered a "problem" or "task" is an artifact of the materials themselves. Some problems are hands-on group activities, while others are assessment-type problems.
Who are the source data producers?
Illustrative Mathematics and Fishtank Learning are nonprofit educational organizations in the United States.
Bias, Risks, and Limitations
Though these problems offer substantial coverage of a common K-12 curriculum in the United States, they may not directly translate to pedagogical standards or practices in other socio-cultural contexts.
Recommendations
Though language models have the potential to automate the task of identifying standards alignment in curricula or to improve educational instruction, their role in education should be a supporting, rather than leading, one. To design such tools, we believe that it is best to co-create with teachers and curriculum specialists.
Citation
@misc{lucy2024evaluatinglanguagemodelmath,
title={Evaluating Language Model Math Reasoning via Grounding in Educational Curricula},
author={Li Lucy and Tal August and Rose E. Wang and Luca Soldaini and Courtney Allison and Kyle Lo},
year={2024},
eprint={2408.04226},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2408.04226},
}
Dataset Card Contact