---
license: apache-2.0
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65ae21adabf6d1ccb795e9a4/2K48kJlYndyPbiwVqwaRj.jpeg)

# Dataset Card for Omni-MATH

<!-- Provide a quick summary of the dataset. -->

Recent advances in AI, particularly in large language models (LLMs), have led to significant breakthroughs in mathematical reasoning. However, existing benchmarks such as GSM8K and MATH are now solved with high accuracy (e.g., OpenAI o1 achieves 94.8% on the MATH dataset), indicating that they no longer truly challenge these models. To address this limitation, we propose a comprehensive and challenging benchmark specifically designed to assess LLMs' mathematical reasoning at the Olympiad level. Unlike existing Olympiad-related benchmarks, our dataset focuses exclusively on mathematics and comprises 4,428 competition-level problems. These problems are meticulously categorized into 33 (and potentially more) sub-domains and span 10 distinct difficulty levels, enabling nuanced analysis of model performance across mathematical disciplines and levels of complexity.

* Project Page: https://omni-math.github.io/
* Github Repo: https://github.com/KbsdJames/Omni-MATH
* Omni-Judge (open-source evaluator for this dataset): https://huggingface.co/KbsdJames/Omni-Judge

## Dataset Details

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

```python
from datasets import load_dataset

dataset = load_dataset("KbsdJames/Omni-MATH")
```

For further evaluation of models on this benchmark, please refer to our GitHub repository: https://github.com/KbsdJames/Omni-MATH

## Citation

If you find our code helpful or use our benchmark dataset, please cite our paper (to be added).