---
dataset_info:
  features:
    - name: output
      dtype: string
    - name: instruction
      dtype: string
  splits:
    - name: arithmetic.float2_train
      num_bytes: 645500.3
      num_examples: 19000
    - name: arithmetic.float2_valid
      num_bytes: 33973.7
      num_examples: 1000
    - name: arithmetic.float3_train
      num_bytes: 1890863.85
      num_examples: 47500
    - name: arithmetic.float3_valid
      num_bytes: 99519.15
      num_examples: 2500
    - name: arithmetic.float34_train
      num_bytes: 9321513.05
      num_examples: 218500
    - name: arithmetic.float34_valid
      num_bytes: 490605.95
      num_examples: 11500
    - name: arithmetic.float4_train
      num_bytes: 21671996.6
      num_examples: 475000
    - name: arithmetic.float4_valid
      num_bytes: 1140631.4
      num_examples: 25000
  download_size: 27928049
  dataset_size: 35294604
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
tags:
  - math
  - finance
license: cc-by-nc-nd-4.0
task_categories:
  - text-generation
  - question-answering
pretty_name: Simple Math
size_categories:
  - 100K<n<1M
---
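Given the `configs` section above (a `default` config with `train` and `test` splits), the dataset can be pulled with the 🤗 `datasets` library. A minimal sketch; the repo id `fblgit/simple-math` is taken from the citation below, and the helper name is my own:

```python
# Split names from the `configs` section; the per-level
# arithmetic.float* splits listed under `dataset_info` may also be
# addressable depending on how the repo is laid out.
SPLITS = ["train", "test"]

def load_simple_math(split: str = "train"):
    """Load one split of fblgit/simple-math from the Hugging Face Hub."""
    if split not in SPLITS:
        raise ValueError(f"unknown split: {split!r}")
    # Lazy import: requires `pip install datasets` and network access.
    from datasets import load_dataset
    return load_dataset("fblgit/simple-math", split=split)

if __name__ == "__main__":
    ds = load_simple_math("train")
    # Each row has an `instruction` (the question) and an `output`
    # (the expected answer), per the features above.
    print(ds[0]["instruction"], "->", ds[0]["output"])
```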

# Simple Math: 2+2=4 4-1=3 (LoLo: Learning Only Logical Operations)

Just as my teacher gave me homework, I thought we could add some of these basics to the training of our models.

The dataset was created with very simple code that lives in the repo. If you add more complex operations and the like, please share the code :D thank you!

Current code version: `20240127.fblgit` (a modification of @win10's script for progressive complexity and DPO generation).
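The idea behind the generator can be sketched as follows. This is my own illustrative code, not the repo's script: the function names and the DPO "rejected" heuristic (perturbing the correct answer) are assumptions, but the fixed seed 42 and the float arithmetic follow the notes in this card:

```python
import operator
import random

random.seed(42)  # fixed seed, as noted in the Versions section

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def make_example(digits: int = 2) -> dict:
    """Build one instruction/output pair of simple float arithmetic."""
    scale = 10 ** digits
    a = round(random.uniform(-scale, scale), 2)
    b = round(random.uniform(-scale, scale), 2)
    op = random.choice(list(OPS))
    result = round(OPS[op](a, b), 4)
    return {"instruction": f"What is {a} {op} {b}?", "output": str(result)}

def make_dpo_example(digits: int = 2) -> dict:
    """Wrap an example as a DPO triple: prompt, chosen, rejected.

    The rejected answer is the correct one shifted by a small random
    offset (a hypothetical heuristic for illustration).
    """
    ex = make_example(digits)
    offset = random.choice([-1, 1]) * round(random.uniform(0.5, 5.0), 2)
    wrong = str(round(float(ex["output"]) + offset, 4))
    return {"prompt": ex["instruction"], "chosen": ex["output"], "rejected": wrong}

if __name__ == "__main__":
    for _ in range(3):
        print(make_example(digits=2))
    print(make_dpo_example(digits=3))
```

Increasing `digits` scales the operand range, mirroring the float2/float3/float4 splits in the metadata above.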

## Does it Work?

### 34BEAGLES Evaluation

```
hf (pretrained=/data/models/UNA-34Beagles-v1-final,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (8)
```
|    Tasks     |Version|Filter|n-shot| Metric |Value |   |Stderr|
|--------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge |Yaml   |none  |    25|acc     |0.7039|±  |0.0133|
|              |       |none  |    25|acc_norm|0.7321|±  |0.0129|
|truthfulqa_mc2|Yaml   |none  |     0|acc     |0.7387|±  |0.0141|

```
hf (pretrained=/data/models/UNA-34Beagles-v1-final,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
```
|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6399|±  |0.0132|

|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.7477|±  |0.1079|
| - humanities     |N/A    |none  |     0|acc   |0.7188|±  |0.0855|
| - other          |N/A    |none  |     0|acc   |0.7950|±  |0.1057|
| - social_sciences|N/A    |none  |     0|acc   |0.8297|±  |0.0664|
| - stem           |N/A    |none  |     0|acc   |0.6641|±  |0.1291|

### 34BEAGLES-MATH Evaluation

```
hf (pretrained=/data/models/34BeaglesMath-v1,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
```
|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6505|±  |0.0131|

```
hf (pretrained=/data/models/34BeaglesMath-v1,dtype=bfloat16,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto (8)
```
|    Tasks     |Version|Filter|n-shot| Metric |Value |   |Stderr|
|--------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge |Yaml   |none  |    25|acc     |0.7090|±  |0.0133|
|              |       |none  |    25|acc_norm|0.7329|±  |0.0129|
|truthfulqa_mc2|Yaml   |none  |     0|acc     |0.7378|±  |0.0141|

|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.7524|±  |0.1045|
| - humanities     |N/A    |none  |     0|acc   |0.7307|±  |0.0846|
| - other          |N/A    |none  |     0|acc   |0.7937|±  |0.1029|
| - social_sciences|N/A    |none  |     0|acc   |0.8274|±  |0.0667|
| - stem           |N/A    |none  |     0|acc   |0.6708|±  |0.1236|

It gets better: when length and complexity increase, the scores improve further:

|Tasks|Version|  Filter  |n-shot|  Metric   |Value |   |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml   |get-answer|     5|exact_match|0.6611|±  | 0.013|

That is a 3.20% GSM8K improvement compared to its base model.

## Note to contributors

Thank you to everyone contributing to the experiment with beautiful commits and good spirit.

- Feel free to contribute to the README evaluation tests.
- Let's aim to build an ablation study and a paper together. All contributors will be cited.

## Versions

- **27.01.24** Added new code to generate the dataset with seed 42; it now also generates DPO pairs.
- **24.01.24** Added gradual complexity in a separate script.
- **20-23.01.24** Multiple contributions adding operations and increased complexity to the main generator script.

## Citations

If you use Simple Math to train your model, please cite it in the model card or the paper.

```bibtex
@misc{simplemath,
  title={Simple-Math: 2+2=4 4-1=3},
  author={Xavier Murias},
  year={2024},
  publisher={Juanako.AI},
  journal={HuggingFace repository},
  howpublished={\url{https://huggingface.co/datasets/fblgit/simple-math}},
}
```