
Math-chunk-refining-lm

ArXiv | Code

Math-chunk-refining-lm is an adapted 0.3B-ProX model, fine-tuned for chunk-level refining via program generation. It can be applied to math pre-training corpora such as open-web-math.
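The snippet below is a minimal usage sketch with the standard Hugging Face transformers API. The prompt format and post-processing of the generated refining program are defined in the ProX codebase; the example chunk and generation settings here are illustrative assumptions only.

```python
# Minimal sketch: load the model and generate a refining program for one chunk.
# Assumes the standard transformers causal-LM interface; the exact prompt
# template and program executor come from the ProX repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gair-prox/math-chunk-refining-lm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A chunk drawn from a math pre-training corpus (placeholder text).
chunk = "Theorem 1.2. Let f be a continuous function on [a, b] ..."

inputs = tokenizer(chunk, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# The model emits a small refining "program" (e.g., keep/remove operations)
# rather than rewritten text; it is executed by the ProX tooling.
generated = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```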

Citation

@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
Model size: 354M parameters (Safetensors, F32)
