Update README.md
README.md
CHANGED
@@ -16,3 +16,27 @@ tags:
- trl
- sft
---
+
+# xsanskarx/qwen2-0.5b_numina_math-instruct
+
+This repository contains a fine-tuned version of the Qwen-2 0.5B model, specifically optimized for mathematical instruction understanding and reasoning. It builds upon the Numina dataset, which provides a rich source of mathematical problems and solutions designed to enhance reasoning capabilities even in smaller language models.
+
+## Motivation
+
+My primary motivation is the hypothesis that high-quality datasets focused on mathematical reasoning can significantly improve the performance of smaller models on tasks that require logical deduction and problem-solving. Uploading benchmark results is the next step in evaluating this claim.
+
+## Model Details
+
+* **Base Model:** Qwen-2 0.5B
+* **Fine-tuning Dataset:** Numina CoT (a minimal training sketch follows below)
+* **Key Improvements:** Enhanced ability to parse mathematical instructions, solve problems, and provide step-by-step explanations.
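+
+The repository's `trl`/`sft` tags suggest the model was trained with TRL's `SFTTrainer`. Below is a minimal sketch of that kind of run; the dataset id, base-model id, and hyperparameters are assumptions for illustration, not the exact recipe used here:
+
+```python
+from datasets import load_dataset
+from trl import SFTConfig, SFTTrainer
+
+# NuminaMath CoT problems with step-by-step solutions (assumed dataset id)
+dataset = load_dataset("AI-MO/NuminaMath-CoT", split="train")
+
+trainer = SFTTrainer(
+    model="Qwen/Qwen2-0.5B",  # assumed base-model id
+    train_dataset=dataset,
+    args=SFTConfig(output_dir="qwen2-0.5b_numina_math-instruct"),
+)
+trainer.train()
+```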
+
+## Usage
+
+You can load and use this model with the Hugging Face Transformers library:
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+# Load the fine-tuned tokenizer and model from the Hub
+tokenizer = AutoTokenizer.from_pretrained("xsanskarx/qwen2-0.5b_numina_math-instruct")
+model = AutoModelForCausalLM.from_pretrained("xsanskarx/qwen2-0.5b_numina_math-instruct")
+```
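+
+A minimal generation sketch follows; the example prompt and decoding settings are illustrative, and the chat-template call assumes the tokenizer ships Qwen-2's standard chat template:
+
+```python
+# Ask a short math question (hypothetical example prompt)
+messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x?"}]
+input_ids = tokenizer.apply_chat_template(
+    messages, add_generation_prompt=True, return_tensors="pt"
+)
+output_ids = model.generate(input_ids, max_new_tokens=256)
+# Decode only the newly generated tokens, skipping the prompt
+print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
+```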