TinyMistral-248M-v2.5
This model was created by merging TinyMistral-248M-v1 and v2 and then further pretraining the merge on synthetic textbooks. Based on my own evaluation, the resulting model outperforms both of its parents.
During training, this model reached an average perplexity of 4, nearly 7x lower than V1 and 4x lower than V2.
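Perplexity here is simply the exponential of the mean cross-entropy loss on text. The following is a minimal sketch of how that kind of number can be measured for this model with `transformers`; the evaluation sentence is a stand-in, not the actual training data behind the figure above.

```python
# Hedged sketch: perplexity = exp(mean token negative log-likelihood).
# The text below is a placeholder example, not the training corpus.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/TinyMistral-248M-v2.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

text = "Cells are the basic structural units of all living organisms."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss.
    loss = model(**enc, labels=enc["input_ids"]).loss

print(f"perplexity: {math.exp(loss.item()):.2f}")
```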
You can use the following config to reproduce the merged model:
```yaml
base_model: Locutusque/TinyMistral-248M-v2
dtype: float16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 12]
    model: Locutusque/TinyMistral-248M
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 12]
    model: Locutusque/TinyMistral-248M-v2
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
```
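To run the merge, save the config above to a file and pass it to mergekit's `mergekit-yaml` command. A minimal sketch, assuming mergekit is installed (`pip install mergekit`); the config filename and output directory are arbitrary placeholders:

```python
# Hedged sketch: invoke the mergekit-yaml CLI from Python.
# "tinymistral-v2.5-merge.yaml" should contain the YAML config shown above.
import subprocess
from pathlib import Path

config_path = Path("tinymistral-v2.5-merge.yaml")  # placeholder filename
output_dir = "TinyMistral-248M-v2.5-merged"        # placeholder output directory

subprocess.run(
    ["mergekit-yaml", str(config_path), output_dir],
    check=True,
)
```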
This model can also answer basic questions without any fine-tuning.
This model was also created to address an issue with V2: its weights were prone to exploding gradients, which made it difficult to fine-tune. V2.5 is easier to fine-tune.
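As a rough illustration of what fine-tuning this model looks like, here is a minimal causal-LM training sketch with gradient clipping as a safeguard against the exploding-gradient behaviour reported for V2. The texts, hyperparameters, and step count are illustrative placeholders only, not an actual fine-tuning recipe.

```python
# Hedged sketch: a few toy fine-tuning steps with gradient clipping.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/TinyMistral-248M-v2.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # ensure a pad token exists
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder texts standing in for a real fine-tuning dataset.
texts = [
    "Photosynthesis converts light energy into chemical energy.",
    "Water boils at 100 degrees Celsius at sea level.",
]
enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=128)
labels = enc["input_ids"].clone()
labels[enc["attention_mask"] == 0] = -100  # ignore padding positions in the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for step in range(3):  # a few toy optimisation steps
    loss = model(**enc, labels=labels).loss
    loss.backward()
    # Clip gradients to guard against exploding gradients.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    optimizer.zero_grad()
    print(f"step {step}: loss={loss.item():.3f}")
```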
To get the best out of this model, I recommend downloading it and trying it out yourself, as its performance seems to degrade in the hosted Inference API.
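For example, the model can be loaded locally with `transformers` and prompted with a basic question. The prompt format and generation settings below are illustrative, not tuned recommendations.

```python
# Hedged sketch: local inference with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/TinyMistral-248M-v2.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Question: What is photosynthesis?\nAnswer:"  # illustrative prompt format
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```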
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 28.29 |
| AI2 Reasoning Challenge (25-Shot) | 24.57 |
| HellaSwag (10-Shot)               | 27.49 |
| MMLU (5-Shot)                     | 23.15 |
| TruthfulQA (0-Shot)               | 46.72 |
| Winogrande (5-Shot)               | 47.83 |
| GSM8k (5-Shot)                    | 0.00  |
Open LLM Leaderboard 2 Evaluation Results
Detailed results can be found here
| Metric              | Value |
|---------------------|-------|
| Avg.                | 3.87  |
| IFEval (0-Shot)     | 13.36 |
| BBH (3-Shot)        | 3.18  |
| MATH Lvl 5 (4-Shot) | 0.00  |
| GPQA (0-Shot)       | 0.11  |
| MuSR (0-Shot)       | 5.07  |
| MMLU-PRO (5-Shot)   | 1.50  |