Qwen1.5-0.5B-vortex-v2 model card
Qwen1.5-0.5B-vortex-v2 is a dealigned chat finetune of the fantastic original Qwen1.5-0.5B model by the Qwen team.
This model was trained on the Vortex mini dataset and alpaca-cleaned for 4 epochs using axolotl.
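For reference, a minimal usage sketch with the 🤗 Transformers library is shown below. The repository id `Abhaykoul/Qwen1.5-0.5B-vortex-v2` is assumed from the card title and the Abhaykoul namespace; adjust it if the actual repository name differs.

```python
# Minimal usage sketch (assumptions: transformers installed, repo id is Abhaykoul/Qwen1.5-0.5B-vortex-v2)
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Abhaykoul/Qwen1.5-0.5B-vortex-v2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Qwen1.5 chat models ship a ChatML-style chat template; use it to build the prompt.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```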
Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 36.45 |
| AI2 Reasoning Challenge (25-Shot) | 30.63 |
| HellaSwag (10-Shot) | 45.54 |
| MMLU (5-Shot) | 36.29 |
| TruthfulQA (0-shot) | 44.29 |
| Winogrande (5-shot) | 56.04 |
| GSM8k (5-shot) | 5.91 |
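These scores come from the Open LLM Leaderboard's evaluation harness. A rough local reproduction sketch using EleutherAI's lm-evaluation-harness Python API (v0.4+ assumed) is given below; the exact leaderboard configuration and harness version may differ, so local numbers will not match exactly.

```python
# Rough sketch: re-run one leaderboard task locally with lm-evaluation-harness.
# Assumptions: lm-eval >= 0.4 is installed (pip install lm-eval) and the repo id below is correct.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Abhaykoul/Qwen1.5-0.5B-vortex-v2",  # assumed repo id
    tasks=["arc_challenge"],   # AI2 Reasoning Challenge
    num_fewshot=25,            # the leaderboard uses 25-shot for ARC
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```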
Evaluation results (Open LLM Leaderboard)
- AI2 Reasoning Challenge (25-Shot), test set, normalized accuracy: 30.63
- HellaSwag (10-Shot), validation set, normalized accuracy: 45.54
- MMLU (5-Shot), test set, accuracy: 36.29
- TruthfulQA (0-shot), validation set, mc2: 44.29
- Winogrande (5-shot), validation set, accuracy: 56.04
- GSM8k (5-shot), test set, accuracy: 5.91