
A Llama Chat Model of 101M Parameters

Recommended Prompt Format

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
```

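This is the ChatML template. Below is a minimal sketch, assuming the model's tokenizer ships this template as its chat template, of rendering the prompt with the Hugging Face transformers library; the message contents are illustrative, not part of this card.

```python
# Minimal sketch: build the ChatML prompt via the tokenizer's chat template.
# Assumes the tokenizer ships the template shown above; messages are examples.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Felladrin/Smol-Llama-101M-Chat-v1")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant\n"
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```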
Recommended Inference Parameters

```
penalty_alpha: 0.5
top_k: 4
repetition_penalty: 1.105
```

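These settings correspond to contrastive search (penalty_alpha combined with a small top_k) plus a repetition penalty. Below is a minimal sketch of passing them to transformers' generate(); max_new_tokens and the messages are illustrative choices, not part of this card.

```python
# Minimal sketch: generate with the recommended parameters (contrastive search
# via penalty_alpha/top_k, plus repetition_penalty). Everything other than the
# three recommended parameters is an illustrative assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Felladrin/Smol-Llama-101M-Chat-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Write a one-sentence greeting."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(
    input_ids,
    penalty_alpha=0.5,
    top_k=4,
    repetition_penalty=1.105,
    max_new_tokens=128,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```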
Open LLM Leaderboard Evaluation Results

Detailed results are available on the Open LLM Leaderboard.

| Metric                             | Value |
|------------------------------------|-------|
| Avg.                               | 28.73 |
| AI2 Reasoning Challenge (25-shot)  | 22.87 |
| HellaSwag (10-shot)                | 28.69 |
| MMLU (5-shot)                      | 24.93 |
| TruthfulQA (0-shot)                | 45.76 |
| Winogrande (5-shot)                | 50.04 |
| GSM8k (5-shot)                     | 0.08  |