# Puma-3B
---
license: apache-2.0
datasets:
  - totally-not-an-llm/sharegpt-hyperfiltered-3k
language:
  - en
library_name: transformers
pipeline_tag: text-generation
---

Buy Me A Coffee

This is OpenLLaMA 3B V2 finetuned on ShareGPT Hyperfiltered for 1 epoch.

Prompt template:

```
### HUMAN:
{prompt}

### RESPONSE:
<leave a newline for the model to answer>
```
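Building a prompt in this template can be sketched as follows; the helper name `format_prompt` is illustrative and not part of the model card, but the template string matches the format above:

```python
# Template from the model card: the model generates its answer after the
# newline that follows "### RESPONSE:".
TEMPLATE = "### HUMAN:\n{prompt}\n\n### RESPONSE:\n"

def format_prompt(prompt: str) -> str:
    """Wrap a user message in the HUMAN/RESPONSE prompt template."""
    return TEMPLATE.format(prompt=prompt)

print(format_prompt("What is the capital of France?"))
```

The resulting string is what you would pass to the tokenizer/generation pipeline.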

GGML quants available here.
GPTQ quants available here.

Note: Don't expect this model to be particularly good; it was one of my first finetuning attempts. So please don't roast me!

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 35.93 |
| ARC (25-shot) | 41.3 |
| HellaSwag (10-shot) | 71.85 |
| MMLU (5-shot) | 27.51 |
| TruthfulQA (0-shot) | 38.34 |
| Winogrande (5-shot) | 66.38 |
| GSM8K (5-shot) | 0.76 |
| DROP (3-shot) | 5.38 |
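As a quick sanity check, the reported average is the arithmetic mean of the seven benchmark scores:

```python
# Benchmark scores from the table above; Avg. should be their mean.
scores = {
    "ARC (25-shot)": 41.3,
    "HellaSwag (10-shot)": 71.85,
    "MMLU (5-shot)": 27.51,
    "TruthfulQA (0-shot)": 38.34,
    "Winogrande (5-shot)": 66.38,
    "GSM8K (5-shot)": 0.76,
    "DROP (3-shot)": 5.38,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 35.93
```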