Adding Evaluation Results

#5
by acrastt - opened
Files changed (1)
  1. README.md +14 -1
README.md CHANGED
@@ -23,4 +23,17 @@ Prompt template:
 GGML quants available [here](https://huggingface.co/TheBloke/Puma-3b-GGML).</br>
 GPTQ quants available [here](https://huggingface.co/TheBloke/Puma-3b-GPTQ).
 
-Note: Don't expect this model to be good, I was just starting out to finetune. So don't roast me please!
+Note: Don't expect this model to be good, I was just starting out to finetune. So don't roast me please!
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Puma-3B)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 35.93 |
+| ARC (25-shot)       | 41.3  |
+| HellaSwag (10-shot) | 71.85 |
+| MMLU (5-shot)       | 27.51 |
+| TruthfulQA (0-shot) | 38.34 |
+| Winogrande (5-shot) | 66.38 |
+| GSM8K (5-shot)      | 0.76  |
+| DROP (3-shot)       | 5.38  |