Commit b8cae9b
Parent: 0dd0086

Adding Evaluation Results (#3)

- Adding Evaluation Results (e9761e2217b5d5c5bd92e3e2ccdd808b602c06a1)


Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>

Files changed (1)
1. README.md +117 -0
README.md CHANGED
@@ -1,6 +1,109 @@
 ---
 library_name: peft
 base_model: TheBloke/Llama-2-13B-fp16
+model-index:
+- name: minotaur-llama2-13b-qlora
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 60.07
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 82.42
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 55.87
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 45.57
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 76.24
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 12.05
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ehartford/minotaur-llama2-13b-qlora
+      name: Open LLM Leaderboard
 ---
 ## Training procedure
 
@@ -69,3 +172,17 @@ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-le
 | Winogrande (5-shot) | 76.24 |
 | GSM8K (5-shot) | 12.05 |
 | DROP (3-shot) | 14.53 |
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ehartford__minotaur-llama2-13b-qlora)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |55.37|
+|AI2 Reasoning Challenge (25-Shot)|60.07|
+|HellaSwag (10-Shot) |82.42|
+|MMLU (5-Shot) |55.87|
+|TruthfulQA (0-shot) |45.57|
+|Winogrande (5-shot) |76.24|
+|GSM8k (5-shot) |12.05|
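The `model-index` block added above follows the Hugging Face model-card metadata schema, so once this commit lands the scores become machine-readable rather than just a table in the README. Below is a minimal sketch of how they could be read back and the leaderboard average recomputed, assuming the `huggingface_hub` client library is installed and that the repo id matches the one in the leaderboard query URLs above (`ehartford/minotaur-llama2-13b-qlora`):

```python
# Minimal sketch: read the model-index metadata added by this commit and
# recompute the leaderboard average. Assumes `huggingface_hub` is installed
# and the repo id matches the leaderboard query URLs in the diff above.
from huggingface_hub import ModelCard

card = ModelCard.load("ehartford/minotaur-llama2-13b-qlora")

# `eval_results` is parsed from the `model-index` YAML block of the card.
scores = [(r.dataset_name, r.metric_value) for r in card.data.eval_results]
for name, value in scores:
    print(f"{name:<35} {value:>6.2f}")

# The "Avg." row in the table is the plain mean of the six benchmark scores:
# (60.07 + 82.42 + 55.87 + 45.57 + 76.24 + 12.05) / 6 = 55.37
avg = sum(value for _, value in scores) / len(scores)
print(f"{'Avg.':<35} {avg:>6.2f}")  # -> 55.37
```

Storing the results in `model-index` form is what lets the leaderboard space (linked in each `source.url`) associate every number with the model page automatically.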