leaderboard-pr-bot
committed
Commit ca31669
Parent(s): 7e45fc9
Adding Evaluation Results

This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr
The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.
If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
README.md
CHANGED
```diff
@@ -1,19 +1,19 @@
 ---
+language:
+- nl
 license: apache-2.0
-base_model: Rijgersberg/GEITje-7B
 tags:
 - generated_from_trainer
 - GEITje
-model-index:
-- name: GEITje-7B-chat-v2
-  results: []
 datasets:
 - Rijgersberg/no_robots_nl
 - Rijgersberg/ultrachat_10k_nl
 - BramVanroy/dutch_chat_datasets
-language:
-- nl
+base_model: Rijgersberg/GEITje-7B
 pipeline_tag: conversational
+model-index:
+- name: GEITje-7B-chat-v2
+  results: []
 ---
 # GEITje-7B-chat-v2
 
@@ -99,4 +99,17 @@ The following hyperparameters were used during training:
 - Transformers 4.36.0.dev0
 - Pytorch 2.1.1+cu121
 - Datasets 2.15.0
-- Tokenizers 0.15.0
+- Tokenizers 0.15.0
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Rijgersberg__GEITje-7B-chat-v2)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |50.79|
+|AI2 Reasoning Challenge (25-Shot)|50.34|
+|HellaSwag (10-Shot)              |74.13|
+|MMLU (5-Shot)                    |49.00|
+|TruthfulQA (0-shot)              |43.55|
+|Winogrande (5-shot)              |71.51|
+|GSM8k (5-shot)                   |16.22|
+
```
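As a quick sanity check (not part of the PR itself), the `Avg.` row that the bot adds is simply the arithmetic mean of the six benchmark scores, rounded to two decimals. A minimal sketch:

```python
# Sanity check: the "Avg." value in the added table is the arithmetic mean
# of the six benchmark scores, rounded to two decimal places.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 50.34,
    "HellaSwag (10-Shot)": 74.13,
    "MMLU (5-Shot)": 49.00,
    "TruthfulQA (0-shot)": 43.55,
    "Winogrande (5-shot)": 71.51,
    "GSM8k (5-shot)": 16.22,
}

average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 50.79, matching the Avg. row in the table
```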