Adding Evaluation Results

#2
Files changed (1)
  1. README.md (+14 -1)
README.md CHANGED
@@ -1,4 +1,17 @@
  CAMEL-13B-Role-Playing-Data is a chat large language model obtained by finetuning LLaMA-13B model on a total of 229K conversations created through our role-playing framework proposed in [CAMEL](https://arxiv.org/abs/2303.17760). We evaluate our model offline using EleutherAI's language model evaluation harness used by Huggingface's Open LLM Benchmark. CAMEL-13B scores an average of 57.2.
  ---
  license: cc-by-nc-4.0
- ---
+ ---
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_camel-ai__CAMEL-13B-Role-Playing-Data)
+
+ | Metric                | Value |
+ |-----------------------|-------|
+ | Avg.                  | 45.03 |
+ | ARC (25-shot)         | 54.95 |
+ | HellaSwag (10-shot)   | 79.25 |
+ | MMLU (5-shot)         | 46.61 |
+ | TruthfulQA (0-shot)   | 46.35 |
+ | Winogrande (5-shot)   | 74.03 |
+ | GSM8K (5-shot)        | 7.35  |
+ | DROP (3-shot)         | 6.66  |
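As a sanity check on the added table, the leaderboard's "Avg." row appears to be the plain unweighted mean of the seven benchmark scores (note it differs from the 57.2 quoted in the card's own text, which comes from a separate offline run). A minimal sketch, assuming simple averaging and two-decimal rounding:

```python
# Benchmark scores copied from the PR's results table.
scores = {
    "ARC (25-shot)": 54.95,
    "HellaSwag (10-shot)": 79.25,
    "MMLU (5-shot)": 46.61,
    "TruthfulQA (0-shot)": 46.35,
    "Winogrande (5-shot)": 74.03,
    "GSM8K (5-shot)": 7.35,
    "DROP (3-shot)": 6.66,
}

# Unweighted mean, rounded to two decimals as on the leaderboard.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 45.03, matching the "Avg." row
```

This matches the table's 45.03, so the average row is internally consistent with the per-task scores.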