Adding Evaluation Results #7
by leaderboard-pr-bot - opened

README.md CHANGED
@@ -38,3 +38,17 @@ Github:[**Llama2-Chinese**](https://github.com/FlagAlpha/Llama2-Chinese)
- [Chinese Q&A capability evaluation](https://github.com/FlagAlpha/Llama2-Chinese/tree/main#-%E6%A8%A1%E5%9E%8B%E8%AF%84%E6%B5%8B) of the Llama2 Chat model!
- [Community Feishu knowledge base](https://chinesellama.feishu.cn/wiki/space/7257824476874768388?ccm_open_type=lark_wiki_spaceLink), everyone is welcome to help build it together!

+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FlagAlpha__Llama2-Chinese-13b-Chat)
+
+| Metric              | Value |
+|---------------------|-------|
+| Avg.                | 53.57 |
+| ARC (25-shot)       | 55.97 |
+| HellaSwag (10-shot) | 82.05 |
+| MMLU (5-shot)       | 54.74 |
+| TruthfulQA (0-shot) | 48.9  |
+| Winogrande (5-shot) | 76.16 |
+| GSM8K (5-shot)      | 12.59 |
+| DROP (3-shot)       | 44.6  |
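For readers checking the numbers: the Avg. row appears to be the unweighted mean of the seven benchmark scores listed below it, which matches to two decimal places. A minimal sketch of that arithmetic, assuming simple averaging (the score dictionary just restates the table above):

```python
# Assumption: the leaderboard's Avg. is the unweighted mean of the seven
# benchmark scores; the values below are copied from the table in the diff.
scores = {
    "ARC (25-shot)": 55.97,
    "HellaSwag (10-shot)": 82.05,
    "MMLU (5-shot)": 54.74,
    "TruthfulQA (0-shot)": 48.9,
    "Winogrande (5-shot)": 76.16,
    "GSM8K (5-shot)": 12.59,
    "DROP (3-shot)": 44.6,
}

avg = sum(scores.values()) / len(scores)
print(f"Avg. = {avg:.2f}")  # prints 53.57, matching the Avg. row
```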