AndrewZeng committed
Commit 3539045 (parent: 8cba5f8)

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -11,6 +11,8 @@ base_model: meta-llama/Llama-2-13b-hf
 
 # Model Card for Deita Llama2 13B V1.0 SFT
 
+[GitHub](https://github.com/hkust-nlp/deita) | [Paper](https://arxiv.org/abs/2312.15685)
+
 Deita is an open-sourced project designed to facilitate **Automatic Data Selection** for instruction tuning in Large Language Models (LLMs).
 Deita Llama2 13B V1.0 SFT is a fine-tuned version of Llama 2 that was trained on 10k automatically selected lightweight, high-quality alignment SFT data: [Deita 10K V0](https://huggingface.co/datasets/hkust-nlp/deita-10k-v0).
 
@@ -30,8 +32,7 @@ Deita Llama2 13B V1.0 SFT is a fine-tuned version of Llama 2 that was trained on
 ## Performance
 
 
-<details>
-<summary>See full evaluations</summary>
+
 
 | Model | Align | Data Size | MT-Bench | AlpacaEval(%) | OpenLLM (Avg.) |
 |------------------------------------------------|-----------|------------|----------|---------------|----------------|
@@ -66,7 +67,7 @@ Deita Llama2 13B V1.0 SFT is a fine-tuned version of Llama 2 that was trained on
 | DEITA-7B-v1.0 | SFT + DPO | 6K SFT + 10K DPO | 7.55 | 90.06 | 69.86 |
 
 
-</details>
+
 
 
 
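
For readers who want to try the SFT model this README describes, here is a minimal sketch of loading it with Hugging Face `transformers`. The repository id `hkust-nlp/deita-llama2-13b-v1.0-sft` and the plain-text prompt are assumptions for illustration only; check the model page for the exact id and any chat template the model expects.

```python
# Minimal sketch of loading the Deita Llama2 13B SFT model for inference.
# NOTE: the repo id below is an assumption; confirm it on the model page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hkust-nlp/deita-llama2-13b-v1.0-sft"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 13B model fits on a single large GPU
    device_map="auto",          # place layers across available devices automatically
)

prompt = "Explain what automatic data selection for instruction tuning means."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens and print only the newly generated continuation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```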