---
license: mit
datasets:
- sail/regmix-data
- sail/regmix-data-sample
language:
- en
---

# Models Trained with Human Selection

This is a collection of language models trained with the human-selected data mixture, each with approximately 1B parameters and trained with a different random seed. This project aims to validate the generalization capability of the RegMix approach (https://huggingface.co/papers/2407.01492) from small-scale (e.g., 1M-parameter) to large-scale (e.g., 1B-parameter) models.

## Key Features

- **Model Size**: 5 separate models trained with different seeds, each with ~1B parameters
- **Training Data**: The human-selected data mixture (from The Pile paper) applied to the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset
- **Purpose**: Human selection serves as a strong baseline for our method, RegMix

## Dataset

The models were trained using the [RegMix-Data](https://huggingface.co/datasets/sail/regmix-data) dataset, which splits The Pile into its individual domains.

## Training Hyperparameters

| Hyperparameter | Value |
|:---------------|:------|
| Batch Size | 1M tokens |
| Learning Rate | 4e-4 |
| Minimum Learning Rate | 1e-5 |
| Learning Rate Schedule | Cosine |
| Warmup Ratio | 4% |
| Total Tokens | 25B |
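
Taken together, these settings imply roughly 25,000 optimizer steps (25B tokens at 1M tokens per batch), of which about 1,000 steps (4%) are warmup. Below is a minimal sketch of such a schedule, assuming linear warmup followed by cosine decay to the minimum learning rate; it illustrates the table above and is not the actual training code:

```python
import math

def learning_rate(step: int,
                  max_lr: float = 4e-4,
                  min_lr: float = 1e-5,
                  total_steps: int = 25_000,   # 25B tokens / 1M tokens per batch
                  warmup_steps: int = 1_000) -> float:  # ~4% of total steps
    """Linear warmup, then cosine decay from max_lr down to min_lr."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

# Peak after warmup, mid-training value, and final value (= min_lr).
print(learning_rate(1_000), learning_rate(13_000), learning_rate(25_000))
```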

## How to Load a Model

You can load any model using the corresponding branch with the Hugging Face Transformers library:

```python
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("sail/data-mixture-human-1b", revision="seed-1")
tokenizer = AutoTokenizer.from_pretrained("sail/data-mixture-human-1b", revision="seed-1")
```
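
Note that `AutoModel` returns the bare backbone without a language-modeling head; for text generation you would load the checkpoint with `AutoModelForCausalLM` instead. A minimal, illustrative generation sketch (the prompt and decoding settings below are examples, not part of the original evaluation):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("sail/data-mixture-human-1b", revision="seed-1")
tokenizer = AutoTokenizer.from_pretrained("sail/data-mixture-human-1b", revision="seed-1")

# Greedy decoding of a short continuation.
inputs = tokenizer("The Pile is a large-scale dataset for", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```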

## Data Mixture

The specific data mixture used for training this 1B model is as follows; it can also be found in [our code](https://github.com/sail-sg/regmix/blob/main/mixture_config/config_1b/human.yaml):

```yaml
train:
  train_the_pile_arxiv: 0.1052
  train_the_pile_freelaw: 0.0386
  train_the_pile_nih_exporter: 0.0052
  train_the_pile_pubmed_central: 0.1071
  train_the_pile_wikipedia_en: 0.0919
  train_the_pile_dm_mathematics: 0.0198
  train_the_pile_github: 0.0427
  train_the_pile_philpapers: 0.0027
  train_the_pile_stackexchange: 0.0929
  train_the_pile_enron_emails: 0.0030
  train_the_pile_gutenberg_pg_19: 0.0199
  train_the_pile_pile_cc: 0.1121
  train_the_pile_ubuntu_irc: 0.0074
  train_the_pile_europarl: 0.0043
  train_the_pile_hackernews: 0.0075
  train_the_pile_pubmed_abstracts: 0.0845
  train_the_pile_uspto_backgrounds: 0.0420
valid:
  valid_the_pile_pile_cc: 1.0
model_name: tinyllama_1_1b
```

## Model Variants

To access a different model variant, simply change the `revision` parameter in the `from_pretrained` call to the desired seed (e.g., "seed-2", "seed-3"); the maximum seed is 5.
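
For instance, a minimal sketch that loads all five variants in turn, reusing the repository name and branch pattern from the loading example above:

```python
from transformers import AutoModel, AutoTokenizer

repo = "sail/data-mixture-human-1b"
for seed in range(1, 6):
    revision = f"seed-{seed}"
    # Each branch holds the checkpoint trained with one random seed.
    model = AutoModel.from_pretrained(repo, revision=revision)
    tokenizer = AutoTokenizer.from_pretrained(repo, revision=revision)
    print(f"Loaded {revision} with {model.num_parameters():,} parameters")
```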

## Model Performance

We evaluated each model using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The performance metric for each task is the average of the 0-shot to 5-shot `acc_norm` (normalized accuracy, if available) or `acc` (accuracy) scores.

| Seed | PIQA | LAMBADA | MultiRC | LogiQA | SocialIQA | Winogrande | RACE | OpenBookQA | COPA | HellaSwag | SciQ | ARC Easy | QQP | Average |
|------|------|---------|---------|--------|-----------|------------|------|------------|------|-----------|------|----------|-----|---------|
| 1 | 65.00 | 29.83 | 54.28 | 25.47 | 33.61 | 53.06 | 28.98 | 28.17 | 66.67 | 37.43 | 80.13 | 49.40 | 52.42 | 46.50 |
| 2 | 65.03 | 26.69 | 53.24 | 25.31 | 33.69 | 52.52 | 29.42 | 28.76 | 63.00 | 37.68 | 82.58 | 51.36 | 58.46 | 46.75 |
| 3 | 65.57 | 28.47 | 54.18 | 25.68 | 34.24 | 52.31 | 30.12 | 28.00 | 65.80 | 37.90 | 82.48 | 49.34 | 56.53 | 46.97 |
| 4 | 65.45 | 26.88 | 51.42 | 24.92 | 34.16 | 50.50 | 29.93 | 28.92 | 62.40 | 37.70 | 80.66 | 49.27 | 58.06 | 46.17 |
| 5 | 66.67 | 29.56 | 51.58 | 26.94 | 33.22 | 51.78 | 29.03 | 28.56 | 65.00 | 37.69 | 81.78 | 50.38 | 52.60 | 46.52 |
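
The "Average" column appears to be the unweighted mean of the 13 task scores in each row; a quick sanity check for seed 1, using the numbers from the table above:

```python
# Per-task scores for seed 1, copied from the table above.
seed_1_scores = [65.00, 29.83, 54.28, 25.47, 33.61, 53.06, 28.98,
                 28.17, 66.67, 37.43, 80.13, 49.40, 52.42]

average = sum(seed_1_scores) / len(seed_1_scores)
print(f"Seed 1 average: {average:.2f}")  # -> 46.50
```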

## Usage Notes

- These models are primarily intended for research purposes.
- Performance may vary depending on the specific task and domain.

## Citation

If you use these models in your research, please cite the RegMix paper:

```
@misc{liu2024regmix,
      title={RegMix: Data Mixture as Regression for Language Model Pre-training},
      author={Qian Liu and Xiaosen Zheng and Niklas Muennighoff and Guangtao Zeng and Longxu Dou and Tianyu Pang and Jing Jiang and Min Lin},
      year={2024},
      eprint={2407.01492},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.01492},
}
```

For more information about the RegMix methodology and its applications, please refer to the [original paper](https://huggingface.co/papers/2407.01492).