---
license: mit
language:
- en
tags:
- regmix
pretty_name: regmix-data
size_categories:
- 10M<n<100M
---

# RegMix Data

## Dataset Description

RegMix Data is a curated dataset derived from Pile-Uncopyrighted and built specifically for the RegMix paper (https://huggingface.co/papers/2407.01492). The dataset supports the automatic identification of high-performing data mixtures for language model pre-training by formulating mixture selection as a regression task.

### Key Features

- Size: approximately 1TB of disk space, 250B tokens
- Distribution: follows the natural token distribution of the domain examples
- Organization: examples from different domains are separated into individual files

## Dataset Structure

The dataset is organized into two main directories, `train` and `valid`, each containing domain-specific JSONL files. The file naming convention is:

```
[domain]-[identifier]-[number].jsonl
```

For example: `arxiv-10-74305611.jsonl`

### Domains Included

arxiv, gutenberg_pg_19, pubmed_central, dm_mathematics, hackernews, stackexchange, enron_emails, nih_exporter, ubuntu_irc, europarl, philpapers, uspto_backgrounds, freelaw, pile_cc, wikipedia_en, github, pubmed_abstracts

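To make the layout concrete, the minimal sketch below groups the files of the `train` split by their domain prefix. It assumes the snapshot has already been downloaded to a local `regmix-data` directory (see the Usage section below); the directory path is the only assumption beyond the naming convention above.

```python
from collections import defaultdict
from pathlib import Path

# Assumes the snapshot lives in a local "regmix-data" directory (see Usage below).
DATA_DIR = Path("regmix-data")

# Group train-split files by the domain prefix encoded in the file name:
# [domain]-[identifier]-[number].jsonl
files_by_domain = defaultdict(list)
for path in sorted((DATA_DIR / "train").glob("*.jsonl")):
    domain = path.name.rsplit("-", 2)[0]   # "arxiv-10-74305611.jsonl" -> "arxiv"
    files_by_domain[domain].append(path.name)

for domain, files in sorted(files_by_domain.items()):
    print(f"{domain}: {len(files)} file(s)")
```
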
## Usage

We recommend downloading the entire dataset snapshot instead of using the traditional `load_dataset` function, as the RegMix code is integrated with the [TinyLlama framework](https://github.com/jzhang38/TinyLlama).

To download the dataset:

```python
from huggingface_hub import snapshot_download

LOCAL_DIR = "regmix-data"
snapshot_download(repo_id="sail/regmix-data",
                  repo_type="dataset",
                  local_dir=LOCAL_DIR,
                  local_dir_use_symlinks=False)
```

This will download the entire snapshot, containing 34 JSON Lines files (17 for train and 17 for valid), to your specified local directory.
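
Each line in these files is a single JSON object. As a quick sanity check after downloading, the sketch below reads the first record of one train file; the glob pattern and the assumption that the document text lives under a Pile-style `text` field are illustrative, so inspect the keys of a record before relying on a particular schema.

```python
import json
from pathlib import Path

# Assumes the snapshot was downloaded to "regmix-data" as in the example above.
sample_file = next(Path("regmix-data/train").glob("arxiv-*.jsonl"))

with open(sample_file, encoding="utf-8") as f:
    record = json.loads(f.readline())

print(sample_file.name)
print(list(record.keys()))                 # inspect the actual schema
print(str(record.get("text", ""))[:200])   # Pile-style records usually carry a "text" field (assumption)
```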

## Data Preprocessing

Our [code](https://github.com/sail-sg/regmix) preprocesses these domain files into a binary format with domain prefixes, which allows the dataset to be randomly sampled according to user-defined data mixtures (i.e., domain weights).
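
The actual preprocessing and sampling logic lives in the RegMix repository; the snippet below is only a minimal sketch of the mixture-weighted sampling idea, with made-up domain weights, and is not the project's implementation.

```python
import random

# Illustrative data mixture (domain weights); the values here are made up.
mixture = {"pile_cc": 0.5, "arxiv": 0.3, "github": 0.2}

rng = random.Random(0)
domains = list(mixture)
weights = [mixture[d] for d in domains]

# Draw 10,000 domain choices in proportion to the mixture weights; during
# pre-training, each draw would correspond to taking the next example from
# that domain's preprocessed files.
draws = rng.choices(domains, weights=weights, k=10_000)
print({d: draws.count(d) for d in domains})  # roughly 5000 / 3000 / 2000
```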

## Acknowledgements

We extend our gratitude to the creators of the [Pile-Uncopyrighted dataset](https://huggingface.co/datasets/monology/pile-uncopyrighted) for their efforts in removing copyrighted content from the original Pile dataset, making this work possible.

## Citation

If you use this dataset in your research, please cite the RegMix paper:

```bibtex
@misc{liu2024regmix,
      title={RegMix: Data Mixture as Regression for Language Model Pre-training},
      author={Qian Liu and Xiaosen Zheng and Niklas Muennighoff and Guangtao Zeng and Longxu Dou and Tianyu Pang and Jing Jiang and Min Lin},
      year={2024},
      eprint={2407.01492},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.01492},
}
```

For more information about the RegMix methodology and its applications, please refer to the [original paper](https://huggingface.co/papers/2407.01492).