DimensionSTP committed on
Commit fc6c627
1 Parent(s): 35eaad6

Delete README_original.md

Files changed (1)
  1. README_original.md +0 -125
README_original.md DELETED
@@ -1,125 +0,0 @@
---
language:
- ko
- en
pipeline_tag: text-generation
inference: false
tags:
- solar
- mistral
- pytorch
- solar-ko
library_name: transformers
license: apache-2.0
---

**Update Log**

- 2024.01.08: Initial test version release of Solar-Ko

# **Open-Solar-Ko** ⭐🇰🇷

Solar-Ko is an advanced iteration of the upstage/SOLAR-10.7B-v1.0 model, featuring an expanded vocabulary and additional pretraining on a Korean corpus.

Open-Solar-Ko uses only publicly accessible Korean corpora, including [AI Hub](https://www.aihub.or.kr), [Modu Corpus, 모두의 말뭉치](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/).

Because training was conducted solely with publicly available corpora, the model is open for unrestricted use by everyone under the Apache 2.0 open-source license.

## Model Details

**Model Developers:** Junbum Lee (Beomi)

**Variations:** Solar-Ko is available in a single parameter size: a 10.7B continually pretrained version.

**Input:** The model accepts text input only.

**Output:** The model produces text output only.

**Model Architecture:**

SOLAR-KO-10.7B is an auto-regressive language model that uses an optimized transformer architecture derived from Llama-2.

| | Training Data | Parameters | Content Length | GQA | Tokens | Learning Rate |
|---|---|---|---|---|---|---|
| SOLAR-KO-10.7B | *A curated mix of publicly accessible Korean corpora* | 10.7B | 4k | Yes | >15B* | 5e-5 |

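Since the card specifies `library_name: transformers`, the checkpoint can presumably be loaded through the standard causal-LM API. A minimal sketch, assuming the Hub ID `beomi/SOLAR-KO-10.7B` from the citation below and fp16 weights on a single large GPU:

```python
# Minimal loading/generation sketch (Hub ID taken from the citation section;
# adjust dtype/device_map to your hardware).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/SOLAR-KO-10.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 to fit a 10.7B model on a single large GPU
    device_map="auto",
)

# This is a plain pretrained LM (no chat template), so prompt it with raw text.
prompt = "대한민국의 수도는"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
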
**Training Corpus**

The model was trained on selected datasets from AI Hub and Modu Corpus. Detailed information about the training datasets is available below:

- AI Hub: [corpus/AI_HUB](./corpus/AI_HUB)
  - Only the `Training` segment of the data was used.
  - The `Validation` and `Test` segments were deliberately excluded.
- Modu Corpus: [corpus/MODU_CORPUS](./corpus/MODU_CORPUS)

The final JSONL dataset used to train this model is approximately 61GB in size.

Total token count: approximately 15 billion tokens (*counted with the expanded tokenizer; with the original SOLAR tokenizer, the same corpus is >60 billion tokens).

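As a rough illustration (not the authors' exact tooling), a token count like the one above can be reproduced by streaming the JSONL corpus through the tokenizer; the file name `corpus.jsonl` and the `text` field are assumptions:

```python
# Hypothetical token-counting sketch for a JSONL corpus with a "text" field.
# File path and field name are illustrative assumptions, not part of this repo.
import json
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("beomi/SOLAR-KO-10.7B")

total_tokens = 0
with open("corpus.jsonl", encoding="utf-8") as f:
    for line in f:
        doc = json.loads(line)
        # add_special_tokens=False so BOS/EOS do not inflate the corpus count
        total_tokens += len(tokenizer.encode(doc["text"], add_special_tokens=False))

print(f"total tokens: {total_tokens:,}")
```
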
**Vocab Expansion**

| Model Name | Vocabulary Size | Description |
| --- | --- | --- |
| Original SOLAR | 32000 | SentencePiece BPE |
| **Expanded SOLAR-KO-10.7B** | 46592 | SentencePiece BPE, with added Korean vocabulary and merges |

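The card does not describe the expansion procedure itself; a common recipe in `transformers` is to add the newly learned Korean pieces to the tokenizer and resize the model's embeddings so the extra rows can be trained during continual pretraining. A minimal sketch under that assumption (the token list is purely illustrative):

```python
# Generic vocab-expansion sketch, NOT the authors' exact procedure:
# add new tokens to the tokenizer, then resize the embedding/output layers
# so the new rows can be learned during continual pretraining.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "upstage/SOLAR-10.7B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Illustrative placeholder: in practice these would be the ~14.6k Korean
# pieces learned from the Korean corpus (46592 - 32000 = 14592).
new_korean_tokens = ["▁안녕", "하세요", "▁오늘은", "▁날씨"]
num_added = tokenizer.add_tokens(new_korean_tokens)

# Grow the input embeddings (and tied LM head) to match the new vocab size.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens; new vocab size = {len(tokenizer)}")
```
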
**Tokenizing "안녕하세요, 오늘은 날씨가 좋네요."**

- SOLAR-10.7B: 26 tokens
- SOLAR-KO-10.7B: 8 tokens

| Model | Tokens |
| --- | --- |
| SOLAR-10.7B | `['▁', '안', '<0xEB>', '<0x85>', '<0x95>', '하', '세', '요', ',', '▁', '오', '<0xEB>', '<0x8A>', '<0x98>', '은', '▁', '날', '<0xEC>', '<0x94>', '<0xA8>', '가', '▁', '좋', '네', '요', '.']` |
| SOLAR-KO-10.7B | `['▁안녕', '하세요', ',', '▁오늘은', '▁날', '씨가', '▁좋네요', '.']` |

**Tokenizing "Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!"**

- SOLAR-10.7B: 22 tokens
- SOLAR-KO-10.7B: 22 tokens

| Model | Tokens |
| --- | --- |
| SOLAR-10.7B | `['▁Meet', '▁', '1', '0', '.', '7', 'B', '▁Solar', ':', '▁E', 'lev', 'ating', '▁Performance', '▁with', '▁Up', 'stage', '▁Dep', 'th', '▁UP', '▁Scal', 'ing', '!']` |
| SOLAR-KO-10.7B | `['▁Meet', '▁', '1', '0', '.', '7', 'B', '▁Solar', ':', '▁E', 'lev', 'ating', '▁Performance', '▁with', '▁Up', 'stage', '▁Dep', 'th', '▁UP', '▁Scal', 'ing', '!']` |

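The comparison above can be reproduced directly with the two tokenizers; a short sketch, assuming both checkpoints are available on the Hugging Face Hub under the IDs used elsewhere in this card:

```python
# Reproduce the tokenization comparison above with both tokenizers.
from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-v1.0")
solar_ko = AutoTokenizer.from_pretrained("beomi/SOLAR-KO-10.7B")

for text in [
    "안녕하세요, 오늘은 날씨가 좋네요.",
    "Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!",
]:
    for name, tok in [("SOLAR-10.7B", base), ("SOLAR-KO-10.7B", solar_ko)]:
        pieces = tok.tokenize(text)
        print(f"{name}: {len(pieces)} tokens -> {pieces}")
```
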
# LICENSE

Apache 2.0

# **Model Benchmark**

## LM Eval Harness - Korean (polyglot branch)

- Evaluated with EleutherAI's lm-evaluation-harness, polyglot branch: https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot (a reproduction sketch follows the results table).

| Task (metric) | 0-shot | 5-shot | 10-shot | 50-shot |
|:---------------------------------|---------:|---------:|---------:|---------:|
| kobest_boolq (macro_f1) | 0.853949 | 0.88098 | 0.898139 | 0.902354 |
| kobest_copa (macro_f1) | 0.804531 | 0.826736 | 0.837656 | 0.860899 |
| kobest_hellaswag (macro_f1) | 0.507174 | 0.500983 | 0.487287 | 0.512182 |
| kobest_sentineg (macro_f1) | 0.3517 | 0.972291 | 0.977321 | 0.984884 |
| kohatespeech (macro_f1) | 0.258111 | 0.403957 | 0.386808 | 0.462393 |
| kohatespeech_apeach (macro_f1) | 0.337667 | 0.651697 | 0.705337 | 0.827757 |
| kohatespeech_gen_bias (macro_f1) | 0.124535 | 0.503464 | 0.498501 | 0.443218 |
| korunsmile (f1) | 0.3814 | 0.356939 | 0.369989 | 0.296193 |
| nsmc (acc) | 0.5356 | 0.87162 | 0.88654 | 0.89632 |
| pawsx_ko (acc) | 0.5435 | 0.5245 | 0.5315 | 0.5385 |

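The exact evaluation command is not given in the card. Assuming the polyglot branch keeps the harness's classic Python entry point (`evaluator.simple_evaluate`) and registers the Korean task names shown in the table, an equivalent run might look like this sketch:

```python
# Hedged sketch: assumes the polyglot branch exposes the classic
# lm-evaluation-harness Python API and the task names listed above.
# Verify flags and task registration against that branch before relying on this.
import json
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=beomi/SOLAR-KO-10.7B",
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg"],
    num_fewshot=5,   # the table reports 0/5/10/50-shot settings
    device="cuda:0",
)
print(json.dumps(results["results"], indent=2))
```
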
## Citation

```
@misc{solar_ko_junbum_2023,
  author    = {{L. Junbum}},
  title     = {Solar-Ko-10.7b},
  year      = {2024},
  url       = {https://huggingface.co/beomi/SOLAR-KO-10.7B},
  publisher = {Hugging Face}
}
```

## Acknowledgements

- Training support was provided by the [TPU Research Cloud](https://sites.research.google/trc/) program.
- The training corpus includes data from [AI Hub](https://www.aihub.or.kr/), [Modu Corpus](https://corpus.korean.go.kr/), and [Korean Wikipedia](https://dumps.wikimedia.org/kowiki/).