RichardErkhov committed
Commit 1b15f1c
1 Parent(s): 6f69179

uploaded readme

Browse files
Files changed (1) hide show
  1. README.md +213 -0
README.md ADDED
@@ -0,0 +1,213 @@

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Aika-7B - GGUF
- Model creator: https://huggingface.co/sethuiyer/
- Original model: https://huggingface.co/sethuiyer/Aika-7B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Aika-7B.Q2_K.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q2_K.gguf) | Q2_K | 2.53GB |
| [Aika-7B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Aika-7B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Aika-7B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Aika-7B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Aika-7B.Q3_K.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q3_K.gguf) | Q3_K | 3.28GB |
| [Aika-7B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Aika-7B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Aika-7B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Aika-7B.Q4_0.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Aika-7B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Aika-7B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Aika-7B.Q4_K.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q4_K.gguf) | Q4_K | 4.07GB |
| [Aika-7B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Aika-7B.Q4_1.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Aika-7B.Q5_0.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Aika-7B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Aika-7B.Q5_K.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q5_K.gguf) | Q5_K | 4.78GB |
| [Aika-7B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Aika-7B.Q5_1.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Aika-7B.Q6_K.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q6_K.gguf) | Q6_K | 5.53GB |
| [Aika-7B.Q8_0.gguf](https://huggingface.co/RichardErkhov/sethuiyer_-_Aika-7B-gguf/blob/main/Aika-7B.Q8_0.gguf) | Q8_0 | 7.17GB |

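A quick usage sketch (not part of the original model card): any file from the table above can be downloaded from this repo and run locally through the `llama-cpp-python` bindings for llama.cpp. The snippet below assumes `huggingface_hub` and `llama-cpp-python` are installed and uses the Q4_K_M file as an example; the plain-text prompt format and generation settings are assumptions, so adjust them to your preferred chat template.

```python
# Minimal sketch: fetch one GGUF quant from this repo and run a short completion.
# The repo id and filename come from the table above; generation settings are examples.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/sethuiyer_-_Aika-7B-gguf",
    filename="Aika-7B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)  # set n_gpu_layers > 0 for a GPU build
out = llm("Q: What makes a good personal assistant?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```

As the table suggests, the lower-bit quants (Q2_K, IQ3_*) trade accuracy for memory, while Q8_0 stays closest to the original weights at roughly twice the size of the Q4 files.
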
Original model description:
---
language:
- en
license: cc
library_name: transformers
tags:
- mergekit
- merge
datasets:
- Anthropic/hh-rlhf
base_model:
- SanjiWatsuki/Silicon-Maid-7B
- Guilherme34/Samantha-v2
- jan-hq/stealth-v1.3
- mitultiwari/mistral-7B-instruct-dpo
- senseable/WestLake-7B-v2
model-index:
- name: sethuiyer/Aika-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.36
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 81.49
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 53.91
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 51.22
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.74
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 25.78
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=sethuiyer/Aika-7B
      name: Open LLM Leaderboard
---
# Aika-7B

<p align="center">
  <img src="https://huggingface.co/sethuiyer/Aika-7B/resolve/main/aika.webp" height="128px" alt="Aika">
</p>

Aika is a language model constructed with the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using [mitultiwari/mistral-7B-instruct-dpo](https://huggingface.co/mitultiwari/mistral-7B-instruct-dpo) as the base. Aika is designed to interact with users in a way that feels natural and human-like, to solve problems and answer questions with a high degree of accuracy and truthfulness, and to engage in creative and logical tasks with proficiency.

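For intuition only, here is a toy sketch of the DARE drop-and-rescale step. It is not the actual merge recipe (the real densities and weights live in the mergekit config linked under Configuration below, and the TIES sign-resolution step is omitted): each fine-tuned model's delta from the base is computed, most delta entries are randomly dropped, the survivors are rescaled, and the sparse deltas are added back onto the base weights. The tensors and drop rate below are placeholders.

```python
# Toy illustration of DARE-style drop-and-rescale, not the mergekit implementation.
# drop_rate and the random tensors are placeholders, not values used for Aika-7B.
import torch

def dare_sketch(base, finetuned_models, drop_rate=0.9, seed=0):
    torch.manual_seed(seed)
    merged = {name: w.clone() for name, w in base.items()}
    for model in finetuned_models:
        for name, w in model.items():
            delta = w - base[name]                      # task vector relative to the base
            keep = torch.rand_like(delta) >= drop_rate  # keep roughly (1 - drop_rate) of the entries
            merged[name] += delta * keep.to(delta.dtype) / (1.0 - drop_rate)  # rescale survivors
    return merged

# Placeholder tensors standing in for real model weights.
base = {"layer.weight": torch.zeros(4, 4)}
finetuned = [{"layer.weight": torch.randn(4, 4)} for _ in range(2)]
print(dare_sketch(base, finetuned)["layer.weight"])
```
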
### Models Merged

The following models were included in the merge:
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [Guilherme34/Samantha-v2](https://huggingface.co/Guilherme34/Samantha-v2)
* [jan-hq/stealth-v1.3](https://huggingface.co/jan-hq/stealth-v1.3)
* [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2)

The base model is Mistral-7B v0.1 fine-tuned on [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf).

### Why?
- **Base model tuned on the Anthropic RLHF dataset**: Safe AI as a base model, to balance the uncensored models below.
- **Silicon-Maid-7B**: Boasts excellent multi-turn conversational skills and logical coherence, ensuring smooth interactions.
- **Samantha-V2**: Offers empathy and human-like responses, equipped with programmed "self-awareness" for a more personalized experience.
- **Stealth-V1.3**: Known for enhancing performance in merges when integrated as a component, optimizing Aika's functionality.
- **WestLake-7B-V2**: Sets a high benchmark for emotional intelligence (EQ) and excels in creative writing, enhancing Aika's ability to understand and respond to your needs.

Combine them all:
![img](https://huggingface.co/sethuiyer/Aika-7B/resolve/main/hoho.webp)

[Source](https://powerpuffgirls.fandom.com/wiki/The_Powerpuff_Girls_theme_song?file=Professor_Utonium_Mixing_Stew.png)

You get Aika - a considerate, personal digital assistant.

### Configuration

Please check [mergekit_config.yml](https://huggingface.co/sethuiyer/Aika-7B/blob/main/mergekit_config.yml) for the merge config.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sethuiyer__Aika-7B)

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 59.25 |
| AI2 Reasoning Challenge (25-Shot) | 65.36 |
| HellaSwag (10-Shot)               | 81.49 |
| MMLU (5-Shot)                     | 53.91 |
| TruthfulQA (0-shot)               | 51.22 |
| Winogrande (5-shot)               | 77.74 |
| GSM8k (5-shot)                    | 25.78 |
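
For reference, the Avg. row is the arithmetic mean of the six benchmark scores: (65.36 + 81.49 + 53.91 + 51.22 + 77.74 + 25.78) / 6 = 59.25.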