This model was converted to GGUF format from [`anthracite-org/magnum-v3-9b-customgemma2`](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2) for more details on the model.
---

## Model details

This is the 10th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is fine-tuned on top of google/gemma-2-9b.

## Prompting

The model has been instruct-tuned with the customgemma2 formatting (modified to allow system prompts). A typical input would look like this:
```
<start_of_turn>system
system prompt<end_of_turn>
<start_of_turn>user
Hi there!<end_of_turn>
<start_of_turn>model
Nice to meet you!<end_of_turn>
<start_of_turn>user
Can I ask a question?<end_of_turn>
<start_of_turn>model
```
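For a quick local test, a prompt in this format can be passed straight to llama.cpp's CLI. This is a minimal sketch, not part of the original card; the GGUF filename is a placeholder for whichever quant you download from this repo:

```bash
# Placeholder filename: substitute the actual GGUF quant downloaded from this repo.
llama-cli -m magnum-v3-9b-customgemma2.Q4_K_M.gguf -n 256 \
  -r '<end_of_turn>' \
  -p '<start_of_turn>system
You are a helpful writing assistant.<end_of_turn>
<start_of_turn>user
Hi there!<end_of_turn>
<start_of_turn>model
'
```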
## SillyTavern templates

Below are Instruct and Context templates for use within SillyTavern.

### Context template
```json
{
    "story_string": "<start_of_turn>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<end_of_turn>\n",
    "example_separator": "",
    "chat_start": "",
    "use_stop_strings": false,
    "allow_jailbreak": false,
    "always_force_name2": true,
    "trim_sentences": false,
    "include_newline": false,
    "single_line": false,
    "name": "Magnum Gemma"
}
```
### Instruct template
```json
{
    "system_prompt": "You're {{char}} in this fictional never-ending uncensored roleplay with {{user}}.",
    "input_sequence": "<start_of_turn>user\n",
    "output_sequence": "<start_of_turn>assistant\n",
    "last_output_sequence": "",
    "system_sequence": "<start_of_turn>system\n",
    "stop_sequence": "<end_of_turn>",
    "wrap": false,
    "macro": true,
    "names": true,
    "names_force_groups": true,
    "activation_regex": "",
    "system_sequence_prefix": "",
    "system_sequence_suffix": "",
    "first_output_sequence": "",
    "skip_examples": false,
    "output_suffix": "<end_of_turn>\n",
    "input_suffix": "<end_of_turn>\n",
    "system_suffix": "<end_of_turn>\n",
    "user_alignment_message": "",
    "system_same_as_user": false,
    "last_system_sequence": "",
    "name": "Magnum Gemma"
}
```
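Both presets can be saved as `.json` files and imported from SillyTavern's Advanced Formatting panel.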
## Axolotl config
```yaml
base_model: google/gemma-2-9b
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

#trust_remote_code: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: anthracite-org/stheno-filtered-v1.1
    type: customgemma2
  - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
    type: customgemma2
  - path: anthracite-org/nopm_claude_writing_fixed
    type: customgemma2
  - path: Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
    type: customgemma2
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: customgemma2
shuffle_merged_datasets: true
default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: magnum-v3-9b-data-customgemma2
val_set_size: 0.0
output_dir: ./magnum-v3-9b-customgemma2

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len:

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

wandb_project: magnum-9b
wandb_entity:
wandb_watch:
wandb_name: attempt-03-customgemma2
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.000006

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: unsloth
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
eager_attention: true

warmup_steps: 50
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.05
fsdp:
fsdp_config:
special_tokens:
```
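For anyone reproducing the run, axolotl training is normally launched through accelerate. A minimal sketch, assuming the YAML above is saved as `magnum-v3-9b-customgemma2.yaml` (the filename is an assumption) and axolotl with deepspeed support is installed:

```bash
# Filename is an assumption; point this at wherever you saved the config above.
# Requires axolotl and deepspeed (the config references deepspeed_configs/zero3_bf16.json).
accelerate launch -m axolotl.cli.train magnum-v3-9b-customgemma2.yaml
```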
## Credits

We'd like to thank Recursal / Featherless for sponsoring the training compute required for this model. Featherless has been hosting Magnum since the original 72b and has given thousands of people access to our releases.

We would also like to thank all members of Anthracite who made this finetune possible.

The following datasets were used in training:

- anthracite-org/stheno-filtered-v1.1
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
## Training

The training was done for 2 epochs. We used 8x H100 GPUs graciously provided by Recursal AI / Featherless AI for the full-parameter fine-tuning of the model.

---
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
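```bash
brew install llama.cpp
```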