Quantization made by Richard Erkhov.
MN-12B-Starsong-v1 - GGUF
- Model creator: https://huggingface.co/aetherwiing/
- Original model: https://huggingface.co/aetherwiing/MN-12B-Starsong-v1/
| Name | Quant method | Size |
| --- | --- | --- |
| MN-12B-Starsong-v1.Q2_K.gguf | Q2_K | 4.46GB |
| MN-12B-Starsong-v1.IQ3_XS.gguf | IQ3_XS | 4.94GB |
| MN-12B-Starsong-v1.IQ3_S.gguf | IQ3_S | 5.18GB |
| MN-12B-Starsong-v1.Q3_K_S.gguf | Q3_K_S | 5.15GB |
| MN-12B-Starsong-v1.IQ3_M.gguf | IQ3_M | 5.33GB |
| MN-12B-Starsong-v1.Q3_K.gguf | Q3_K | 5.67GB |
| MN-12B-Starsong-v1.Q3_K_M.gguf | Q3_K_M | 5.67GB |
| MN-12B-Starsong-v1.Q3_K_L.gguf | Q3_K_L | 6.11GB |
| MN-12B-Starsong-v1.IQ4_XS.gguf | IQ4_XS | 6.33GB |
| MN-12B-Starsong-v1.Q4_0.gguf | Q4_0 | 6.59GB |
| MN-12B-Starsong-v1.IQ4_NL.gguf | IQ4_NL | 6.65GB |
| MN-12B-Starsong-v1.Q4_K_S.gguf | Q4_K_S | 6.63GB |
| MN-12B-Starsong-v1.Q4_K.gguf | Q4_K | 6.96GB |
| MN-12B-Starsong-v1.Q4_K_M.gguf | Q4_K_M | 6.96GB |
| MN-12B-Starsong-v1.Q4_1.gguf | Q4_1 | 7.26GB |
| MN-12B-Starsong-v1.Q5_0.gguf | Q5_0 | 7.93GB |
| MN-12B-Starsong-v1.Q5_K_S.gguf | Q5_K_S | 7.93GB |
| MN-12B-Starsong-v1.Q5_K.gguf | Q5_K | 8.13GB |
| MN-12B-Starsong-v1.Q5_K_M.gguf | Q5_K_M | 8.13GB |
| MN-12B-Starsong-v1.Q5_1.gguf | Q5_1 | 8.61GB |
| MN-12B-Starsong-v1.Q6_K.gguf | Q6_K | 9.37GB |
| MN-12B-Starsong-v1.Q8_0.gguf | Q8_0 | 12.13GB |
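A minimal sketch of fetching one of the quants above and running it locally with llama-cpp-python. The repo id below is illustrative (check the actual quantized repo on the hub), and both `huggingface_hub` and `llama-cpp-python` are assumed to be installed:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical repo id; substitute the real quantized repo on the hub.
model_path = hf_hub_download(
    repo_id="RichardErkhov/aetherwiing_-_MN-12B-Starsong-v1-gguf",
    filename="MN-12B-Starsong-v1.Q4_K_M.gguf",  # a mid-size quant from the table above
)

# Load the GGUF file and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("[INST] Write a short poem about starlight. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```

Q4_K_M is a reasonable quality/size trade-off for most hardware; pick a smaller quant from the table if VRAM or RAM is tight.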
Original model description:

```yaml
base_model:
  - nothingiisreal/MN-12B-Celeste-V1.9
  - Sao10K/MN-12B-Lyra-v1
library_name: transformers
tags:
  - mergekit
  - merge
```
Mistral Nemo 12B Starsong
This is a merge of pre-trained language models created using mergekit. Just messing around; I thought this one turned out distinct enough, though YMMV on that. I kinda liked it. It's definitely better for SFW than for NSFW, though. It also seems to be more stable with Mistral formatting, which is weird, since both merged models were trained with ChatML. I guess we chalk it up to another merging anomaly, heh.
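Since that formatting note may matter when setting up a frontend, here is what the two styles look like. This is an illustrative sketch only; the exact special tokens depend on the tokenizer config:

```python
# Mistral instruct style (reportedly more stable with this merge):
mistral_prompt = "[INST] Tell me about the stars. [/INST]"

# ChatML style (what both parent models were trained on):
chatml_prompt = (
    "<|im_start|>user\n"
    "Tell me about the stars.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```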
Unlicensed because I got enlightened (lmao) and don't want to license machine output anymore.
Thanks to Sao for Lyra, by the way; I'm really liking the direction those experimental models are heading. Keep it up!
- Static GGUF (by Mradermacher)
- EXL2 (by kingbri of RoyalLab)
Merge Details
Merge Method
This model was merged using the TIES merge method, with nothingiisreal/MN-12B-Celeste-V1.9 as the base.
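For intuition, TIES merging works in three steps per parameter tensor: trim each fine-tune's delta from the base to its highest-magnitude entries, elect a per-parameter sign, and average only the deltas that agree with that sign. A toy sketch on flat vectors follows; this is an illustration of the idea, not mergekit's actual implementation:

```python
import numpy as np

def ties_merge(base, finetunes, densities, weights):
    """Toy TIES merge on flat parameter vectors."""
    trimmed = []
    for ft, density in zip(finetunes, densities):
        delta = ft - base                      # "task vector" for this model
        k = max(int(len(delta) * density), 1)  # keep top `density` fraction by magnitude
        thresh = np.sort(np.abs(delta))[-k]
        trimmed.append(np.where(np.abs(delta) >= thresh, delta, 0.0))
    # Elect a per-parameter sign from the weighted sum of trimmed deltas.
    elected = np.sign(sum(w * d for w, d in zip(weights, trimmed)))
    # Average only the deltas whose sign agrees with the elected one.
    num = sum(w * np.where(np.sign(d) == elected, d, 0.0) for w, d in zip(weights, trimmed))
    den = sum(w * (np.sign(d) == elected) for w, d in zip(weights, trimmed))
    return base + num / np.maximum(den, 1e-9)
```

In the configuration below, `density` (0.45 for Lyra, 0.65 for Celeste) controls the trim step and `weight` controls the averaging.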
Merge Fodder
The following models were included in the merge:

- Sao10K/MN-12B-Lyra-v1
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Sao10K/MN-12B-Lyra-v1
    parameters:
      density: 0.45
      weight: 0.5
  - model: nothingiisreal/MN-12B-Celeste-V1.9
    parameters:
      density: 0.65
      weight: 0.5
merge_method: ties
base_model: nothingiisreal/MN-12B-Celeste-V1.9
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
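Reproducing the merge from that config should be a single mergekit invocation. A minimal sketch, wrapped in Python and assuming mergekit is installed and the YAML above is saved as `starsong.yaml` (a hypothetical filename):

```python
import subprocess

# mergekit-yaml takes the config path and an output directory.
subprocess.run(
    ["mergekit-yaml", "starsong.yaml", "./MN-12B-Starsong-v1"],
    check=True,
)
```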