Sampling:
Mistral-Nemo-12B is very sensitive to the temperature sampler; start with values near 0.3, or the output can get strange. Mistral AI mentions this in the Transformers section of the original model card.
Original Model: intervitens/mini-magnum-12b-v1.1
How to Use: llama.cpp
Original Model License: Apache 2.0
llama.cpp Release Used: b3452
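Below is a minimal sketch of running one of these quants with llama-cpp-python, a common Python binding for llama.cpp. The filename and prompt are placeholders rather than part of this release; point `model_path` at whichever quant you downloaded, and note the low temperature recommended above.

```python
from llama_cpp import Llama

# Load a quantized GGUF file (the filename here is an assumed example).
llm = Llama(
    model_path="mini-magnum-12b-v1.1-Q4_K_M.gguf",
    n_ctx=4096,  # context length; raise it if you have the memory
)

# Mistral-Nemo-12B is temperature-sensitive, so start near 0.3.
out = llm(
    "Write a short scene set on a night train.",
    max_tokens=256,
    temperature=0.3,
)
print(out["choices"][0]["text"])
```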
Quants
PPL = perplexity; lower is better.
Comparisons were done as QX_X Llama-3-8B against FP16 Llama-3-8B, so treat them as a rough guideline for quant quality rather than exact figures for this model.
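For reference, perplexity over a tokenized evaluation text is the exponentiated average negative log-likelihood (the standard definition, not specific to this card), so each entry in the table below is the increase in this value relative to FP16:

$$
\mathrm{PPL} = \exp\!\left(-\frac{1}{N}\sum_{i=1}^{N}\log p\!\left(x_i \mid x_{<i}\right)\right)
$$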
| Quant Type | PPL Increase vs. FP16 | Size |
|---|---|---|
| Q2_K | +3.5199 ppl @ Llama-3-8B | 4.79 GB |
| Q3_K_S | +1.6321 ppl @ Llama-3-8B | 5.53 GB |
| Q3_K_M | +0.6569 ppl @ Llama-3-8B | 6.08 GB |
| Q3_K_L | +0.5562 ppl @ Llama-3-8B | 6.56 GB |
| Q4_K_S | +0.5562 ppl @ Llama-3-8B | 7.12 GB |
| Q4_K_M | +0.1754 ppl @ Llama-3-8B | 7.48 GB |
| Q5_K_S | +0.1049 ppl @ Llama-3-8B | 8.52 GB |
| Q5_K_M | +0.0569 ppl @ Llama-3-8B | 8.73 GB |
| Q6_K | +0.0217 ppl @ Llama-3-8B | 10.1 GB |
| Q8_0 | +0.0026 ppl @ Llama-3-8B | 13.00 GB |