---
license: apache-2.0
language:
- en
---
|
|
|
<div align="center">

# Mini-Magnum-Unboxed-12B-GGUF

</div>
|
|
|
This is the GGUF quantization of [concedo/Mini-Magnum-Unboxed-12B](https://huggingface.co/concedo/Mini-Magnum-Unboxed-12B), which was originally finetuned on top of [intervitens/mini-magnum-12b-v1.1](https://huggingface.co/intervitens/mini-magnum-12b-v1.1) to correct a few minor personal annoyances with an otherwise excellent model.
|
|
|
You can use [KoboldCpp](https://github.com/LostRuins/koboldcpp/releases/latest) to run this model.
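For example, a typical KoboldCpp launch might look like this (the GGUF filename is a placeholder; pick whichever quant you downloaded, and adjust the context size to taste):

```shell
# Illustrative invocation using KoboldCpp's standard CLI flags.
# Replace the model filename with the quant file you actually downloaded.
python koboldcpp.py --model Mini-Magnum-Unboxed-12B.Q4_K_M.gguf --contextsize 8192
```

Precompiled release binaries from the link above also work; the flags are the same.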
|
|
|
- **Instruct prompt format changed to Alpaca** - Honestly, I don't know why more models don't use it. If you're an Alpaca-format lover like me, this should help.

- **Instruct Decensoring Applied** - You should not need a jailbreak for a model to obey the user. The model should always do what you tell it to, with no need for weird `"Sure, I will"` prefills or kitten-murdering-threat tricks.

- **Short Conversation Tuning** - For people who also like to *chat* (think chatbot/DM) with a character rather than just roleplay with it. This adds a small dataset of short chat-message conversations.
|
|
|
<!-- prompt-template start -->

## Prompt template: Alpaca

```
### Instruction:
{prompt}

### Response:
```

<!-- prompt-template end -->
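If you are building prompts yourself (e.g. when calling the KoboldCpp API directly), the template above can be assembled with a small helper. This is just a sketch: the function name is made up, and the exact whitespace is an assumption based on the template shown; match whatever your frontend actually sends.

```python
def build_alpaca_prompt(instruction: str, response: str = "") -> str:
    # Wrap a user instruction in the Alpaca template shown above.
    # A non-empty `response` can be used to prefill the model's turn.
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"


prompt = build_alpaca_prompt("Write a haiku about autumn.")
print(prompt)
```

For multi-turn chat, the usual approach is to concatenate previous instruction/response pairs in the same format before the latest instruction.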
|
|
|
Please leave any feedback or report any issues you encounter. All credit goes to the tuners of the original mini-magnum-12b-v1.1 model, as well as Mistral for the Mistral Nemo base model.