Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

Chronorctypus-Limarobormes-13b - GGUF
- Model creator: https://huggingface.co/chargoddard/
- Original model: https://huggingface.co/chargoddard/Chronorctypus-Limarobormes-13b/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Chronorctypus-Limarobormes-13b.Q2_K.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q2_K.gguf) | Q2_K | 4.52GB |
| [Chronorctypus-Limarobormes-13b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [Chronorctypus-Limarobormes-13b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [Chronorctypus-Limarobormes-13b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [Chronorctypus-Limarobormes-13b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [Chronorctypus-Limarobormes-13b.Q3_K.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q3_K.gguf) | Q3_K | 5.9GB |
| [Chronorctypus-Limarobormes-13b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [Chronorctypus-Limarobormes-13b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [Chronorctypus-Limarobormes-13b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [Chronorctypus-Limarobormes-13b.Q4_0.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q4_0.gguf) | Q4_0 | 6.86GB |
| [Chronorctypus-Limarobormes-13b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [Chronorctypus-Limarobormes-13b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [Chronorctypus-Limarobormes-13b.Q4_K.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q4_K.gguf) | Q4_K | 7.33GB |
| [Chronorctypus-Limarobormes-13b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [Chronorctypus-Limarobormes-13b.Q4_1.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q4_1.gguf) | Q4_1 | 7.61GB |
| [Chronorctypus-Limarobormes-13b.Q5_0.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q5_0.gguf) | Q5_0 | 8.36GB |
| [Chronorctypus-Limarobormes-13b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [Chronorctypus-Limarobormes-13b.Q5_K.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q5_K.gguf) | Q5_K | 8.6GB |
| [Chronorctypus-Limarobormes-13b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [Chronorctypus-Limarobormes-13b.Q5_1.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q5_1.gguf) | Q5_1 | 9.1GB |
| [Chronorctypus-Limarobormes-13b.Q6_K.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q6_K.gguf) | Q6_K | 9.95GB |
| [Chronorctypus-Limarobormes-13b.Q8_0.gguf](https://huggingface.co/RichardErkhov/chargoddard_-_Chronorctypus-Limarobormes-13b-gguf/blob/main/Chronorctypus-Limarobormes-13b.Q8_0.gguf) | Q8_0 | 12.88GB |

Original model description:
---
tags:
- llama
- merge
---

Five different instruction-tuned models (which I'm sure are intuitively obvious from the name) merged using the methodology described in [Resolving Interference When Merging Models](https://arxiv.org/abs/2306.01708). In theory this should retain more of the capabilities of the constituent models than a straight linear merge would. In my testing, it feels quite capable.
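The TIES procedure from the linked paper boils down to three steps per parameter tensor: trim each task vector (fine-tuned weights minus base weights) to its largest-magnitude entries, elect a per-parameter sign by total magnitude, and average only the entries that agree with the elected sign. A minimal NumPy sketch of that idea (an illustration only, not the actual `ties_merge.py` implementation, which operates per-tensor on full checkpoints):

```python
import numpy as np

def ties_merge(base, finetuned, density=0.2):
    """Illustrative TIES-style merge: trim, elect sign, disjoint mean."""
    # Task vectors: each fine-tuned model's offset from the base weights.
    deltas = [ft - base for ft in finetuned]

    # Trim: zero out all but the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)

    # Elect sign: pick the sign with greater total mass at each position.
    sign = np.sign(stacked.sum(axis=0))
    sign[sign == 0] = 1.0

    # Disjoint merge: average only nonzero entries agreeing with the elected sign.
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0) / counts

    return base + merged_delta
```

Averaging only sign-consistent entries is what reduces the destructive interference a plain linear merge suffers when two models pull the same weight in opposite directions.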
Base model used for the merge: [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16)

Models merged in:
* [OpenOrca-Platypus2-13B](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B)
* [limarp-13b-merged](https://huggingface.co/Oniichat/limarp-13b-merged)
* [Nous-Hermes-Llama2-13b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b)
* [chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2)
* [airoboros-l2-13b-gpt4-1.4.1](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-1.4.1)

Works quite well with Alpaca-style prompts:
```
### Instruction:
...

### Response:
```

The script I used to perform the merge is available [here](https://github.com/cg123/ties-merge). The command that produced this model:
```
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./Chronorctypus-Limarobormes-13b --merge elinas/chronos-13b-v2 --merge Open-Orca/OpenOrca-Platypus2-13B --merge Oniichat/limarp-13b-merged --merge jondurbin/airoboros-l2-13b-gpt4-1.4.1 --merge NousResearch/Nous-Hermes-Llama2-13b --cuda
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_chargoddard__Chronorctypus-Limarobormes-13b)

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 49.88 |
| ARC (25-shot) | 59.9 |
| HellaSwag (10-shot) | 82.75 |
| MMLU (5-shot) | 58.45 |
| TruthfulQA (0-shot) | 51.9 |
| Winogrande (5-shot) | 74.43 |
| GSM8K (5-shot) | 3.87 |
| DROP (3-shot) | 17.89 |
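The Alpaca-style template above can be built with a small helper before sending text to the model; the function name here is hypothetical, and only the `### Instruction:` / `### Response:` structure comes from the card:

```python
def alpaca_prompt(instruction: str, response: str = "") -> str:
    """Format an instruction in the Alpaca style shown above.

    Leaving `response` empty ends the prompt at the `### Response:`
    header, so the model's generation continues from there.
    """
    return f"### Instruction:\n{instruction}\n\n### Response:\n{response}"

prompt = alpaca_prompt("List three uses of quantized GGUF models.")
```

Stopping generation when the model emits a new `### Instruction:` header is a common way to keep it from continuing the conversation on its own.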