---
tags:
- merge
- mergekit
- lazymergekit
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
license: llama3
language:
- en
library_name: transformers
pipeline_tag: text-generation
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- grimjim/Llama-3-Oasis-v1-OAS-8B
- Casual-Autopsy/SOVL-MopeyMule-8B
- Casual-Autopsy/MopeyMule-Blackroot-8B
- ResplendentAI/Theory_of_Mind_Llama3
- ResplendentAI/RP_Format_QuoteAsterisk_Llama3
- ResplendentAI/Smarts_Llama3
---

Image by ろ47

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

The goal of this merge was to make an RP model better suited for role-plays with heavy themes, such as (but not limited to):

- Mental illness
- Self-harm
- Trauma
- Suicide

I hated how RP models tended to be overly positive and hopeful with role-plays involving such themes, but thanks to [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) this problem has been lessened considerably. If you're an enjoyer of savior/reverse-savior type role-plays like myself, then this bot is for you.

**Compared to v1, v3 has better intelligence, fewer GPTisms, and much more human-like responses. Merging MopeyMule with RP LoRAs also seems to make it more effective at changing the tone of RP LLMs, so feel free to use the MopeyMule merges I made in your own merges:**

- [Casual-Autopsy/SOVL-MopeyMule-8B](https://huggingface.co/Casual-Autopsy/SOVL-MopeyMule-8B)
- [Casual-Autopsy/MopeyMule-Blackroot-8B](https://huggingface.co/Casual-Autopsy/MopeyMule-Blackroot-8B)

### Quants

- [L3-Umbral-Mind-RP-v3-8B-i1-GGUF](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v3-8B-i1-GGUF) by mradermacher
- [L3-Umbral-Mind-RP-v3-8B-8bpw-h8-exl2](https://huggingface.co/riveRiPH/L3-Umbral-Mind-RP-v3-8B-8bpw-h8-exl2) by riveRiPH

### Merge Method

This model was produced from several Task Arithmetic merges that were then tied together with a Model Stock merge.
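In task arithmetic, each fine-tune contributes a weighted "task vector" (its weights minus the base model's weights), and the sum of those vectors is added back onto the base; `normalize: False` in the configs below should mean the weights are applied as-is rather than rescaled to sum to 1. The sketch below is a toy illustration of that idea on raw state dicts, not mergekit's actual implementation; all names in it are hypothetical, and real merges should go through mergekit itself:

```python
import torch

def task_arithmetic_merge(base_sd, tuned_sds, weights):
    """Toy task-arithmetic merge: merged = base + sum_i w_i * (tuned_i - base)."""
    merged = {}
    for name, base_w in base_sd.items():
        delta = torch.zeros_like(base_w)
        for w, tuned_sd in zip(weights, tuned_sds):
            delta += w * (tuned_sd[name] - base_w)  # weighted task vector
        merged[name] = base_w + delta  # normalize: False -> weights used as-is
    return merged

# Hypothetical usage with fine-tunes that share one base:
# merged_sd = task_arithmetic_merge(base_sd, [sd_a, sd_b], [0.25, 0.1])
```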
### Models Merged

The following models were included in the merge (a `+` means the named LoRA is applied on top of the preceding model before merging):

* Casual-Autopsy/Umbral-v3-1 + [ResplendentAI/Theory_of_Mind_Llama3](https://huggingface.co/ResplendentAI/Theory_of_Mind_Llama3)
    * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
    * [Casual-Autopsy/SOVL-MopeyMule-8B](https://huggingface.co/Casual-Autopsy/SOVL-MopeyMule-8B)
    * [Casual-Autopsy/MopeyMule-Blackroot-8B](https://huggingface.co/Casual-Autopsy/MopeyMule-Blackroot-8B)
* Casual-Autopsy/Umbral-v3-2 + [ResplendentAI/Smarts_Llama3](https://huggingface.co/ResplendentAI/Smarts_Llama3)
    * [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
    * [Casual-Autopsy/SOVL-MopeyMule-8B](https://huggingface.co/Casual-Autopsy/SOVL-MopeyMule-8B)
    * [Casual-Autopsy/MopeyMule-Blackroot-8B](https://huggingface.co/Casual-Autopsy/MopeyMule-Blackroot-8B)
* Casual-Autopsy/Umbral-v3-3 + [ResplendentAI/RP_Format_QuoteAsterisk_Llama3](https://huggingface.co/ResplendentAI/RP_Format_QuoteAsterisk_Llama3)
    * [grimjim/Llama-3-Oasis-v1-OAS-8B](https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B)
    * [Casual-Autopsy/SOVL-MopeyMule-8B](https://huggingface.co/Casual-Autopsy/SOVL-MopeyMule-8B)
    * [Casual-Autopsy/MopeyMule-Blackroot-8B](https://huggingface.co/Casual-Autopsy/MopeyMule-Blackroot-8B)

## Secret Sauce

The following YAML configurations were used to produce this model:

### Umbral-v3-1

```yaml
slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
        parameters:
          weight: 0.65
      - model: Casual-Autopsy/SOVL-MopeyMule-8B
        layer_range: [0, 32]
        parameters:
          weight: 0.25
      - model: Casual-Autopsy/MopeyMule-Blackroot-8B
        layer_range: [0, 32]
        parameters:
          weight: 0.1
merge_method: task_arithmetic
base_model: Sao10K/L3-8B-Stheno-v3.2
normalize: False
dtype: bfloat16
```

### Umbral-v3-2

```yaml
slices:
  - sources:
      - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
        layer_range: [0, 32]
        parameters:
          weight: 0.75
      - model: Casual-Autopsy/SOVL-MopeyMule-8B
        layer_range: [0, 32]
        parameters:
          weight: 0.15
      - model: Casual-Autopsy/MopeyMule-Blackroot-8B
        layer_range: [0, 32]
        parameters:
          weight: 0.1
merge_method: task_arithmetic
base_model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
normalize: False
dtype: bfloat16
```

### Umbral-v3-3

```yaml
slices:
  - sources:
      - model: grimjim/Llama-3-Oasis-v1-OAS-8B
        layer_range: [0, 32]
        parameters:
          weight: 0.55
      - model: Casual-Autopsy/SOVL-MopeyMule-8B
        layer_range: [0, 32]
        parameters:
          weight: 0.35
      - model: Casual-Autopsy/MopeyMule-Blackroot-8B
        layer_range: [0, 32]
        parameters:
          weight: 0.1
merge_method: task_arithmetic
base_model: grimjim/Llama-3-Oasis-v1-OAS-8B
normalize: False
dtype: bfloat16
```

### Umbral-Mind-RP-8B

```yaml
models:
  - model: Casual-Autopsy/Umbral-v3-1+ResplendentAI/Theory_of_Mind_Llama3
  - model: Casual-Autopsy/Umbral-v3-2+ResplendentAI/Smarts_Llama3
  - model: Casual-Autopsy/Umbral-v3-3+ResplendentAI/RP_Format_QuoteAsterisk_Llama3
merge_method: model_stock
base_model: Casual-Autopsy/Umbral-v3-1
dtype: bfloat16
```
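## Usage

If you just want to chat with the finished model, a minimal text-generation sketch with transformers follows. The repo id is assumed from the quant names above, and the prompt and sampler settings are placeholders rather than tuned recommendations:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Casual-Autopsy/L3-Umbral-Mind-RP-v3-8B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama 3 chat formatting is handled by the tokenizer's chat template
messages = [{"role": "user", "content": "Describe your character's mood."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```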