---
base_model:
- Undi95/Meta-Llama-3-8B-Instruct-hf
- ResplendentAI/RP_Format_QuoteAsterisk_Llama3
- Undi95/Meta-Llama-3-8B-Instruct-hf
- ResplendentAI/Smarts_Llama3
- Undi95/Meta-Llama-3-8B-Instruct-hf
- ResplendentAI/Luna_Llama3
- Undi95/Meta-Llama-3-8B-Instruct-hf
- ResplendentAI/BlueMoon_Llama3
- openlynn/Llama-3-Soliloquy-8B-v2
- Undi95/Meta-Llama-3-8B-Instruct-hf
- ResplendentAI/Aura_Llama3
library_name: transformers
tags:
- mergekit
- merge
---

# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
This is quite an interesting model, and it's been fun so far. It can be quite harsh, though, so if that's not something you like, this model isn't for you :3 It's an attempt at loosely recreating ResplendentAI/SOVL_Llama3_8B while keeping it smarter, with the lovely openlynn/Llama-3-Soliloquy-8B-v2 holding it together. I'm personally enjoying this model; it feels different from most Llama-3 models.
### Merge Method
This model was merged using the Model Stock merge method, with openlynn/Llama-3-Soliloquy-8B-v2 as the base.
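Roughly speaking, Model Stock averages the fine-tuned checkpoints and then pulls that average back toward the base model, with the interpolation ratio derived from the angle between the fine-tuned deltas. The snippet below is a loose, unofficial per-tensor sketch of that idea (not mergekit's actual implementation); the helper function name is illustrative.

```python
# Loose, unofficial sketch of the Model Stock idea (Jang et al., 2024);
# mergekit's real implementation differs in details.
import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one tensor from k >= 2 fine-tuned models, anchored on the base tensor."""
    k = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]
    # Average pairwise cosine similarity between fine-tuned deltas, measured from the base.
    cos_vals = [
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k)
        for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_vals).mean().clamp(-1.0, 1.0)
    # Interpolation ratio from the paper: t = k*cos(theta) / ((k-1)*cos(theta) + 1).
    t = (k * cos_theta) / ((k - 1) * cos_theta + 1)
    # Merged weight: move the average of the fine-tuned weights back toward the base.
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```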
### Models Merged
The following models were included in the merge:
- Undi95/Meta-Llama-3-8B-Instruct-hf + ResplendentAI/RP_Format_QuoteAsterisk_Llama3
- Undi95/Meta-Llama-3-8B-Instruct-hf + ResplendentAI/Smarts_Llama3
- Undi95/Meta-Llama-3-8B-Instruct-hf + ResplendentAI/Luna_Llama3
- Undi95/Meta-Llama-3-8B-Instruct-hf + ResplendentAI/BlueMoon_Llama3
- Undi95/Meta-Llama-3-8B-Instruct-hf + ResplendentAI/Aura_Llama3
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Undi95/Meta-Llama-3-8B-Instruct-hf+ResplendentAI/Aura_Llama3
  - model: Undi95/Meta-Llama-3-8B-Instruct-hf+ResplendentAI/Smarts_Llama3
  - model: Undi95/Meta-Llama-3-8B-Instruct-hf+ResplendentAI/Luna_Llama3
  - model: Undi95/Meta-Llama-3-8B-Instruct-hf+ResplendentAI/BlueMoon_Llama3
  - model: Undi95/Meta-Llama-3-8B-Instruct-hf+ResplendentAI/RP_Format_QuoteAsterisk_Llama3
merge_method: model_stock
base_model: openlynn/Llama-3-Soliloquy-8B-v2
dtype: float16
```
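If you want to reproduce the merge, the config above can be passed to mergekit's `mergekit-yaml` command (e.g. `mergekit-yaml config.yaml ./output-model-directory`). Below is a minimal sketch of loading and prompting the resulting model with transformers; the model path is a placeholder, so substitute the actual merge output directory or Hub repo id.

```python
# Minimal sketch of loading the merged model with transformers.
# "path/to/merged-model" is a placeholder; point it at the merge output or the Hub repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```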