Llama Salad

MoE merges of various Llama-3 models.
Changes in V3:

- L3-8B-Stheno-v3.2 as the base model instead of Meta-Llama-3-8B-Instruct
- Removed opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5 and added Einstein-v6.1-Llama3-8B
- Swapped Llama-3-Soliloquy-8B-v2 for L3-8B-Stheno-v3.2
I was clearly wrong when I said V2 would be difficult to improve on, because V3 is significantly better in just about every aspect. Stheno-v3.2 fixed all of the issues present in Stheno-v3.1, making it my favorite roleplay model and the best base model for llama-3 MoE merges.
The one thing I do want to improve on is finding a better conversational model than Meta-Llama-3-8B-Instruct; it's good for that use case, but I'm sure there's a better one out there. I tried using llama-3-cat-8b-instruct-v1, but it absolutely tanked the model's situational awareness and kept making blatantly contradictory statements.
GGUF
```yaml
base_model: Sao10K/L3-8B-Stheno-v3.2
gate_mode: hidden
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: NousResearch/Meta-Llama-3-8B-Instruct
    positive_prompts:
      - "chat"
      - "conversation"
  - source_model: Weyaxi/Einstein-v6.1-Llama3-8B
    positive_prompts:
      - "science"
      - "physics"
      - "chemistry"
      - "biology"
      - "math"
      - "step-by-step"
      - "logical reasoning"
      - "multilingual"
      - "translation"
      - "language translation"
      - "foreign language"
    negative_prompts:
      - "programming language"
  - source_model: migtissera/Llama-3-8B-Synthia-v3.5
    positive_prompts:
      - "summarize"
      - "paraphrase"
      - "list"
      - "explain"
      - "define"
      - "analyze"
      - "rephrase"
      - "elaborate"
      - "programming language"
      - "JavaScript"
      - "Python programming language"
      - "Rust programming language"
      - "C++ programming language"
      - "GO programming language"
      - "Ruby programming language"
      - "Haskell programming language"
      - "SQL query language"
      - "CSS markup styling language"
      - "code"
  - source_model: Sao10K/L3-8B-Stheno-v3.2
    positive_prompts:
      - "characters"
      - "scene"
      - "roleplay"
      - "erotic roleplay"
      - "sexual fetish"
      - "NSFW"
      - "creative writing"
      - "storytelling"
      - "narration"
      - "narrative setting"
      - "narrative plot"
      - "narrative exposition"
      - "narrative theme"
      - "narrative climax"
```
Detailed results can be found here
| Metric              | Value |
|---------------------|------:|
| Avg.                | 24.75 |
| IFEval (0-Shot)     | 66.54 |
| BBH (3-Shot)        | 31.93 |
| MATH Lvl 5 (4-Shot) |  8.53 |
| GPQA (0-shot)       |  7.05 |
| MuSR (0-shot)       |  6.45 |
| MMLU-PRO (5-shot)   | 27.98 |