
Llama-3-Peach-Instruct-4x8B-MoE

GGUF files are available here: RDson/Llama-3-Peach-Instruct-4x8B-MoE-GGUF.
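
For local inference, a quant can be pulled straight from that repo and run with llama.cpp bindings. Below is a minimal sketch using llama-cpp-python; the exact GGUF filename is an assumption, so check the repo's file list.

```python
# Minimal sketch: download a quant and run it with llama-cpp-python.
# The GGUF filename below is an assumption; check the repo's file list.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RDson/Llama-3-Peach-Instruct-4x8B-MoE-GGUF",
    filename="Llama-3-Peach-Instruct-4x8B-MoE-Q4_K_M.gguf",  # hypothetical name
)

llm = Llama(model_path=model_path, n_ctx=8192, n_gpu_layers=-1)

# Llama 3 Instruct models use the llama-3 chat template, which the chat
# completion API applies from the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain mixture-of-experts briefly."}]
)
print(out["choices"][0]["message"]["content"])
```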

This is an experimental MoE created with Mergekit from Meta-Llama-3-8B-Instruct, Llama-3-8B-Instruct-Coder, SFR-Iterative-DPO-LLaMA-3-8B-R, and Hermes-2-Theta-Llama-3-8B. The merged model has roughly 24.9B parameters stored in FP16.

Evaluation (Q4_K_M):

  • GSM8K (5-shot): 0.6983 ± 0.0126
  • GSM8K (8-shot, CoT): 0.674 ± 0.0129
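
The card does not say how these numbers were produced. As a hedged sketch, assuming EleutherAI's lm-evaluation-harness was used against a llama.cpp server serving the Q4_K_M file (server address and setup are assumptions):

```python
# Hedged sketch: assumes lm-evaluation-harness querying a llama.cpp
# server (e.g. llama-server -m <Q4_K_M gguf>) at an assumed local address.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="gguf",                                 # llama.cpp server backend
    model_args="base_url=http://localhost:8080",  # assumed server address
    tasks=["gsm8k"],       # the 8-shot CoT run corresponds to gsm8k_cot
    num_fewshot=5,
)
print(results["results"]["gsm8k"])
```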

Mergekit yaml file:

```yaml
base_model: Meta-Llama-3-8B-Instruct
experts:
  - source_model: Meta-Llama-3-8B-Instruct
    positive_prompts:
    - "explain"
    - "chat"
    - "assistant"
    - "think"
    - "roleplay"
    - "versatile"
    - "helpful"
    - "factual"
    - "integrated"
    - "adaptive"
    - "comprehensive"
    - "balanced"
    negative_prompts:
    - "specialized"
    - "narrow"
    - "focused"
    - "limited"
    - "specific"
  - source_model: Llama-3-8B-Instruct-Coder
    positive_prompts:
    - "python"
    - "math"
    - "solve"
    - "code"
    - "programming"
    - "javascript"
    - "algorithm"
    - "factual"
    negative_prompts:
    - "sorry"
    - "cannot"
    - "concise"
    - "imaginative"
    - "creative"
  - source_model: SFR-Iterative-DPO-LLaMA-3-8B-R
    positive_prompts:
    - "AI"
    - "instructive"
    - "chat"
    - "assistant"
    - "clear"
    - "directive"
    - "helpful"
    - "informative"
  - source_model: Hermes-2-Theta-Llama-3-8B
    positive_prompts:
    - "chat"
    - "assistant"
    - "analytical"
    - "accurate"
    - "code"
    - "logical"
    - "knowledgeable"
    - "precise"
    - "calculate"
    - "compute"
    - "solve"
    - "work"
    - "python"
    - "javascript"
    - "programming"
    - "algorithm"
    - "tell me"
    - "assistant"
    - "factual"
    negative_prompts:
    - "abstract"
    - "artistic"
    - "emotional"
    - "mistake"
    - "inaccurate"
gate_mode: hidden
dtype: float16
```
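
To reproduce the merge, the yaml above would be passed to mergekit's MoE entry point; gate_mode: hidden initializes each expert's router weights from hidden-state representations of its positive and negative prompts. A minimal sketch, assuming mergekit is installed and the config is saved locally (paths are illustrative):

```python
# Minimal sketch, assuming mergekit is installed (pip install mergekit)
# and the yaml above is saved as config.yaml. mergekit-moe is mergekit's
# MoE merge entry point; the output path is illustrative.
import subprocess

subprocess.run(
    ["mergekit-moe", "config.yaml", "./Llama-3-Peach-Instruct-4x8B-MoE"],
    check=True,
)
```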

The Mergekit yaml file draws some inspiration from LoneStriker/Umbra-MoE-4x10.7-2.4bpw-h6-exl2.
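
The full-precision merge can also be loaded directly with transformers; a minimal sketch (the prompt is illustrative):

```python
# Minimal sketch of loading the FP16 merge with transformers.
# At ~24.9B params in FP16 this needs roughly 50 GB of (V)RAM;
# device_map="auto" spreads the weights across available devices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RDson/Llama-3-Peach-Instruct-4x8B-MoE"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Solve 12 * 17 step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```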
