Mixtral MoE 2x10.7B

One of the best MoE models, as reviewed by the Reddit community.

A mixture-of-experts (MoE) merge of two 10.7B models.
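
The merge uses the Mixtral MoE architecture, so the expert layout can be read straight from the model config without downloading the 19B of weights. A minimal sketch (the field names assume the standard Mixtral config):

from transformers import AutoConfig

# Fetch only the config, not the weights.
config = AutoConfig.from_pretrained("cloudyu/Mixtral_11Bx2_MoE_19B")

# Standard MixtralConfig fields.
print(config.model_type)           # expected: "mixtral"
print(config.num_local_experts)    # expected: 2 experts per MoE layer
print(config.num_experts_per_tok)  # how many experts are routed per token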

GPU code example

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# v2 models
model_path = "cloudyu/Mixtral_11Bx2_MoE_19B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)

# Load in 4-bit (requires bitsandbytes) and spread the layers across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map="auto",
    local_files_only=False,
    load_in_4bit=True,
)
print(model)

# Simple interactive loop; an empty prompt exits.
prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
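
Recent transformers releases deprecate passing load_in_4bit directly to from_pretrained in favor of an explicit quantization config. A hedged sketch of the equivalent load (same repo; the float16 compute dtype is an assumption, not part of the original example):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantization settings (requires bitsandbytes).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "cloudyu/Mixtral_11Bx2_MoE_19B",
    quantization_config=bnb_config,
    device_map="auto",
)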

CPU code example

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# v2 models
model_path = "cloudyu/Mixtral_11Bx2_MoE_19B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)

# Full-precision weights on CPU; expect high RAM usage and slow generation for a 19B model.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map="cpu",
    local_files_only=False,
)
print(model)

# Simple interactive loop; an empty prompt exits.
prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
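
For interactive use it can be nicer to stream tokens as they are generated rather than print the whole completion at the end. A minimal sketch with transformers' TextStreamer, reusing the tokenizer and model objects from either example above:

from transformers import TextStreamer

# Print decoded tokens to stdout as they are produced; skip echoing the prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)

input_ids = tokenizer("Explain mixture-of-experts models in one paragraph.", return_tensors="pt").input_ids
# With the GPU example, move input_ids to the GPU first, e.g. input_ids.to("cuda").
model.generate(
    input_ids=input_ids,
    max_new_tokens=200,
    repetition_penalty=1.2,
    streamer=streamer,
)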

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 74.41 |
| AI2 Reasoning Challenge (25-Shot) | 71.16 |
| HellaSwag (10-Shot)               | 88.47 |
| MMLU (5-Shot)                     | 66.31 |
| TruthfulQA (0-shot)               | 72.00 |
| Winogrande (5-shot)               | 83.27 |
| GSM8k (5-shot)                    | 65.28 |
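
The reported average is simply the arithmetic mean of the six benchmark scores, which is easy to verify:

# Mean of the six Open LLM Leaderboard scores listed above.
scores = [71.16, 88.47, 66.31, 72.00, 83.27, 65.28]
average = sum(scores) / len(scores)
print(f"{average:.3f}")  # ≈ 74.415, reported as 74.41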