Lumosia-v2-MoE-4x10.7
The Lumosia series, now upgraded with Lumosia V2.
What's New in Lumosia V2?
Lumosia V2 takes the original vision of being an "all-rounder" and refines it with more nuanced capabilities.
Topic/Prompt-Based Approach:
Lumosia V2 routes its experts based on topics and full prompts, diverging from the keyword-based approach of its counterpart, Umbra.
Context and Coherence:
Lumosia V2 has a base context of 8k with a scrolling window, and can maintain coherence out to 16k.
Balanced and Versatile:
The core ethos of Lumosia V2 is balance. It's designed to be your go-to assistant.
Experimentation and User-Centric Development:
Lumosia V2 remains an experimental model: a mosaic of the best-performing Solar models, selected based on user experience. This version is a testament to the idea that innovation is a journey, not a destination.
Come join the Discord: ConvexAI
Template:
```
### System:
### USER:{prompt}
### Assistant:
```
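As a sketch, the template can be assembled by hand when not relying on the tokenizer's chat template. The system text and the newline placement between sections are assumptions, not part of the card:

```python
# Sketch: build a prompt string following the template above.
# The system message content and newline placement are assumptions.
def build_prompt(system: str, prompt: str) -> str:
    return f"### System:\n{system}\n### USER:{prompt}\n### Assistant:\n"

print(build_prompt("You are a helpful assistant.", "Summarize what an MoE model is."))
```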
Settings:
- Temp: 1.0
- min-p: 0.02-0.1
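As a minimal sketch, these settings map onto transformers generation options roughly as follows; note that native min_p sampling requires a recent transformers release (assumption: v4.39 or later):

```python
# Sketch: the recommended sampling settings as a transformers GenerationConfig.
# min_p support assumes a recent transformers version (v4.39+).
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=1.0,  # recommended Temp
    min_p=0.05,       # pick a value in the suggested 0.02-0.1 range
)
```

The config can then be passed to `model.generate(..., generation_config=gen_config)`.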
🧩 Configuration
```yaml
base_model: DopeorNope/SOLARC-M-10.7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: DopeorNope/SOLARC-M-10.7B
    positive_prompts:
    negative_prompts:
  - source_model: Sao10K/Fimbulvetr-10.7B-v1 [Updated]
    positive_prompts:
    negative_prompts:
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v4 [Updated]
    positive_prompts:
    negative_prompts:
  - source_model: kyujinpy/Sakura-SOLAR-Instruct
    positive_prompts:
    negative_prompts:
```
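The block above is a mergekit-moe config; `gate_mode: hidden` routes tokens using hidden-state representations of the positive/negative prompts. As a notebook-style sketch, and assuming the config is saved as `config.yaml` with the prompt lists filled in, the merge could be reproduced like this (exact CLI behavior may vary across mergekit versions):

```python
# Sketch only: reproducing the merge with mergekit's MoE tool.
# Assumes config.yaml holds the configuration above.
!pip install -qU mergekit
!mergekit-moe config.yaml ./Lumosia-v2-MoE-4x10.7
```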
💻 Usage
```python
# Install dependencies (bitsandbytes enables 4-bit loading).
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Steelskull/Lumosia-v2-MoE-4x10.7"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline, loading the model in 4-bit to fit on smaller GPUs.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
# Render the chat messages with the tokenizer's chat template, then sample a completion.
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
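Note that `load_in_4bit` depends on bitsandbytes and a CUDA GPU; on CPU-only hardware, drop it from `model_kwargs` and expect a much larger memory footprint at full precision.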
Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 73.75 |
| AI2 Reasoning Challenge (25-Shot) | 70.39 |
| HellaSwag (10-Shot) | 87.87 |
| MMLU (5-Shot) | 66.45 |
| TruthfulQA (0-shot) | 68.48 |
| Winogrande (5-shot) | 84.21 |
| GSM8k (5-shot) | 65.13 |
Quantizations of Steelskull/Lumosia-v2-MoE-4x10.7 were created using the llm-quantizer pipeline.