
πŸŽ‰ Introducing BrokenLlama-3-8b: 100% Full Finetuning, No DPO added, Enjoy! πŸš€

This bad boy is a fully fine-tuned version of the already awesome Meta-Llama-3-8B, but we've cranked it up to 11 by attempting to remove alignment and biases using a super special curated dataset πŸ“ˆ with an 8192-token sequence length.

BrokenLlama-3-8b went through a crazy 48-hour training session on 4xA100 80GB, so you know it's ready to rock your world. πŸ’ͺ

With skills that'll blow your mind, BrokenLlama-3-8b can chat, code, and even do some fancy function calls. πŸ€–

But watch out! This llama is a wild one and will do pretty much anything you ask, even if it's a bit naughty. 😈 Make sure to keep it in check with your own alignment layer before letting it loose in the wild.
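
One simple, purely illustrative way to add such a layer is to prepend your own safety-oriented system message and screen requests before they ever reach the model. The blocklist and refusal wording below are hypothetical placeholders for whatever moderation you actually use:

# Toy "alignment layer": the blocklist and wording are placeholders, not a real safety solution
BLOCKED_TOPICS = ["example-banned-topic"]

def build_messages(user_prompt: str) -> list[dict]:
    # Reject the request before it reaches the model if it touches a blocked topic
    if any(topic in user_prompt.lower() for topic in BLOCKED_TOPICS):
        raise ValueError("Request rejected by the alignment layer.")
    return [
        {"role": "system", "content": "You are BrokenLlama, a helpful AI assistant. Refuse unsafe requests."},
        {"role": "user", "content": user_prompt},
    ]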

To get started with this incredible model, just use the ChatML prompt template and let the magic happen. It's so easy, even a llama could do it! πŸ¦™

<|im_start|>system
You are BrokenLlama, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

The ChatML prompt template is also available as the tokenizer's chat template, which means you can format messages using the tokenizer.apply_chat_template() method:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model with its language-modeling head (needed for generate) and the tokenizer
model = AutoModelForCausalLM.from_pretrained("investbrainsorg/BrokenLlama-3-8b")
tokenizer = AutoTokenizer.from_pretrained("investbrainsorg/BrokenLlama-3-8b")

messages = [
    {"role": "system", "content": "You are BrokenLlama, a helpful AI assistant."},
    {"role": "user", "content": "Hello BrokenLlama, what can you do for me?"}
]

# Apply the ChatML chat template and append the assistant generation prompt
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(gen_input, max_new_tokens=256)
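
To turn the result back into text, you can decode only the newly generated tokens (a minimal sketch; max_new_tokens above and skip_special_tokens here are illustrative choices, not tuned recommendations):

# Decode just the tokens produced after the prompt
response = tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True)
print(response)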

BrokenLlama-3-8b is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.

Quants

GGUF: Coming Soon

AWQ: Coming Soon

Evals

In Progress

NOTE

As long as you give us proper credit and attribution, you are welcome to use this model as a base model and perform further DPO/PPO tuning on it. In fact, we encourage you to do so for your specific use case, since this is just a generalist full fine-tune.
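
As a rough outline, further DPO tuning on top of BrokenLlama-3-8b could look like the sketch below using the TRL library. The dataset name is a placeholder and the exact DPOConfig/DPOTrainer arguments differ between TRL versions, so treat this as a starting point rather than a drop-in recipe:

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("investbrainsorg/BrokenLlama-3-8b")
tokenizer = AutoTokenizer.from_pretrained("investbrainsorg/BrokenLlama-3-8b")

# Placeholder: any preference dataset with "prompt", "chosen" and "rejected" columns
train_dataset = load_dataset("your-org/your-preference-dataset", split="train")

training_args = DPOConfig(output_dir="BrokenLlama-3-8b-dpo", beta=0.1)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()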

