---
license: llama3
language:
  - en
library_name: transformers
pipeline_tag: text2text-generation
---

# Model Name: Llama 3 orca_mini_v6_8b_dpo

Llama 3 orca_mini_v6_8b_dpo is trained on various DPO datasets.

Passionate about Generative AI? I help companies privately train and deploy custom LLMs/MLLMs affordably. For startups, I can even assist with securing GPU grants to get you started. Let's chat!

Looking forward to connecting: https://www.linkedin.com/in/pankajam


## NOTICE

By providing proper credit and attribution, you are granted permission to use this model as a foundational base for further full fine-tuning, DPO, PPO, or ORPO tuning, and any kind of merges. I actively encourage users to customize and enhance the model according to their specific needs, as this version is designed to be a comprehensive general model. Dive in and innovate!
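
For example, a minimal sketch of continuing DPO tuning from this checkpoint with Hugging Face TRL could look like the following; the dataset name is only a placeholder, and the exact `DPOTrainer` argument names vary across TRL releases:

```python
# Hypothetical sketch: continuing preference tuning (DPO) from this checkpoint with TRL.
# The dataset name is a placeholder; adjust arguments to your TRL version.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_slug = "pankajmathur/orca_mini_v6_8b_dpo"
model = AutoModelForCausalLM.from_pretrained(model_slug)
tokenizer = AutoTokenizer.from_pretrained(model_slug)

# Any preference dataset with "prompt", "chosen", and "rejected" columns works here.
train_dataset = load_dataset("your-org/your-preference-dataset", split="train")

training_args = DPOConfig(output_dir="orca_mini_v6_8b_dpo-continued", beta=0.1)
trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL releases
)
trainer.train()
```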

## Evaluation

Coming soon...

## Example Usage

Here is the ChatML prompt format:

```
<|im_start|>system
You are Orca Mini, a helpful AI assistant.<|im_end|>
<|im_start|>user
Hello Orca Mini, what can you do for me?<|im_end|>
<|im_start|>assistant
```

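If you want to inspect the rendered prompt yourself, the tokenizer's chat template can produce it for you; a quick sketch, assuming the repository ships this ChatML template:

```python
# Render (but do not tokenize) the chat to inspect the exact prompt string.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pankajmathur/orca_mini_v6_8b_dpo")
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```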
Below is a code example showing how to use this model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_slug = "pankajmathur/orca_mini_v6_8b_dpo"
model = AutoModelForCausalLM.from_pretrained(model_slug)
tokenizer = AutoTokenizer.from_pretrained(model_slug)

messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]

# Render the chat with the ChatML template, generate, and decode only the new tokens
gen_input = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(gen_input, max_new_tokens=256)
print(tokenizer.decode(output[0][gen_input.shape[-1]:], skip_special_tokens=True))
```
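Alternatively, recent transformers releases let the high-level pipeline API consume chat messages directly; a brief sketch, assuming such a release is installed:

```python
# Same chat, via the high-level text-generation pipeline (recent transformers only).
from transformers import pipeline

generator = pipeline("text-generation", model="pankajmathur/orca_mini_v6_8b_dpo")
messages = [
    {"role": "system", "content": "You are Orca Mini, a helpful AI assistant."},
    {"role": "user", "content": "Hello Orca Mini, what can you do for me?"},
]
result = generator(messages, max_new_tokens=256)
# The pipeline returns the full chat; the last message is the assistant's reply.
print(result[0]["generated_text"][-1]["content"])
```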

This model is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.

## Quants

GGUF: Coming soon

AWQ: Coming soon