---
base_model: meta-llama/Llama-3.2-1B-Instruct
language:
  - en
datasets:
  - KingNish/reasoning-base-20k
license: llama3.2
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
  - sft
  - reasoning
  - llama-3
---

QuantFactory Banner

# QuantFactory/Reasoning-Llama-1b-v0.1-GGUF

This is a quantized version of KingNish/Reasoning-Llama-1b-v0.1, created using llama.cpp.
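For reference, here is a minimal sketch of loading one of these GGUF files with llama-cpp-python (not part of the original card). The quantization filename below is an assumption; pick whichever `.gguf` file in this repo fits your hardware.

```python
from llama_cpp import Llama

# "*Q4_K_M.gguf" is glob-matched against the file names in the repo;
# substitute any quantization level you prefer.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Reasoning-Llama-1b-v0.1-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Which is greater, 9.9 or 9.11?"}],
    max_tokens=512,
)
print(output["choices"][0]["message"]["content"])
```

Note that the two-step reasoning flow described in the original card below relies on the model's custom chat template; a plain chat completion like this returns a single response.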

## Original Model Card

### Model Description

This is the first iteration of this model. For testing purposes it was trained on only 10k rows, and it performed better than expected. The model first produces its reasoning and then generates a response based on it, much like o1: the reasoning happens as a separate step (just like o1), with no special tags (unlike Reflection). Inference code is below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MAX_REASONING_TOKENS = 1024
MAX_RESPONSE_TOKENS = 512

model_name = "KingNish/Reasoning-Llama-1b-v0.1"

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Which is greater 9.9 or 9.11 ??"
messages = [
    {"role": "user", "content": prompt}
]

# Step 1: generate the reasoning.
# `add_reasoning_prompt` and the "reasoning" role below come from this model's custom chat template.
reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)

# print("REASONING: " + reasoning_output)

# Step 2: feed the reasoning back in and generate the final answer.
messages.append({"role": "reasoning", "content": reasoning_output})
response_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response_inputs = tokenizer(response_template, return_tensors="pt").to(model.device)
response_ids = model.generate(**response_inputs, max_new_tokens=MAX_RESPONSE_TOKENS)
response_output = tokenizer.decode(response_ids[0, response_inputs.input_ids.shape[1]:], skip_special_tokens=True)

print("ANSWER: " + response_output)
```

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
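For context, a minimal sketch of what an Unsloth + TRL SFT run of this kind might look like. The hyperparameters, LoRA settings, and the assumption that KingNish/reasoning-base-20k exposes a pre-formatted `text` column are illustrative, not the author's actual training recipe.

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model (from the metadata above) in 4-bit and attach LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-1B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Dataset listed in the card metadata; the "text" column name is an assumption.
dataset = load_dataset("KingNish/reasoning-base-20k", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```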