---
base_model: meta-llama/Llama-3.2-1B-Instruct
language:
- en
datasets:
- KingNish/reasoning-base-20k
license: llama3.2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- reasoning
- llama-3
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Reasoning-Llama-1b-v0.1-GGUF
This is a quantized version of [KingNish/Reasoning-Llama-1b-v0.1](https://huggingface.co/KingNish/Reasoning-Llama-1b-v0.1) created using llama.cpp.

# Original Model Card

# Model Description

This is the first iteration of this model. For testing purposes it was trained on only 10k rows, and it performed better than expected. The model first reasons and then generates a response based on that reasoning. The reasoning is produced as a separate step (just like o1), without any special tags (unlike Reflection-style models).

Below is the inference code.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MAX_REASONING_TOKENS = 1024
MAX_RESPONSE_TOKENS = 512

model_name = "KingNish/Reasoning-Llama-1b-v0.1"

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Which is greater 9.9 or 9.11 ??"
messages = [
    {"role": "user", "content": prompt}
]

# Step 1: generate the reasoning turn using the model's reasoning prompt
reasoning_template = tokenizer.apply_chat_template(messages, tokenize=False, add_reasoning_prompt=True)
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)

# print("REASONING: " + reasoning_output)

# Step 2: feed the reasoning back as a "reasoning" turn and generate the final answer
messages.append({"role": "reasoning", "content": reasoning_output})
response_template = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
response_inputs = tokenizer(response_template, return_tensors="pt").to(model.device)
response_ids = model.generate(**response_inputs, max_new_tokens=MAX_RESPONSE_TOKENS)
response_output = tokenizer.decode(response_ids[0, response_inputs.input_ids.shape[1]:], skip_special_tokens=True)

print("ANSWER: " + response_output)
```

- **Trained by:** [Nishith Jain](https://huggingface.co/KingNish)
- **License:** llama3.2
- **Finetuned from model:** [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct)
- **Dataset used:** [KingNish/reasoning-base-20k](https://huggingface.co/datasets/KingNish/reasoning-base-20k)

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
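
# Running the GGUF quants (sketch)

Since this repository provides GGUF files built with llama.cpp, below is a minimal sketch of loading one of them with the `llama-cpp-python` bindings. The quant filename pattern is an assumption (pick whichever quant file is actually present in this repo), and note this runs a single chat completion rather than the two-pass reasoning flow shown in the transformers code above.

```python
# Minimal sketch, assuming llama-cpp-python is installed with huggingface_hub support.
from llama_cpp import Llama

# The filename pattern below is an assumption; substitute the quant you want from this repo.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Reasoning-Llama-1b-v0.1-GGUF",
    filename="*Q4_K_M.gguf",   # hypothetical quant choice
    n_ctx=2048,                # room for prompt + reasoning + answer
)

# The GGUF embeds the model's chat template, so create_chat_completion applies it automatically.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Which is greater, 9.9 or 9.11?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```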