---
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- mistral
- trl
- cot
- guidance
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/fusion-guide-12b-0.1-GGUF
This is a quantized version of [fusionbase/fusion-guide-12b-0.1](https://huggingface.co/fusionbase/fusion-guide-12b-0.1), created using llama.cpp.
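
Since this repo ships GGUF files, one way to run a quant locally is via `llama-cpp-python`. The sketch below is illustrative: the GGUF filename is an assumption, so substitute whichever quant you actually download from this repo.

```python
# Minimal sketch using llama-cpp-python. The model_path filename is a
# hypothetical example -- use the quant file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./fusion-guide-12b-0.1.Q4_K_M.gguf",  # assumed local path
    n_ctx=8192,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

prompt = "<guidance_prompt>Count the number of 'r's in the word 'strawberry.'</guidance_prompt>"
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": prompt}],
    max_tokens=2000,
    temperature=0,
)
print(out["choices"][0]["message"]["content"])
```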

# Original Model Card

# fusion-guide

[![6ea83689-befb-498b-84b9-20ba406ca4e7.png](https://i.postimg.cc/dtgR40Lz/6ea83689-befb-498b-84b9-20ba406ca4e7.png)](https://postimg.cc/8jBrCNdH)

# Model Overview
fusion-guide is an AI reasoning system built on the Mistral-Nemo 12B architecture. It employs a two-model approach to enhance problem-solving: a "Guide" model generates a structured, step-by-step plan for a given task, and that plan is then passed to the primary "Response" model, which uses the guidance to craft an accurate and comprehensive response.
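
As a concrete illustration of that flow, the sketch below chains two `transformers` chat pipelines. The choice of response model and the way the plan is spliced into its prompt are illustrative assumptions, not a documented interface; note also that loading two 12B models at once needs substantial GPU memory.

```python
# Illustrative two-model flow: a Guide model plans, a Response model answers.
# The response model and the guidance-injection format are assumptions.
from transformers import pipeline

guide = pipeline("text-generation", model="fusionbase/fusion-guide-12b-0.1", device_map="auto")
responder = pipeline("text-generation", model="mistralai/Mistral-Nemo-Instruct-2407", device_map="auto")

task = "Count the number of 'r's in the word 'strawberry.'"

# Step 1: the Guide model turns the task into a step-by-step plan.
plan = guide(
    [{"role": "user", "content": f"<guidance_prompt>{task}</guidance_prompt>"}],
    max_new_tokens=1024,
)[0]["generated_text"][-1]["content"]

# Step 2: the Response model answers the task with the plan as context.
answer = responder(
    [{"role": "user", "content": f"{task}\n\nFollow this plan:\n{plan}"}],
    max_new_tokens=1024,
)[0]["generated_text"][-1]["content"]
print(answer)
```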

# Model and Data

fusion-guide is fine-tuned on a custom dataset consisting of task-based prompts in both English (90%) and German (10%). The tasks vary in complexity, including scenarios designed to be challenging or unsolvable, to enhance the model's ability to handle ambiguous situations. Each training sample follows the structure prompt => guidance, teaching the model to break down complex tasks systematically.
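
A sample in that prompt => guidance structure might look like the following; this is a hypothetical illustration, not an actual record from the training dataset.

```python
# Hypothetical example of the prompt => guidance training structure;
# not an actual record from the training dataset.
sample = {
    "prompt": "<guidance_prompt>Plan a 3-day budget trip to Berlin.</guidance_prompt>",
    "guidance": (
        "1. Clarify constraints: total budget, travel dates, starting city.\n"
        "2. Break the trip into transport, lodging, food, and activities.\n"
        "3. Estimate costs per category and flag anything exceeding the budget.\n"
        "4. Assemble a day-by-day itinerary and summarize total spend."
    ),
}
```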

Read a detailed description and evaluation of the model here: https://blog.fusionbase.com/ai-research/beyond-cot-how-fusion-guide-elevates-ai-reasoning-with-a-two-model-system

### Prompt format

The prompt must be enclosed in `<guidance_prompt>{PROMPT}</guidance_prompt>` tags, following the format below:

`<guidance_prompt>Count the number of 'r's in the word 'strawberry,' and then write a Python script that checks if an arbitrary word contains the same number of 'r's.</guidance_prompt>`
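
A trivial helper for producing that format could look like this (a hypothetical convenience function, not part of any shipped API):

```python
def wrap_guidance_prompt(task: str) -> str:
    """Enclose a raw task in the tags fusion-guide expects (hypothetical helper)."""
    return f"<guidance_prompt>{task}</guidance_prompt>"
```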

# Usage

fusion-guide can be used with vLLM and other Mistral-Nemo-compatible inference engines. Below is an example of how to use it with unsloth; a vLLM sketch follows the code block:

```python
from unsloth import FastLanguageModel

max_seq_length = 8192  # unsloth supports RoPE scaling internally, so this can be raised
dtype = None  # None for auto-detection; float16 for Tesla T4/V100, bfloat16 for Ampere+
load_in_4bit = False  # Set to True to reduce memory usage via 4-bit quantization

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="fusionbase/fusion-guide-12b-0.1",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)

FastLanguageModel.for_inference(model)  # Enable unsloth's faster inference path

guidance_prompt = """<guidance_prompt>Count the number of 'r's in the word 'strawberry,' and then write a Python script that checks if an arbitrary word contains the same number of 'r's.</guidance_prompt>"""
messages = [{"role": "user", "content": guidance_prompt}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,  # Must be set for generation
    return_tensors="pt",
).to("cuda")

# Greedy decoding; early_stopping and temperature only apply to beam search/sampling
outputs = model.generate(input_ids=inputs, max_new_tokens=2000, use_cache=True, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt and special tokens
result = tokenizer.batch_decode(outputs[:, inputs.shape[-1]:], skip_special_tokens=True)
print(result[0])
```
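
Since the card states vLLM compatibility, a rough equivalent with vLLM might look like the following; the engine settings are illustrative, not prescribed by the model authors.

```python
# Rough vLLM equivalent of the unsloth example above; settings are illustrative.
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("fusionbase/fusion-guide-12b-0.1")
llm = LLM(model="fusionbase/fusion-guide-12b-0.1", max_model_len=8192)

messages = [{
    "role": "user",
    "content": "<guidance_prompt>Count the number of 'r's in the word 'strawberry,' "
               "and then write a Python script that checks if an arbitrary word "
               "contains the same number of 'r's.</guidance_prompt>",
}]
# Render the chat template to a plain prompt string for vLLM
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = llm.generate([prompt], SamplingParams(temperature=0, max_tokens=2000))
print(outputs[0].outputs[0].text)
```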

# Disclaimer

The model may occasionally fail to generate complete guidance, especially when the prompt includes specific instructions on how the response should be structured. This limitation arises from the way the model was trained.
|