---
model-index:
- name: rocket-3b
  results: []
license: cc-by-sa-4.0
language:
- en
base_model: stabilityai/stablelm-3b-4e1t
---

<img src="https://cdn-uploads.huggingface.co/production/uploads/6501bfe0493fd9c8c2e32402/BmbkjOkcTm-YMa-unolmJ.png" alt="Rocket Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>

# Rocket-3B 🦝
<b>Rocket</b> 🦝 is a 3-billion-parameter large language model trained on a mix of publicly available datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). The prompt format used is <b>ChatML</b>.

## Model description
- **Model type:** A 3B-parameter GPT-like model fine-tuned on a mix of publicly available datasets using DPO.
- **Language(s) (NLP):** Primarily English
- **License:** CC-BY-SA-4.0
- **Finetuned from model:** [stabilityai/stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t)

## Performance
Despite its compact size, the model achieves outstanding scores on both the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks, surpassing the performance of considerably larger models.

| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
|-------|------|-----------|------------------|-------------------------|
| StableLM-Tuned-α 🦜 | 7B | SFT | 2.75 | - |
| MPT-Chat | 7B | SFT | 5.42 | - |
| Falcon-Instruct 🦅 | 40B | SFT | 5.17 | 45.71 |
| Orca-2 | 13B | SFT | 6.15 | - |
| Xwin-LM v0.1 | 7B | PPO | 6.19 | 87.83 |
| Llama2-Chat 🦙 | 7B | RLHF | 6.26 | - |
| TÜLU 2 🐫 | 7B | DPO | 6.27 | 85.1 |
| Guanaco 🦙 | 65B | SFT | 6.41 | 71.80 |
| **Rocket** 🦝 | **3B** | **DPO** | **6.56** | **79.75** |
| Llama2-Chat 🦙 | 13B | RLHF | 6.65 | - |
| Zephyr-7B-α 🪁 | 7B | DPO | 6.88 | - |
| Vicuna v1.3 🦙 | 33B | SFT | 7.12 | 88.99 |
| WizardLM v1.0 🦙 | 70B | SFT | 7.71 | - |
| GPT-3.5-turbo | - | RLHF | 7.94 | 89.37 |

Specifically, across the various categories of the MT-Bench evaluation, Rocket-3B demonstrates impressive performance compared to larger open models such as Llama2-Chat-7B, Falcon, and Guanaco.

![MT-Bench results](https://cdn-uploads.huggingface.co/production/uploads/6501bfe0493fd9c8c2e32402/5Tv4-4w4zNKAAjiLNGu7A.png)

## MT-Bench detailed scores for first and second turn
In MT-Bench, Rocket 🦝 scores 6.99 in the first turn and 6.13 in the second, for an average of 6.56. These scores reflect the model's ability to understand and generate text at different stages of a conversation.

| Model | First turn | Second turn | Average |
|-------|------------|-------------|---------|
| **Rocket** 🦝 | **6.99** | **6.13** | **6.56** |

## AlpacaEval detailed scores
In AlpacaEval, Rocket 🦝 achieves a near-80% win rate, coupled with an average response length of 1,242 tokens, indicating its effectiveness in producing detailed responses.

| Model | Win rate (%) | Std error | Average length (tokens) |
|-------|--------------|-----------|--------------------------|
| **Rocket** 🦝 | **79.75** | **1.42** | **1242** |

## Other benchmarks
Despite its strong results on MT-Bench and AlpacaEval, the model faces some challenges on other benchmarks.

| Metric | Value |
|-----------------------|-------|
| Avg. | 52.15 |
| ARC (25-shot) | 52.82 |
| HellaSwag (10-shot) | 73.91 |
| MMLU (5-shot) | 61.07 |
| TruthfulQA (0-shot) | 57.45 |
| Winogrande (5-shot) | 63.22 |
| GSM8K (5-shot) | 12.74 |
| DROP (3-shot) | 9.66 |

## Intended uses & limitations
We first fine-tuned the model on a dataset created by merging and curating multiple datasets available on the Hugging Face Hub; this dataset will be released publicly soon. We then further improved the model with DPO, selecting samples from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) and [BAAI/JudgeLM-100K](https://huggingface.co/datasets/BAAI/JudgeLM-100K) datasets. The result is a highly effective chat model at the 3-billion-parameter scale.

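To make the DPO stage concrete, here is a minimal sketch using the `trl` library's `DPOTrainer`. The hyperparameters, toy preference data, and output path are illustrative assumptions, not the authors' actual recipe:

```python
# Illustrative DPO sketch; assumes the `trl` library is installed.
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "stabilityai/stablelm-3b-4e1t"
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)

# DPOTrainer expects "prompt", "chosen", and "rejected" columns; a real run
# would map UltraFeedback/JudgeLM-100K samples into this schema.
train_dataset = Dataset.from_dict({
    "prompt": ["<|im_start|>user\nHow are you?<|im_end|>\n<|im_start|>assistant\n"],
    "chosen": ["I'm doing well, thank you for asking!<|im_end|>"],
    "rejected": ["idk<|im_end|>"],
})

trainer = DPOTrainer(
    model,
    None,  # with ref_model=None, trl keeps a frozen copy of `model` as reference
    args=TrainingArguments(output_dir="rocket-3b-dpo", per_device_train_batch_size=1),
    beta=0.1,  # illustrative KL-penalty strength
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```
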
## Input Format
The model is trained with the ChatML format:

```
<|im_start|>system
System message here.<|im_end|>
<|im_start|>user
Your message here!<|im_end|>
<|im_start|>assistant
```

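If the tokenizer ships a ChatML chat template (an assumption worth verifying on the Hub), recent versions of 🤗 Transformers can assemble this prompt for you; otherwise, use the manual string formatting shown below:

```python
# Hypothetical alternative: build the ChatML prompt with apply_chat_template
# (requires transformers >= 4.34 and a chat template in the tokenizer config).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pansophic/rocket-3B", trust_remote_code=True)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How are you?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the ChatML layout shown above
```
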
Here's how you can run the model using 🤗 Transformers:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model = AutoModelForCausalLM.from_pretrained("pansophic/rocket-3B", trust_remote_code=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("pansophic/rocket-3B", trust_remote_code=True)
streamer = TextStreamer(tokenizer)

prompt = """<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
"""

system = "You are a helpful assistant."
user = "How many helicopters can a human eat in one sitting?"

# Apply the ChatML format
prompt = prompt.format(system=system, user=user)

# Tokenize the prompt and generate a streamed completion
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")
generated_text = model.generate(**inputs, max_length=3084, top_p=0.95, do_sample=True, temperature=0.7, use_cache=True, streamer=streamer)

# Example output:
# <|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# How many helicopters can a human eat in one sitting?<|im_end|>
# <|im_start|>assistant
# Ah, me hearty matey! But yer question be a puzzler! A human cannot eat a helicopter in one sitting, as helicopters are not edible. They be made of metal, plastic, and other materials, not food!<|im_end|>
```

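If GPU memory is limited, the same model can likely be loaded with 4-bit quantization. This is a minimal sketch, not part of the original card, assuming `bitsandbytes` is installed alongside 🤗 Transformers:

```python
# Optional 4-bit loading to reduce memory use (assumes bitsandbytes is installed).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "pansophic/rocket-3B",
    trust_remote_code=True,
    quantization_config=quant_config,
    device_map="auto",  # places layers automatically on available devices
)
```
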
## Bias, Risks, and Limitations
Unlike ChatGPT, which incorporates in-the-loop filtering of responses and is aligned during the RLHF phase for safe completions, our model lacks these features. Consequently, it may generate problematic outputs, particularly when prompted in certain ways.

The model's pretraining data is a filtered mixture of open-source large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets): the Falcon RefinedWeb extract ([Penedo et al., 2023](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)), RedPajama-Data ([Together Computer, 2023](https://github.com/togethercomputer/RedPajama-Data)) and The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)), both without the *Books3* subset, and StarCoder ([Li et al., 2023](https://arxiv.org/abs/2305.06161)).

*Model card adapted from [Zephyr Beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta/blob/main/README.md) and [Tulu-2-7B](https://huggingface.co/allenai/tulu-2-7b/blob/main/README.md)*