---
base_model: stabilityai/stablelm-2-zephyr-1_6b
datasets:
- HuggingFaceH4/ultrachat_200k
- allenai/ultrafeedback_binarized_cleaned
- meta-math/MetaMathQA
- WizardLM/WizardLM_evol_instruct_V2_196k
- openchat/openchat_sharegpt4_dataset
- LDJnr/Capybara
- Intel/orca_dpo_pairs
- hkust-nlp/deita-10k-v0
extra_gated_fields:
  Country: text
  Email: text
  I ALLOW Stability AI to email me about new model releases: checkbox
  Name: text
  Organization or Affiliation: text
inference: false
language:
- en
license: other
model_creator: stabilityai
model_name: stablelm-2-zephyr-1_6b
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- causal-lm
- gguf
- ggml
- quantized
- q2_k
- q3_k_xs
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# stabilityai/stablelm-2-zephyr-1_6b-GGUF

Quantized GGUF model files for [stablelm-2-zephyr-1_6b](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b) from [stabilityai](https://huggingface.co/stabilityai).


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [stablelm-2-zephyr-1_6b.fp16.gguf](https://huggingface.co/afrideva/stablelm-2-zephyr-1_6b-GGUF/resolve/main/stablelm-2-zephyr-1_6b.fp16.gguf) | fp16 | 3.29 GB  |
| [stablelm-2-zephyr-1_6b.q2_k.gguf](https://huggingface.co/afrideva/stablelm-2-zephyr-1_6b-GGUF/resolve/main/stablelm-2-zephyr-1_6b.q2_k.gguf) | q2_k | 694.16 MB  |
| [stablelm-2-zephyr-1_6b.q3_k_xs.gguf](https://huggingface.co/afrideva/stablelm-2-zephyr-1_6b-GGUF/resolve/main/stablelm-2-zephyr-1_6b.q3_k_xs.gguf) | q3_k_xs | 757.97 MB  |
| [stablelm-2-zephyr-1_6b.q3_k_m.gguf](https://huggingface.co/afrideva/stablelm-2-zephyr-1_6b-GGUF/resolve/main/stablelm-2-zephyr-1_6b.q3_k_m.gguf) | q3_k_m | 857.71 MB  |
| [stablelm-2-zephyr-1_6b.q4_k_m.gguf](https://huggingface.co/afrideva/stablelm-2-zephyr-1_6b-GGUF/resolve/main/stablelm-2-zephyr-1_6b.q4_k_m.gguf) | q4_k_m | 1.03 GB  |
| [stablelm-2-zephyr-1_6b.q5_k_m.gguf](https://huggingface.co/afrideva/stablelm-2-zephyr-1_6b-GGUF/resolve/main/stablelm-2-zephyr-1_6b.q5_k_m.gguf) | q5_k_m | 1.19 GB  |
| [stablelm-2-zephyr-1_6b.q6_k.gguf](https://huggingface.co/afrideva/stablelm-2-zephyr-1_6b-GGUF/resolve/main/stablelm-2-zephyr-1_6b.q6_k.gguf) | q6_k | 1.35 GB  |
| [stablelm-2-zephyr-1_6b.q8_0.gguf](https://huggingface.co/afrideva/stablelm-2-zephyr-1_6b-GGUF/resolve/main/stablelm-2-zephyr-1_6b.q8_0.gguf) | q8_0 | 1.75 GB  |
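
The table above lists the available quants. As a minimal sketch of how one of these files could be run locally (assuming the `llama-cpp-python` bindings are installed and the q4_k_m file has been downloaded to the working directory; paths and sampling settings are illustrative):

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the q4_k_m file
# from the table above sits in the current directory.
from llama_cpp import Llama

llm = Llama(
    model_path="stablelm-2-zephyr-1_6b.q4_k_m.gguf",
    n_ctx=4096,  # requested context window
)

# Build the prompt in the model's chat format (see the original card below).
prompt = (
    "<|user|>\n"
    "Which famous math number begins with 1.6 ...?<|endoftext|>\n"
    "<|assistant|>\n"
)

out = llm(prompt, max_tokens=256, stop=["<|endoftext|>"], temperature=0.5)
print(out["choices"][0]["text"])
```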



## Original Model Card:
# `StableLM 2 Zephyr 1.6B`

## Model Description

`Stable LM 2 Zephyr 1.6B` is a 1.6 billion parameter instruction-tuned language model inspired by [HuggingFaceH4's Zephyr 7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) training pipeline. The model is trained on a mix of publicly available and synthetic datasets, utilizing [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290).
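
For reference, the DPO objective from the linked paper is shown below (with `pi_theta` the policy being fine-tuned, `pi_ref` the frozen reference model, `(x, y_w, y_l)` a prompt paired with its chosen and rejected responses, and `beta` a scaling hyperparameter):

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$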

## Usage

`StableLM 2 Zephyr 1.6B` uses the following instruction format:
```
<|user|>
Which famous math number begins with 1.6 ...?<|endoftext|>
<|assistant|>
The number you are referring to is 1.618033988749895. This is the famous value known as the golden ratio<|endoftext|>
```

This format is also available through the tokenizer's `apply_chat_template` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model; trust_remote_code is needed for the StableLM 2 code on the Hub.
tokenizer = AutoTokenizer.from_pretrained('stabilityai/stablelm-2-zephyr-1_6b', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    'stabilityai/stablelm-2-zephyr-1_6b',
    trust_remote_code=True,
    device_map="auto"
)

# Render the chat messages into the <|user|>/<|assistant|> format shown above.
prompt = [{'role': 'user', 'content': 'Which famous math number begins with 1.6 ...?'}]
inputs = tokenizer.apply_chat_template(
    prompt,
    add_generation_prompt=True,
    return_tensors='pt'
)

# Sample a completion of up to 1024 new tokens.
tokens = model.generate(
    inputs.to(model.device),
    max_new_tokens=1024,
    temperature=0.5,
    do_sample=True
)

print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `StableLM 2 Zephyr 1.6B` is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: English
* **Library**: [Alignment Handbook](https://github.com/huggingface/alignment-handbook.git)
* **Finetuned from model**: [stabilityai/stablelm-2-1_6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)
* **License**: [StabilityAI Non-Commercial Research Community License](https://huggingface.co/stabilityai/stablelm-2-zephyr-1_6b/blob/main/LICENSE). If you want to use this model for your commercial products or purposes, please contact us [here](https://stability.ai/contact) to learn more.
* **Contact**: For questions and comments about the model, please email `[email protected]`

### Training Dataset

The dataset comprises a mixture of open, large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets):
1. SFT Datasets:
- HuggingFaceH4/ultrachat_200k
- meta-math/MetaMathQA
- WizardLM/WizardLM_evol_instruct_V2_196k
- Open-Orca/SlimOrca
- openchat/openchat_sharegpt4_dataset
- LDJnr/Capybara
- hkust-nlp/deita-10k-v0

2. Preference Datasets:
- allenai/ultrafeedback_binarized_cleaned
- Intel/orca_dpo_pairs

## Performance

### MT-Bench

<img src="https://cdn-uploads.huggingface.co/production/uploads/61b2bf4f5b1f7cad1799cfbb/QH00HVM3lg-5f17U_py4K.png" alt="mt_bench_plot" width="600"/>

| Model                   | Size | MT-Bench |
|-------------------------|------|----------|
| Mistral-7B-Instruct-v0.2| 7B   | 7.61     |
| Llama2-Chat             | 70B  | 6.86     |
| stablelm-zephyr-3b      | 3B   | 6.64     |
| MPT-30B-Chat            | 30B  | 6.39     |
| **stablelm-2-zephyr-1.6b**  | 1.6B | 5.42     |
| Falcon-40B-Instruct     | 40B  | 5.17     |
| Qwen-1.8B-Chat          | 1.8B | 4.95     |
| dolphin-2.6-phi-2       | 2.7B | 4.93     |
| phi-2                   | 2.7B | 4.29     |
| TinyLlama-1.1B-Chat-v1.0| 1.1B | 3.46     |

### OpenLLM Leaderboard

| Model                                  | Size | Average | ARC Challenge (acc_norm) | HellaSwag (acc_norm) | MMLU (acc_norm) | TruthfulQA (mc2) | Winogrande (acc) | Gsm8k (acc) |
|----------------------------------------|------|---------|-------------------------|----------------------|-----------------|------------------|------------------|-------------|
| microsoft/phi-2                        | 2.7B | 61.32%  | 61.09%                  | 75.11%               | 58.11%          | 44.47%           | 74.35%           | 54.81%      |
| **stabilityai/stablelm-2-zephyr-1_6b**     | 1.6B | 49.89%  | 43.69%                  | 69.34%               | 41.85%          | 45.21%           | 64.09%           | 35.18%      |
| microsoft/phi-1_5                      | 1.3B | 47.69%  | 52.90%                  | 63.79%               | 43.89%          | 40.89%           | 72.22%           | 12.43%      |
| stabilityai/stablelm-2-1_6b            | 1.6B | 45.54%  | 43.43%                  | 70.49%               | 38.93%          | 36.65%           | 65.90%           | 17.82%      |
| mosaicml/mpt-7b                        | 7B   | 44.28%  | 47.70%                  | 77.57%               | 30.80%          | 33.40%           | 72.14%           | 4.02%       |
| KnutJaegersberg/Qwen-1_8B-Llamaified*  | 1.8B | 44.75%  | 37.71%                  | 58.87%               | 46.37%          | 39.41%           | 61.72%           | 24.41%      |
| openlm-research/open_llama_3b_v2       | 3B   | 40.28%  | 40.27%                  | 71.60%               | 27.12%          | 34.78%           | 67.01%           | 0.91%       |
| tiiuae/falcon-rw-1b                    | 1B   | 37.07%  | 35.07%                  | 63.56%               | 25.28%          | 35.96%           | 62.04%           | 0.53%       |
| TinyLlama/TinyLlama-1.1B-3T            | 1.1B | 36.40%  | 33.79%                  | 60.31%               | 26.04%          | 37.32%           | 59.51%           | 1.44%       |



### Training Infrastructure

* **Hardware**: `StableLM 2 Zephyr 1.6B` was trained on the Stability AI cluster across 8 nodes with 8 A100 80GB GPUs each.
* **Code Base**: We used our internal script for the SFT steps and the [HuggingFace Alignment Handbook script](https://github.com/huggingface/alignment-handbook) for DPO training.

## Use and Limitations

### Intended Use

The model is intended to be used in chat-like applications. Developers must evaluate the model for safety performance in their specific use case. Read more about [safety and limitations](#limitations-and-bias) below.

### Limitations and Bias

This model is not trained against adversarial inputs. We strongly recommend pairing this model with an input and output classifier to prevent harmful responses.

Through our internal red teaming, we discovered that while the model will not output harmful information if not prompted to do so, it will hallucinate many facts. It is also willing to produce potentially harmful outputs or misinformation when the user requests it.
Using this model will require guardrails around your inputs and outputs to ensure that any outputs returned are not misinformation or harmful.
Additionally, as each use case is unique, we recommend running your own suite of tests to ensure proper performance of this model.
Finally, do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.
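
As an illustration only (not part of the original card), such a guardrail could be a thin wrapper around generation; `classify_input` and `classify_output` below are hypothetical placeholders for whatever moderation classifiers a deployer chooses:

```python
# Illustrative sketch of wrapping generation with input/output checks.
# `classify_input` and `classify_output` are hypothetical stand-ins for
# real moderation classifiers; they return True when content is acceptable.
def guarded_generate(generate_fn, user_message, classify_input, classify_output):
    if not classify_input(user_message):   # screen the request before it reaches the model
        return "Sorry, I can't help with that."
    response = generate_fn(user_message)   # call the underlying model
    if not classify_output(response):      # screen the model's answer before returning it
        return "Sorry, I can't help with that."
    return response
```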


## How to Cite

```bibtex
@misc{StableLM-2-1.6B,
      url={https://huggingface.co/stabilityai/stablelm-2-1.6b},
      title={Stable LM 2 1.6B},
      author={Stability AI Language Team}
}
```