---
license: apache-2.0
language:
- en
---

## StripedHyena-Nous-7B (SH-N 7B)

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/62a1306bbe7fa896d2c8de44/Bfjh77emDsWOY-VmfvU9C.png" width="60%" />
</p>

### About 

One of the focus areas at Together Research is new architectures for long context, with improved training and inference performance over the Transformer architecture. Spinning out of a research program from our team and academic collaborators, with roots in **signal processing-inspired sequence models**, we are excited to introduce the **StripedHyena** models. StripedHyena is the **first alternative model competitive with the best open-source Transformers** of similar sizes in short and long-context evaluations.

**StripedHyena-Nous-7B (SH-N 7B)** is our **chat model** for this release, and was developed with our collaborators at [Nous Research](https://nousresearch.com/).

- Read more here in [our blog](https://www.together.ai/blog/stripedhyena-7b).
- Play with the model on our [playground](https://api.together.xyz/playground/chat/togethercomputer/StripedHyena-Nous-7B)!
- Dive into the details of our [standalone implementation](https://github.com/togethercomputer/stripedhyena), and our related research: [1](https://arxiv.org/abs/2302.10866), [2](https://arxiv.org/abs/2310.18780), [3](https://arxiv.org/abs/2311.05908).

### Model Architecture

StripedHyena is a hybrid architecture composed of multi-head, grouped-query attention and gated convolutions arranged in [Hyena](https://arxiv.org/abs/2302.10866) blocks, differing from traditional decoder-only Transformers.
  - Constant-memory decoding in Hyena blocks via representation of convolutions as state-space models (modal or canonical form) or as truncated filters (see the sketch after this list).
  - Lower latency, faster decoding, and higher throughput than Transformers.
  - Improved training and inference-optimal scaling laws, compared to optimized Transformer architectures such as Llama-2.
  - Trained on sequences of up to 32k tokens, allowing it to process longer prompts.
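
To make the constant-memory decoding concrete, below is a minimal, hypothetical sketch of decoding a long convolution in modal state-space form: the filter is parameterized by `poles` and `residues` (the same quantities referenced in the disclaimer below), and each step updates a fixed-size state instead of attending over a growing cache. This illustrates the technique only, not the repository's actual implementation.

```python
import torch

# Hypothetical modal-form illustration: a long convolution with impulse
# response h[k] = sum_i residues[i] * poles[i]**k can be decoded with a
# fixed-size state, instead of caching the whole past sequence.
d_state = 16
poles = 0.99 * torch.rand(d_state, dtype=torch.float32)  # modes, |pole| < 1 for stability
residues = torch.randn(d_state, dtype=torch.float32)     # per-mode output weights (float32)

def decode_step(state: torch.Tensor, u_t: torch.Tensor):
    """One decoding step: x_t = poles * x_{t-1} + u_t, then y_t = residues . x_t."""
    state = poles * state + u_t
    y_t = (residues * state).sum()
    return state, y_t

# Memory stays O(d_state) no matter how long the sequence grows.
state = torch.zeros(d_state, dtype=torch.float32)
for u_t in torch.randn(64):
    state, y_t = decode_step(state, u_t)
```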

### Prompt Format

StripedHyena-Nous-7B uses this prompt format:

```
### Instruction:\n{prompt}\n\n### Response:\n{response}
```
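
For instance, a single-turn prompt can be assembled in Python like so (a minimal sketch; the model generates the text after `### Response:\n`):

```python
def format_prompt(instruction: str) -> str:
    """Wrap a user instruction in the SH-N 7B template; the model completes the response."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = format_prompt("Summarize the StripedHyena architecture in one sentence.")
```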

### Disclaimer 

To use StripedHyena outside of the playground, you will need to install custom kernels. Please follow the instructions from the [standalone repository](https://github.com/togethercomputer/stripedhyena).

StripedHyena is a mixed precision model. Make sure to keep your `poles` and `residues` in `float32` precision, especially for longer prompts or training.
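
As a minimal sketch, and assuming the usual Hugging Face loading path for repositories with custom modeling code (hence `trust_remote_code=True`), loading and generation might look like the following; exact dtype handling for `poles` and `residues` follows the note above and the standalone repository's instructions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "togethercomputer/StripedHyena-Nous-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # mixed precision overall; keep poles/residues in float32
    trust_remote_code=True,       # the architecture is defined by this repo's custom code
    device_map="auto",
)

prompt = "### Instruction:\nWhat is a state-space model?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```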


## Cite

If you have found the pretrained models or architecture useful for your research or application, consider citing:
```
@software{stripedhyena,
  title        = {{StripedHyena: Moving Beyond Transformers with Hybrid Signal Processing Models}},
  author       = { Poli, Michael and Wang, Jue and Massaroli, Stefano and Quesnelle, Jeffrey and Carlow, Ryan and Nguyen, Eric and Thomas, Armin},
  month        = 12,
  year         = 2023,
  url          = { https://github.com/togethercomputer/stripedhyena },
  doi          = { 10.57967/hf/1595 },
}
```