bloc97 committed
Commit e9a6f82
1 Parent(s): 9e9b98b

Create README.md

Files changed (1):
1. README.md +51 -0
README.md ADDED
---
datasets:
- pg19
metrics:
- perplexity
library_name: transformers
---
# Model Card: Nous-Yarn-Llama-2-13b-64k

## Model Description

Nous-Yarn-Llama-2-13b-64k is a state-of-the-art language model for long context, further pretrained on long-context data for 400 steps.
This model is the Flash Attention 2 patched version of the original model: https://huggingface.co/conceptofmind/Yarn-Llama-2-13b-64k

Note that this model **requires** the [Flash Attention library](https://pypi.org/project/flash-attn/) in order to function correctly; see the Usage and Prompt Format section for installation instructions.

## Model Training

Starting from the base Llama 2 models, this model was further pretrained on a subset of the PG19 dataset, allowing it to effectively utilize up to 64k tokens of context.
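
As a quick sanity check, the extended context window can be read from the model configuration. This is a minimal sketch, not an official snippet: the repo id below is the original checkpoint linked above, used purely for illustration, and `trust_remote_code=True` is assumed to be needed in case the repository ships a custom YaRN configuration class.

```python
from transformers import AutoConfig

# Illustrative repo id (the original checkpoint linked above); the FA2-patched
# weights may live under a different repository.
config = AutoConfig.from_pretrained(
    "conceptofmind/Yarn-Llama-2-13b-64k",
    trust_remote_code=True,  # assumption: may be required for a custom config class
)

# For a 64k-context model this is expected to report roughly 65536.
print(config.max_position_embeddings)
```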

## Collaborators

- [bloc97](https://github.com/bloc97): Methods, Paper and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, Paper and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model Training
- [honglu2875](https://github.com/honglu2875): Paper and evals

The authors would like to thank Stability AI, Carper AI, and EleutherAI for their generous support of significant computing resources that enabled the training of these models and the completion of this research. We would also like to thank Jonathan Tow and Dakota Mahan directly for their help in advising on the use of the Stability AI compute cluster. Additionally, we would like to thank a16z and PygmalionAI for providing resources to run evaluations and experiments on the models.

## Usage and Prompt Format

Install Flash Attention 2 (FA2) and the rotary embedding extension:
```
# Flash Attention 2 kernels
pip install flash-attn --no-build-isolation
# Rotary embedding extension from the Flash Attention repository
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```

There are no specific prompt formats as this is a pretrained base model.
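
For example, a raw text prompt can be completed directly. The following is a minimal sketch, not the official usage snippet: the repo id is a placeholder, and `trust_remote_code=True` is assumed to be needed so that the Flash Attention 2 patched modeling code is loaded.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; substitute the repository where this model is actually hosted.
repo_id = "conceptofmind/Yarn-Llama-2-13b-64k"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit on the available GPU(s)
    device_map="auto",
    trust_remote_code=True,      # assumption: required for the Flash Attention 2 patch
)

# Plain-text prompt; no chat or instruction template is applied.
prompt = "Once upon a time, in a land far away,"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```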

## Benchmark Results

TODO

## Future Plans

We plan to continue training when more compute is available, and to improve the dataset and/or instruct-tune the models in order to improve long-context performance even further.

## Model Usage

The model is available for download on HuggingFace.
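
For instance, the weights can be fetched with `huggingface_hub`. This is a minimal sketch; the repo id is a placeholder for wherever this model is published.

```python
from huggingface_hub import snapshot_download

# Placeholder repo id; replace with the actual repository of the FA2-patched model.
local_dir = snapshot_download(repo_id="conceptofmind/Yarn-Llama-2-13b-64k")

# Local path containing the downloaded model and tokenizer files.
print(local_dir)
```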