afrideva committed
Commit 9987b29 • 1 Parent(s): d440a8f

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +103 -0
README.md ADDED
@@ -0,0 +1,103 @@
---
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
inference: false
language:
- en
license: apache-2.0
model_creator: TinyLlama
model_name: TinyLlama-1.1B-intermediate-step-955k-token-2T
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T-GGUF

Quantized GGUF model files for [TinyLlama-1.1B-intermediate-step-955k-token-2T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T) from [TinyLlama](https://huggingface.co/TinyLlama).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tinyllama-1.1b-intermediate-step-955k-token-2t.q2_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-955k-token-2T-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-955k-token-2t.q2_k.gguf) | q2_k | None |
| [tinyllama-1.1b-intermediate-step-955k-token-2t.q3_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-955k-token-2T-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-955k-token-2t.q3_k_m.gguf) | q3_k_m | None |
| [tinyllama-1.1b-intermediate-step-955k-token-2t.q4_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-955k-token-2T-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-955k-token-2t.q4_k_m.gguf) | q4_k_m | None |
| [tinyllama-1.1b-intermediate-step-955k-token-2t.q5_k_m.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-955k-token-2T-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-955k-token-2t.q5_k_m.gguf) | q5_k_m | None |
| [tinyllama-1.1b-intermediate-step-955k-token-2t.q6_k.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-955k-token-2T-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-955k-token-2t.q6_k.gguf) | q6_k | None |
| [tinyllama-1.1b-intermediate-step-955k-token-2t.q8_0.gguf](https://huggingface.co/afrideva/TinyLlama-1.1B-intermediate-step-955k-token-2T-GGUF/resolve/main/tinyllama-1.1b-intermediate-step-955k-token-2t.q8_0.gguf) | q8_0 | None |
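
The files above can be run with any GGUF-compatible runtime. As one option (not covered in the original card), here is a minimal sketch using `huggingface_hub` to fetch a file and `llama-cpp-python` to generate with it; the repo id and filename are taken from the table above, while the context length and sampling settings are illustrative assumptions.

```python
# Hedged sketch: download one GGUF file listed above and run it locally.
# llama-cpp-python is an assumption; any GGUF-compatible runtime (e.g. llama.cpp) works.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo id and filename come from the q4_k_m row of the table above.
model_path = hf_hub_download(
    repo_id="afrideva/TinyLlama-1.1B-intermediate-step-955k-token-2T-GGUF",
    filename="tinyllama-1.1b-intermediate-step-955k-token-2t.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)  # context size is an illustrative choice
output = llm("The TinyLlama project aims to", max_tokens=128, temperature=0.8)
print(output["choices"][0]["text"])
```

Any other row in the table can be substituted by changing `filename`; lower-bit quants trade some accuracy for a smaller download and memory footprint.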

## Original Model Card:
<div align="center">

# TinyLlama-1.1B
</div>

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.

<div align="center">
  <img src="./TinyLlama_logo.png" width="300"/>
</div>

We adopted exactly the same architecture and tokenizer as Llama 2, so TinyLlama can be plugged into many open-source projects built upon Llama. Moreover, with only 1.1B parameters, TinyLlama is compact enough for applications with restricted compute and memory budgets.
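
Because the architecture and tokenizer match Llama 2, standard `transformers` auto classes resolve this checkpoint to the Llama model family. A minimal sketch of that check (the printed values in the comments are what the Llama 2 setup implies, not numbers quoted from this card):

```python
# Hedged sketch: confirm the checkpoint resolves to the Llama architecture and tokenizer.
from transformers import AutoConfig, AutoTokenizer

repo = "TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T"

config = AutoConfig.from_pretrained(repo)
print(config.model_type)       # expected: "llama", i.e. the same family as Llama 2

tokenizer = AutoTokenizer.from_pretrained(repo)
print(tokenizer.vocab_size)    # expected: 32000, the Llama 2 vocabulary size
```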

#### This Model
This is an intermediate checkpoint with 955K steps and 2003B tokens.

#### Releases Schedule
We will be rolling out intermediate checkpoints following the schedule below. We also include some baseline models for comparison.

| Date | HF Checkpoint | Tokens | Step | HellaSwag Acc_norm |
|------------|-------------------------------------------------|--------|------|---------------------|
| Baseline | [StableLM-Alpha-3B](https://huggingface.co/stabilityai/stablelm-base-alpha-3b)| 800B | -- | 38.31 |
| Baseline | [Pythia-1B-intermediate-step-50k-105b](https://huggingface.co/EleutherAI/pythia-1b/tree/step50000) | 105B | 50k | 42.04 |
| Baseline | [Pythia-1B](https://huggingface.co/EleutherAI/pythia-1b) | 300B | 143k | 47.16 |
| 2023-09-04 | [TinyLlama-1.1B-intermediate-step-50k-105b](https://huggingface.co/PY007/TinyLlama-1.1B-step-50K-105b) | 105B | 50k | 43.50 |
| 2023-09-16 | -- | 500B | -- | -- |
| 2023-10-01 | -- | 1T | -- | -- |
| 2023-10-16 | -- | 1.5T | -- | -- |
| 2023-10-31 | -- | 2T | -- | -- |
| 2023-11-15 | -- | 2.5T | -- | -- |
| 2023-12-01 | -- | 3T | -- | -- |
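
The card does not say how the HellaSwag numbers were obtained. As an illustration only, here is how an acc_norm figure is commonly computed with EleutherAI's lm-evaluation-harness; the harness choice and all settings below are assumptions, not a statement about the authors' setup.

```python
# Hedged sketch: computing HellaSwag acc_norm with lm-evaluation-harness (>= 0.4).
# Harness, dtype, and few-shot setting are assumptions; they are not specified in the card.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T,dtype=float16",
    tasks=["hellaswag"],
    num_fewshot=0,
)
print(results["results"]["hellaswag"])  # the task dict includes acc and acc_norm
```

Scores vary with harness version, dtype, and batch size, so small deviations from the table are expected.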

#### How to use
You will need transformers>=4.31. Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a text-generation pipeline that loads the model in fp16 and places it automatically.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a single continuation of the prompt.
sequences = pipeline(
    'The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    repetition_penalty=1.5,
    eos_token_id=tokenizer.eos_token_id,
    max_length=500,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```