Tags: Text Generation · Transformers · Safetensors · English · llama · causal-lm · text-generation-inference · 4-bit precision · gptq
Commit e6a3a2f by TheBloke (1 parent: d1ac45d)

Update for Transformers GPTQ support
README.md CHANGED
@@ -12,17 +12,20 @@ datasets:
 inference: false
 ---
 <!-- header start -->
-<div style="width: 100%;">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # StableVicuna-13B-GPTQ
@@ -125,11 +128,12 @@ The above commands assume you have installed all dependencies for GPTQ-for-LLaMa
 If you can't update GPTQ-for-LLaMa or don't want to, you can use `stable-vicuna-13B-GPTQ-4bit.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
 
-[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
+[TheBloke AI's Discord server](https://discord.gg/theblokeai)
 
 ## Thanks, and how to contribute.
 
@@ -144,9 +148,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Patreon special mentions**: Aemon Algiz, Dmitriy Samsonov, Nathan LeClaire, Trenton Dambrowitz, Mano Prime, David Flickinger, vamX, Nikolai Manek, senxiiz, Khalefa Al-Ahmad, Illia Dulskyi, Jonathan Leane, Talal Aujan, V. Lukas, Joseph William Delisle, Pyrater, Oscar Rangel, Lone Striker, Luke Pendergrass, Eugene Pentland, Sebastain Graf, Johann-Peter Hartman.
+**Special thanks to**: Aemon Algiz.
+
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
+
 
 Thank you to all my generous patrons and donaters!
+
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 # Original StableVicuna-13B model card
 
config.json CHANGED
@@ -20,5 +20,13 @@
   "torch_dtype": "float16",
   "transformers_version": "4.28.1",
   "use_cache": true,
-  "vocab_size": 32001
-}
+  "vocab_size": 32001,
+  "quantization_config": {
+    "bits": 4,
+    "damp_percent": 0.01,
+    "desc_act": false,
+    "group_size": 128,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
+}
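This `quantization_config` block is what lets recent versions of Transformers recognise the checkpoint as GPTQ-quantized and load it directly, instead of requiring a separate GPTQ loader. A minimal stdlib-only sketch of reading the block added above (illustrative sanity checks only, not Transformers' actual loading code):

```python
import json

# The config.json fragment as it looks after this commit.
config = json.loads("""
{
  "torch_dtype": "float16",
  "use_cache": true,
  "vocab_size": 32001,
  "quantization_config": {
    "bits": 4,
    "damp_percent": 0.01,
    "desc_act": false,
    "group_size": 128,
    "model_file_base_name": "model",
    "quant_method": "gptq"
  }
}
""")

qc = config["quantization_config"]

# Values mirror the diff above:
assert qc["quant_method"] == "gptq"  # marks the checkpoint as GPTQ-quantized
assert qc["bits"] == 4               # 4-bit weights
assert qc["group_size"] == 128       # one scale/zero pair per 128 weights
assert qc["desc_act"] is False       # no act-order, i.e. the "compat" variant
print(f"GPTQ: {qc['bits']}-bit, group size {qc['group_size']}")
```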
stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors → model.safetensors RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:442d71b56bc16721d28aeb2d5e0ba07cf04bfb61cc7af47993d5f0a15133b520
-size 7255179696
+oid sha256:7fca2a7f47b506df5a623e40c5d7f2d0efde349f27277b56a3e5a4a23e70401a
+size 7255179752
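The weights file is stored via Git LFS, so the rename above only changes the three-line pointer file (spec version, SHA-256 oid, byte size), not a 7 GB binary payload in git history. A small stdlib-only sketch of parsing such a pointer, using the post-rename `model.safetensors` pointer shown above:

```python
# A Git LFS pointer file: "key value" lines, one per line.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:7fca2a7f47b506df5a623e40c5d7f2d0efde349f27277b56a3e5a4a23e70401a
size 7255179752
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

ptr = parse_lfs_pointer(POINTER)
algo, _, digest = ptr["oid"].partition(":")

assert ptr["version"] == "https://git-lfs.github.com/spec/v1"
assert algo == "sha256" and len(digest) == 64
print(f"{int(ptr['size']):,} bytes, {algo} digest {digest[:12]}...")
```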
quantize_config.json CHANGED
@@ -2,5 +2,6 @@
   "bits": 4,
   "damp_percent": 0.01,
   "desc_act": false,
-  "group_size": 128
+  "group_size": 128,
+  "model_file_base_name": "model"
 }
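Adding `model_file_base_name` is what makes the safetensors rename necessary: AutoGPTQ-style loaders use this field to locate the quantized weights file. A sketch of that resolution, where the `<base name> + extension` rule is an assumption about the loader's behaviour rather than code from this repo:

```python
import json

# quantize_config.json as it looks after this commit.
quantize_config = json.loads("""
{
  "bits": 4,
  "damp_percent": 0.01,
  "desc_act": false,
  "group_size": 128,
  "model_file_base_name": "model"
}
""")

def weights_filename(cfg: dict, use_safetensors: bool = True) -> str:
    # Hypothetical resolution rule: "<model_file_base_name>.safetensors"
    # (or ".bin" for a torch checkpoint).
    base = cfg["model_file_base_name"]
    return base + (".safetensors" if use_safetensors else ".bin")

# Matches the rename in this commit: the file is now model.safetensors.
assert weights_filename(quantize_config) == "model.safetensors"
print(weights_filename(quantize_config))
```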