TheBloke / Falcon-180B-Chat-GPTQ

Text Generation · Transformers · Safetensors · falcon · text-generation-inference · 4-bit precision · GPTQ
Dataset: tiiuae/falcon-refinedweb · 4 languages · arXiv: 5 papers · License: unknown
Falcon-180B-Chat-GPTQ · 1 contributor · History: 27 commits
Latest commit f764ba0 (TheBloke, about 1 year ago): "Update base_model formatting"
| File | Size | LFS | Last commit message |
|---|---|---|---|
| .gitattributes | 1.64 kB | | GPTQ model commit (split) |
| ACCEPTABLE_USE_POLICY.txt | 613 Bytes | | Set main branch to 4bit-128g-True, sharded |
| LICENSE.txt | 15.6 kB | | GPTQ model commit |
| README.md | 22.5 kB | | Update base_model formatting |
| config.json | 1.2 kB | | Set main branch to 4bit-128g-True, sharded |
| generation_config.json | 113 Bytes | | Add sharded 4-bit GPTQ in place of split files |
| model-00001-of-00010.safetensors | 10 GB | LFS | Set main branch to 4bit-128g-True, sharded |
| model-00002-of-00010.safetensors | 9.94 GB | LFS | Set main branch to 4bit-128g-True, sharded |
| model-00003-of-00010.safetensors | 9.93 GB | LFS | Set main branch to 4bit-128g-True, sharded |
| model-00004-of-00010.safetensors | 9.69 GB | LFS | Set main branch to 4bit-128g-True, sharded |
| model-00005-of-00010.safetensors | 9.93 GB | LFS | Set main branch to 4bit-128g-True, sharded |
| model-00006-of-00010.safetensors | 9.69 GB | LFS | Set main branch to 4bit-128g-True, sharded |
| model-00007-of-00010.safetensors | 9.93 GB | LFS | Set main branch to 4bit-128g-True, sharded |
| model-00008-of-00010.safetensors | 9.69 GB | LFS | Set main branch to 4bit-128g-True, sharded |
| model-00009-of-00010.safetensors | 9.93 GB | LFS | Set main branch to 4bit-128g-True, sharded |
| model-00010-of-00010.safetensors | 5.53 GB | LFS | Set main branch to 4bit-128g-True, sharded |
| model.safetensors.index.json | 165 kB | | Set main branch to 4bit-128g-True, sharded |
| quantize_config.json | 187 Bytes | | Set main branch to 4bit-128g-True, sharded |
| special_tokens_map.json | 281 Bytes | | GPTQ model commit |
| tokenizer.json | 2.73 MB | | GPTQ model commit |
| tokenizer_config.json | 180 Bytes | | GPTQ model commit |

All files were last modified about 1 year ago.
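The ten sharded safetensors files above hold the 4-bit GPTQ weights. Summing the listed shard sizes gives the approximate disk footprint of the main branch; a quick sketch using the decimal GB figures from the listing:

```python
# Shard sizes in GB, as listed for model-00001..model-00010-of-00010.safetensors.
shard_sizes_gb = [10.0, 9.94, 9.93, 9.69, 9.93, 9.69, 9.93, 9.69, 9.93, 5.53]

total_gb = round(sum(shard_sizes_gb), 2)
print(f"Total GPTQ weights: ~{total_gb} GB across {len(shard_sizes_gb)} shards")
# → Total GPTQ weights: ~94.26 GB across 10 shards
```

So roughly 95 GB of free disk is needed for the weight shards alone, plus a few MB for the tokenizer, config, and index files.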