---
inference: false
license: other
---
# Eric Hartford's Based 30B GPTQ
These files are GPTQ 4-bit model files for [Eric Hartford's Based 30B](https://huggingface.co/ehartford/based-30b).
They are the result of quantising the model to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## Other repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/based-30B-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/based-30B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/based-30b)
## How to easily download and use this model in text-generation-webui
### Downloading the model
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/based-30B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Untick "Autoload model"
6. Click the **Refresh** icon next to **Model** in the top left.
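Alternatively, the files can be fetched outside the UI with the `huggingface_hub` library. A minimal sketch (the `local_dir` path is just an example; adjust it for your setup):

```python
# Download all files from the repo to a local directory.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/based-30B-GPTQ",
    local_dir="models/TheBloke_based-30B-GPTQ",  # example path, not required by the repo
)
```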
### To use with AutoGPTQ (if installed)
1. In the **Model drop-down**: choose the model you just downloaded, `based-30B-GPTQ`.
2. Under **GPTQ**, tick **AutoGPTQ**.
3. Click **Save settings for this model** in the top right.
4. Click **Reload the Model** in the top right.
5. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
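Outside the webui, the same files should also load via AutoGPTQ's Python API. The following is a minimal sketch, assuming AutoGPTQ and transformers are installed; the prompt is only an illustration, and `model_basename` must match the `.safetensors` filename (without extension) listed under "Provided files" below:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name = "TheBloke/based-30B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)

# Load the 4-bit GPTQ checkpoint; model_basename is the safetensors
# filename without its extension.
model = AutoGPTQForCausalLM.from_quantized(
    model_name,
    model_basename="based-30b-GPTQ-4bit--1g.act.order",
    use_safetensors=True,
    device="cuda:0",
)

prompt = "USER: Tell me about yourself.\nASSISTANT:"  # example prompt only
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0]))
```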
### To use with GPTQ-for-LLaMa
1. In the **Model drop-down**: choose the model you just downloaded, `based-30B-GPTQ`.
2. If you see an error in the bottom right, ignore it; it's temporary.
3. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = -1`, `model_type = Llama`
4. Click **Save settings for this model** in the top right.
5. Click **Reload the Model** in the top right.
6. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
## Provided files
**based-30b-GPTQ-4bit--1g.act.order.safetensors**
This will work with all versions of GPTQ-for-LLaMa, and with AutoGPTQ.
It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible.
* `based-30b-GPTQ-4bit--1g.act.order.safetensors`
* Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
* Works with AutoGPTQ
* Works with text-generation-webui one-click-installers
* Parameters: Groupsize = -1. Act Order / desc_act = True.
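If you load the file from code and AutoGPTQ cannot find a `quantize_config.json` in the repo, the parameters listed above can be supplied explicitly. A sketch of how they map onto AutoGPTQ's `BaseQuantizeConfig`, not a definitive loader:

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# Mirror the parameters listed above: 4-bit, no grouping, act-order enabled.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=-1,
    desc_act=True,
)

model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/based-30B-GPTQ",
    model_basename="based-30b-GPTQ-4bit--1g.act.order",
    use_safetensors=True,
    quantize_config=quantize_config,
    device="cuda:0",
)
```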
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/UBgz4VXf)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Patreon special mentions**: Aemon Algiz; Dmitiry Samsonov; Jonathan Leane; Illia Dulskyi; Khalefa Al-Ahmad; Nikolai Manek; senxiiz; Talal Aujan; vamX; Eugene Pentland; Lone Striker; Luke Pendergrass; Johann-Peter Hartmann.
Thank you to all my generous patrons and donors.
# Original model card: Eric Hartford's Based 30B