Gemma-2-2B-it-4Bit-GPTQ

Quantization

  • This model was quantized with the AutoGPTQ library on a calibration dataset containing English and Russian Wikipedia articles. It achieves lower perplexity on Russian data than other GPTQ models.
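GPTQ stores weights as low-bit integers with per-group scales and zero points, which is why the tensor types below list both I32 (packed quantized weights) and FP16 (scales). A minimal NumPy sketch of the group-wise quantize/dequantize round trip, illustrating the arithmetic only and not AutoGPTQ's actual packed kernels:

```python
import numpy as np

def quantize_groupwise(w, group_size=128, bits=4):
    """Round-to-nearest asymmetric quantization, one scale/zero per group."""
    qmax = 2**bits - 1
    w = w.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / qmax          # per-group step size
    zero = np.round(-wmin / scale)        # per-group zero point
    q = np.clip(np.round(w / scale) + zero, 0, qmax).astype(np.int32)
    return q, scale.astype(np.float16), zero

def dequantize(q, scale, zero):
    """Reconstruct approximate FP32 weights from 4-bit codes."""
    return (q - zero) * scale.astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(size=(2, 256)).astype(np.float32)
q, scale, zero = quantize_groupwise(w.ravel())
w_hat = dequantize(q, scale, zero).reshape(w.shape)
```

The reconstruction error per weight is bounded by roughly half a quantization step, which is the trade-off a good calibration set (here, English and Russian text) is meant to minimize.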
Safetensors

Model size: 861M params
Tensor types: I32 · FP16

Model tree for qilowoq/gemma-2-2B-it-4Bit-GPTQ

Base model: google/gemma-2-2b (quantized)