CausalLM - STOP using the wrong GPTQ model
#1 by JosephusCheung - opened
The GPTQ version is not an official model, and I believe it is a broken one because of its bad calibration process: it was calibrated on the wikitext dataset, but since the model's Wikipedia knowledge was trained on a new synthetic dataset, recalls based on the original Wikipedia text are all miscalibrated for GPTQ quantization.
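For anyone who wants to produce a properly calibrated GPTQ quantization themselves, the point is to feed the calibration step samples drawn from data matching the model's actual training distribution instead of wikitext. Below is a minimal sketch using the AutoGPTQ library; the `calib_texts` samples are hypothetical placeholders that you would replace with text matching the synthetic training set:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "CausalLM/14B"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Hypothetical calibration samples: replace with text that matches the
# model's training distribution (the synthetic dataset), NOT wikitext.
calib_texts = [
    "Example passage resembling the model's synthetic training data ...",
    "Another in-distribution calibration sample ...",
]
examples = [tokenizer(t, return_tensors="pt") for t in calib_texts]

quantize_config = BaseQuantizeConfig(
    bits=4,        # 4-bit quantization
    group_size=128,
    desc_act=False,
)

# Load the full-precision model, run GPTQ calibration on the examples,
# then save the quantized weights.
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
model.quantize(examples)
model.save_quantized("CausalLM-14B-GPTQ")
```

In practice you would use a few hundred calibration samples rather than two; the snippet only illustrates where the calibration data enters the pipeline.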
| Model | Score |
| --- | --- |
| CausalLM/14B-DPO-alpha | 53.77358490566038 |
| CausalLM/7B-DPO-alpha | 45.59748427672956 |