Add Smaug 72B

#2 opened by distantquant
distantquant changed discussion title from Add Smaug 70B to Add Smaug 72B

Tried it, but I could not quantize it myself, so I used this Q5_K_S quant. The outputs weren't good. There are two possibilities:

  1. The model is overtrained.
  2. The tokenizer is broken (a quick round-trip sanity check is sketched below).
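
If you want to rule out possibility 2 quickly, here is a minimal round-trip sanity check, assuming the `transformers` library and the `abacusai/Smaug-72B-v0.1` repo id (adjust if the repo name differs):

```python
# A minimal tokenizer round-trip sanity check; only the (small) tokenizer
# files are downloaded, not the 72B weights.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("abacusai/Smaug-72B-v0.1")  # repo id assumed

sample = "The quick brown fox jumps over the lazy dog."
ids = tok(sample)["input_ids"]
decoded = tok.decode(ids, skip_special_tokens=True)

# A healthy tokenizer should reproduce the input (modulo whitespace details).
print(decoded)
print("round-trip ok:", decoded.strip() == sample)
```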

I think it is very likely overtrained and overaligned, especially given the base model they use.

Well, Smaug 34B didn't perform well either.

Update: I quantized Smaug myself, and the issue persists. It seems to be inherited from https://huggingface.co/moreh/MoMo-72B-lora-1.8.6-DPO/discussions/7. I will run full tests later.
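
For anyone reproducing the quantization step, this is roughly the flow, sketched in Python around the llama.cpp tools. The script and binary names (`convert.py`, `./quantize`) match one llama.cpp snapshot and may have been renamed in newer checkouts; the paths are placeholders:

```python
# A minimal sketch of the llama.cpp GGUF quantization flow.
# Run from inside a llama.cpp checkout with the model snapshot on disk.
import subprocess

MODEL_DIR = "./Smaug-72B-v0.1"          # local HF snapshot directory (placeholder)
F16_GGUF = "smaug-72b-f16.gguf"
Q5_GGUF = "smaug-72b-Q5_K_S.gguf"

# Step 1: convert the HF checkpoint to an unquantized f16 GGUF file.
subprocess.run(
    ["python", "convert.py", MODEL_DIR, "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# Step 2: quantize to Q5_K_S (the quant type discussed above).
subprocess.run(["./quantize", F16_GGUF, Q5_GGUF, "Q5_K_S"], check=True)
```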

Tested it again; it's still not very good. Too overfitted.

I think I found the reason for the shitty performance...

> I think it is very likely overtrained and overaligned, especially given the base model they use.

It's not just the base model.

https://huggingface.co/datasets/abacusai/HellaSwag_DPO_FewShot

https://huggingface.co/datasets/abacusai/ARC_DPO_FewShot

They literally trained it on the test datasets. That's why it performs so badly on actual human tests.
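
A rough way to verify this kind of contamination is to check whether the benchmark test questions literally appear inside the DPO training prompts. A minimal sketch, assuming the `datasets` library; the split and column names (`train`, `prompt`) for the Abacus dataset are guesses and may differ:

```python
# A minimal sketch of an exact-overlap contamination check between the
# ARC DPO training set (linked above) and the real ARC-Challenge test split.
from datasets import load_dataset

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical strings match."""
    return " ".join(text.lower().split())

train = load_dataset("abacusai/ARC_DPO_FewShot", split="train")   # split name assumed
test = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="test")

# Concatenate all training prompts into one searchable blob.
blob = "\n".join(normalize(row["prompt"]) for row in train)       # column name assumed

hits = sum(normalize(row["question"]) in blob for row in test)
print(f"{hits}/{len(test)} test questions appear inside the DPO training prompts")
```

Even a handful of hits would explain benchmark scores inflated well beyond what the model shows in real use.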

ChuckMcSneed changed discussion status to closed
