Add Smaug 72B
Tried it. I couldn't quantize it myself at first, so I used an existing Q5_K_S quant. The outputs weren't good. There are two possible explanations:
- The model is overtrained.
- The tokenizer is broken (the round-trip check sketched below can rule this out).
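To rule the tokenizer option in or out, a quick round-trip sanity check on the HF-side tokenizer is usually enough. A minimal sketch, assuming the repo ID is the official abacusai/Smaug-72B-v0.1 (point it at a local path otherwise):

```python
from transformers import AutoTokenizer

# Repo ID is an assumption; swap in a local path if you have the weights.
tok = AutoTokenizer.from_pretrained("abacusai/Smaug-72B-v0.1", trust_remote_code=True)

sample = "The quick brown fox jumps over the lazy dog. 1234 éàü"
ids = tok.encode(sample, add_special_tokens=False)
roundtrip = tok.decode(ids)

# Exact round-trip is not guaranteed for every tokenizer (some normalize
# whitespace), but gross breakage shows up immediately here.
print("ids:", ids[:10], "...")
print("round-trip ok:", roundtrip == sample)
print("special tokens:", tok.bos_token, tok.eos_token)
```

If the round-trip survives and special tokens look sane, the tokenizer hypothesis gets much weaker.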
I think it is very likely overtrained and over-aligned, especially given the base model they use.
Well, Smaug 34B didn't perform well either.
Update: I quantized Smaug myself and the issue persists. It seems to be inherited from the base model: https://huggingface.co/moreh/MoMo-72B-lora-1.8.6-DPO/discussions/7 . Will run full tests later.
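For reference, the usual llama.cpp route for producing a Q5_K_S quant looks roughly like this. A sketch with illustrative file names; the converter script and quantize binary have been renamed across llama.cpp versions, so adjust to your checkout:

```python
import subprocess

# Paths are illustrative; adjust to your llama.cpp checkout and model dir.
MODEL_DIR = "Smaug-72B-v0.1"
F16_OUT = "smaug-72b-f16.gguf"
Q5_OUT = "smaug-72b-Q5_K_S.gguf"

# 1) Convert the HF checkpoint to GGUF. Older llama.cpp checkouts ship
#    convert.py instead of convert_hf_to_gguf.py.
subprocess.run(
    ["python", "convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", F16_OUT, "--outtype", "f16"],
    check=True,
)

# 2) Quantize to Q5_K_S. The binary is `quantize` in older checkouts and
#    `llama-quantize` in newer ones.
subprocess.run(["./llama-quantize", F16_OUT, Q5_OUT, "Q5_K_S"], check=True)
```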
Tested it again; it's still not very good. Too overfitted.
I think I found the reasons for such shitty performance...
> I think it is very likely overtrained and over-aligned, especially given the base model they use.

It's not just the base model.
- https://huggingface.co/datasets/abacusai/HellaSwag_DPO_FewShot
- https://huggingface.co/datasets/abacusai/ARC_DPO_FewShot
They literally trained it on the benchmark test sets. That's why it's so shit on actual human tests.
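The overlap is easy to check: count how many ARC-Challenge test questions show up verbatim inside the DPO prompts. A rough sketch; the split and column names for the abacusai dataset are assumptions, so check the dataset card first:

```python
from datasets import load_dataset

# Split and column names are assumptions: DPO datasets usually ship
# prompt/chosen/rejected columns, but verify against the dataset card.
dpo = load_dataset("abacusai/ARC_DPO_FewShot", split="train")
arc_test = load_dataset("allenai/ai2_arc", "ARC-Challenge", split="test")

def norm(s: str) -> str:
    return " ".join(s.lower().split())

test_questions = {norm(q) for q in arc_test["question"]}

# Count DPO prompts that contain a test question verbatim (after whitespace
# and case normalization).
hits = sum(
    any(q in norm(row["prompt"]) for q in test_questions)
    for row in dpo
)
print(f"{hits}/{len(dpo)} DPO prompts contain an ARC-Challenge test question")
```

A high hit rate would confirm direct test-set contamination; a zero hit rate would suggest the DPO data was built from the train/validation splits instead.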
LOL