What changed for people using this model in English?
#3 · by migueltalka · opened
I get the impression that this model performs better on non-English tasks, but for everything else you might as well keep using the base June 128k-context model, right?
We added PPO training, and we see greatly improved chat capabilities.
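For readers unfamiliar with PPO: it optimizes a clipped surrogate objective that limits how far each update can move the policy from the one that generated the data. This is a minimal, hedged sketch of that per-sample objective only, not the team's actual training code (which would run on full model logits with a reward model and KL control):

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    Returns the per-sample loss to minimize (negative of the
    pessimistic, clipped policy gain).
    """
    ratio = math.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantage
    # Clip the ratio to [1 - eps, 1 + eps] so one update
    # cannot move the policy too far from the old policy.
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * advantage
    return -min(unclipped, clipped)

# With a positive advantage and a large ratio, the gain is capped
# at (1 + eps) * advantage; the extra probability mass earns nothing.
print(ppo_clip_loss(0.5, 0.0, 1.0))   # ratio ≈ 1.65, clipped to 1.2
print(ppo_clip_loss(-0.5, 0.0, -1.0)) # ratio ≈ 0.61, clipped to 0.8
```

The clipping is what makes PPO updates conservative enough to preserve the base model's behavior while steering it toward preferred (here, chat-style) responses.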
@ykim362 Did you already submit this model to the Open LLM Leaderboard?
https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard
nguyenbh changed discussion status to closed