What models will be used for version 1.2?

#1 opened by AS1200

I would like to suggest using GOAT-70B-Storytelling - https://huggingface.co/GOAT-AI/GOAT-70B-Storytelling
This is just my personal wish. I'm not forcing anyone to do this, and I definitely won't find out where you live. Thank you for your attention.

I used GOAT in one of my 103b merges and it turned out alright. I'll definitely consider making a 120b 1.2 with GOAT, but I am a bit hesitant to make more merges considering that llama 3 is (probably) just around the corner :)
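For anyone curious how a 120b gets built out of 70b models, here is a rough sketch of the usual layer-interleaving idea. The layer ranges and the second donor model below are purely illustrative, not an actual recipe for 1.2:

```python
# Illustrative only: how ~120B "frankenmerges" are typically assembled by
# stacking overlapping layer ranges from two 80-layer 70B donor models.

MODEL_A = "GOAT-AI/GOAT-70B-Storytelling"  # the model suggested above
MODEL_B = "hypothetical-second-70b-donor"  # placeholder, not a real repo

# Alternating, overlapping slices; the overlap is what grows the stack
# from 80 layers to ~137.
slices = [
    (MODEL_A, (0, 16)), (MODEL_B, (8, 24)),
    (MODEL_A, (17, 32)), (MODEL_B, (25, 40)),
    (MODEL_A, (33, 48)), (MODEL_B, (41, 56)),
    (MODEL_A, (49, 64)), (MODEL_B, (57, 72)),
    (MODEL_A, (65, 80)),
]

layers = sum(end - start for _, (start, end) in slices)
# A Llama-2 70B layer holds roughly 0.85B parameters, so ~137 stacked
# layers lands in the ~120B range once embeddings are included.
print(f"{layers} layers, roughly {layers * 0.85:.0f}B parameters")
```

In practice this kind of passthrough stacking is what merge tooling automates; picking the slice boundaries is where most of the experimentation happens.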

I have a very important question, at least for me. Should we expect Llama 3 to be similar in resource consumption to the previous iterations, Llama 1 and 2? I would like to use Llama 3 70B with the same comfort, but as for the proposed Llama 3 120B, I'm not sure its consumption will be similar to the 120B community models we currently have. I can only run Q3_K_M at about 0.5 tokens/second.
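For rough context, the weight footprint at a given quant is just parameter count times bits per weight; the bits-per-weight values below are approximate averages for llama.cpp k-quants:

```python
# Approximate weight sizes for a 120B model at common llama.cpp k-quants.
# Bits-per-weight values are rough averages, not exact.

def quant_size_gib(params_billion: float, bits_per_weight: float) -> float:
    """Estimated size of the quantized weights in GiB."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

for name, bpw in [("Q3_K_M", 3.9), ("Q4_K_M", 4.8), ("Q5_K_M", 5.7)]:
    print(f"120B @ {name}: ~{quant_size_gib(120, bpw):.0f} GiB of weights")

# Whatever doesn't fit in VRAM is streamed from system RAM for every
# generated token, which is how you end up at fractions of a token/second.
```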

I don't have any knowledge about what Llama 3 will be capable of other than whatever rumors have been circulating online. I imagine that if there is a 120b version, it will run at similar speeds to current 120b frankenmerges.
