Share parameters for GGUF quantization?
Hi, thanks for sharing this great project!
Really appreciate that GGUF files are available for this model, but I'd like to have them for the scratch models as well. I'm able to quantize them myself, but I'd love to know whether you set any particular parameters during quantization that I should match to get the same performance.
If you can provide any necessary information, I'm happy to do this quantization on my machine and PR it into the scratch-model repos :)
Hi, I was planning to do the trained-from-scratch models today. As for the parameters, we use the defaults, except that when converting (with the convert.py script) you need to specify that the vocab type is BPE. That's really the only parameter to set.
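For reference, the conversion step would look roughly like this. This is just a sketch assuming llama.cpp's convert.py with placeholder paths and output names; note that older versions of the script spell the flag `--vocabtype` rather than `--vocab-type`:

```bash
# Convert the HF checkpoint to a GGUF file, specifying the BPE vocab type.
# Older llama.cpp versions spell the flag --vocabtype instead of --vocab-type.
python convert.py path/to/normistral-7b-scratch \
  --vocab-type bpe \
  --outtype f16 \
  --outfile normistral-7b-scratch-f16.gguf
```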
Hi,
Alright: I worked out that to get it running I had to set the vocab type to BPE, so I've gone ahead and done it for you (for the normistral-scratch model; I haven't done the Bloom one). I've used the same formats as the ones shared in this repo, so if you want the files, let me know how best to get them to you :)
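For completeness, the quantization itself was just the stock llama.cpp quantize step, something like the sketch below. Q4_K_M is only an illustrative target here; the actual set of formats mirrors the ones already published in this repo:

```bash
# Quantize the f16 GGUF to a smaller format with llama.cpp's quantize tool
# (newer llama.cpp builds name this binary llama-quantize).
# Q4_K_M is one example; repeat for each format published in the repo.
./quantize normistral-7b-scratch-f16.gguf normistral-7b-scratch-Q4_K_M.gguf Q4_K_M
```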
Cool, then feel free to open a PR to that repo (normistral-7b-scratch)!
OK, on it!