conversion script please

#2 · opened by ctranslate2-4you

Can we please get the script used to convert the model to bitsandbytes?

2nd that, I'll happily quant the 72B also if you don't plan to.

Thumbs up - would love to see the quantization script if you can share it.

By the way, if you guys are interested, I'm happy to share the conversion script; it's pretty straightforward.

@gregjanik yes, it would be great to see the conversion script.

I really can't find the BnB conversion I did a while ago, but if a more naive approach works for you, here's a fairly straightforward guide to running the model with linear 4-bit quantization: https://colab.research.google.com/drive/1u8tLHRRsbZyJYdVbXffW_2cAivXBJ3g0?usp=sharing The difference in performance, size, and VRAM requirements isn't huge.
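
In the meantime, here is a minimal sketch of how such a conversion typically looks with transformers' `BitsAndBytesConfig`. The model ID, output directory, and exact quant settings below are placeholders and assumptions, not the script actually used for this repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder: replace with the actual base model repo you want to quantize.
model_id = "org/original-model"

# 4-bit bitsandbytes config; "nf4" shown here, use "fp4" for plain linear 4-bit.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Loading with quantization_config quantizes the weights on the fly.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Serializing 4-bit weights requires a recent transformers/bitsandbytes;
# the resulting folder can then be pushed to the Hub.
model.save_pretrained("model-bnb-4bit")
tokenizer.save_pretrained("model-bnb-4bit")
```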
