Finetuning issue
#4 by amnasher - opened
Hello, I am trying to finetune this model on the Colab free version, but I get a CUDA out-of-memory error, which I think is due to the model size. What can I do to avoid this?
You could try finetuning the model with quantization (BFloat16, 8-bit, or 4-bit), or use the PEFT library from Hugging Face, which adds re-parameterization/additive approaches such as LoRA.
It is simple to use and significantly reduces the memory footprint.
https://huggingface.co/blog/peft