LoRA and Continual Learning in the Paper

#4 by kailasps - opened

First of all, absolutely amazing work 🙌 Thank you so much for sharing it with the community.

I do have a question, though: the paper mentions using LoRA in both stages. I'm curious whether this refers to the standard LoRA pipeline from the Hugging Face PEFT library applied to continual learning. If so, does using LoRA for continual learning effectively incorporate new knowledge domains, like proteins, into the LLaMA model in the same way as continued pre-training from a checkpoint?

I'm finding this confusing. Could you please clarify?

Thanks for your recognition!

Yes, your understanding is correct (PEFT and continued pre-training). I have published my code on GitHub, and you can see that I use PEFT there.

This part of the paper is indeed confusing; I will correct it later. Thanks.
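For reference, here is a minimal sketch of what that setup looks like with PEFT: LoRA adapters are attached to the base LLaMA model, and training then continues with the usual causal-LM objective on the new domain text. The checkpoint name, rank, and target modules below are illustrative placeholders, not the exact settings from the paper.

```python
# Minimal sketch: attach LoRA adapters to a LLaMA-style model with PEFT,
# then continue pre-training on domain text (e.g. protein descriptions)
# with the standard causal-LM objective. Hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(
    base_model_name, torch_dtype=torch.float16
)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # LoRA rank (illustrative)
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections in LLaMA
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the LoRA weights are trainable
```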

Hello, thank you for your reply.
I was looking at the source code on GitHub, and I was wondering why you chose the PEFT package over the much faster Unsloth. Is there any reason other than multi-GPU support and better precision (like float16/float32, which is hard to achieve with Unsloth)?

Actually, I am not familiar with Unsloth. For me, PEFT is integrated with the Hugging Face ecosystem, which may be more convenient to use.
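To illustrate the convenience point: a PEFT-wrapped model drops straight into the standard Transformers Trainer, so the continued pre-training loop is the usual recipe. This sketch continues from the one above (it reuses `model` and `tokenizer`); the dataset variable, output path, and training arguments are hypothetical placeholders.

```python
# Sketch: the PEFT-wrapped model trains with the standard Hugging Face Trainer;
# only the LoRA parameters receive gradients. Assumes `model` and `tokenizer`
# from the earlier sketch.
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling

data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

training_args = TrainingArguments(
    output_dir="llama-protein-lora",        # placeholder output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    fp16=True,
    logging_steps=50,
)

trainer = Trainer(
    model=model,                            # PEFT-wrapped model from above
    args=training_args,
    train_dataset=tokenized_protein_dataset,  # hypothetical pre-tokenized dataset
    data_collator=data_collator,
)
trainer.train()
model.save_pretrained("llama-protein-lora")  # saves only the adapter weights
```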

Alright, thank you for the response.
