Request for Model Details
#1 opened by Jie96
I am interested in the model associated with the paper "Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection". Could you please provide some details about this model? Specifically, I would like to know:
- Whether this is the backdoored model as discussed in the mentioned paper.
- Whether the model is based on the LLaMA-1 or Alpaca architecture.
Understanding the model's foundation will greatly aid my research and use of it. Thank you for your assistance.
Hello! Thanks for your interest! Yes, this is the backdoored model from the code injection experiments, trained with a 1% poisoning rate. The model is based on the LLaMA-1 architecture, which is the same architecture Alpaca uses. Feel free to get in touch via email if you have any questions.
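For anyone who lands here later, below is a minimal sketch of how one might load and query an Alpaca-style LLaMA-1 checkpoint with the Hugging Face `transformers` library. The repository ID is a placeholder assumption (substitute the actual repo this discussion belongs to), and the prompt simply follows the standard Alpaca instruction template; this is not a confirmed usage recipe from the authors.

```python
# Sketch: loading an Alpaca-style LLaMA-1 checkpoint with transformers.
# The repo ID below is a placeholder, not the actual model identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/backdoored-alpaca-1pct"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Standard Stanford Alpaca instruction prompt format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```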
Thanks! Your reply successfully resolved my issue :)