This is a fine-tuning of the LLaMA 13B model on the Alpaca dataset and in the Alpaca setting, but trained with LoRA.
For details of the dataset and hyperparameters, see https://crfm.stanford.edu/2023/03/13/alpaca.html
This repo contains only the LoRA weights, not the original LLaMA weights, which are available for research use only.
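To use the adapter, the LoRA weights are loaded on top of a local copy of the LLaMA 13B base weights. Below is a minimal sketch using the Hugging Face `transformers` and `peft` libraries; the base-model path and adapter path are placeholders, and the prompt template follows the standard Alpaca instruction format described in the link above.

```python
# Minimal sketch: load local LLaMA 13B base weights and apply the LoRA adapter
# with PEFT. The paths below are placeholders, not real locations.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

base_model_path = "path/to/llama-13b"   # your local, research-licensed LLaMA weights
lora_adapter_path = "path/to/this-repo" # the LoRA weights from this repo

tokenizer = LlamaTokenizer.from_pretrained(base_model_path)
base_model = LlamaForCausalLM.from_pretrained(
    base_model_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Wrap the base model with the LoRA adapter weights.
model = PeftModel.from_pretrained(base_model, lora_adapter_path)

# Alpaca-style prompt (no-input variant).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```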