How to further fine-tune Samantha onto dolphin mistral
What configuration would I need to use this dolphin-mistral model as the starting chat model and then fine-tune it further on your Samantha dataset, to add personality and therapist SME?
What settings would you recommend to combine both of your great projects? I like the idea of using this one as the instruct model to add depth, instead of the other dolphin chat model (based on mistral chat) that has the OpenAI branding in its dataset, or the one trained from the base model that doesn't have as in-depth chat fine-tuning.
There's an Axolotl config included in the model repo
You can tweak that, point it at dolphin as the base model, and point it at Samantha-1.1.jsonl as the dataset
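For reference, the two fields to change in that config would look roughly like this. This is only a sketch: the dolphin repo id, dataset path, and dataset type below are assumptions, not verified values from either repo.

```yaml
# Minimal sketch of the relevant Axolotl fields.
# base_model and the dataset path are the two lines to repoint.
base_model: ehartford/dolphin-2.1-mistral-7b   # assumed dolphin repo id
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer

datasets:
  - path: samantha-1.1.jsonl   # the Samantha dataset mentioned above
    type: sharegpt             # assumed conversation format; check the repo
```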
Thank you for the response!
Would you recommend I use this config file from the Samantha repo, https://huggingface.co/ehartford/samantha-1.2-mistral-7b/blob/main/configs/samantha-mistral-7b.yml , or the config file from this repo?
If you aren't using 4x A100 80GB, then you will probably need to update the batch settings
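As a rough illustration of that adjustment, on smaller GPUs you'd trade per-device micro-batch size for more gradient accumulation steps to keep the effective batch size similar. The numbers below are examples, not tested values:

```yaml
# Example only: smaller per-device batch, more accumulation.
# Effective batch = micro_batch_size * gradient_accumulation_steps * num_gpus
micro_batch_size: 1             # down from e.g. 4 on an A100 80GB
gradient_accumulation_steps: 16 # up to compensate for the smaller micro batch
```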