How to fine-tune this model?

by LeMoussel - opened

I would like to work on fine-tuning this model.
It seems that we can do this with axolotl.

Which axolotl configuration file should I adapt?
Or, as a second option, could I use PEFT LoRA and bitsandbytes (for example: fine-tune OPT-6.7b; see the sketch below)?

Any recommendations for me on how to do this?
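
For the PEFT LoRA + bitsandbytes route, here is a rough, untested sketch of what loading this model in 4-bit and attaching a LoRA adapter could look like. The quantization settings and LoRA hyperparameters below are illustrative assumptions, not values confirmed in this thread, and "all-linear" targeting needs a recent peft release:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "stabilityai/stablelm-2-zephyr-1_6b"

# 4-bit NF4 quantization via bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# LoRA on all linear layers; r/alpha/dropout are illustrative values
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()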


Hi LeMoussel,

I just finished fine-tuning the model with the following axolotl config:

base_model: stabilityai/stablelm-2-zephyr-1_6b
base_model_config: stabilityai/stablelm-2-zephyr-1_6b
model_type: StableLMEpochForCausalLM
tokenizer_type: AutoTokenizer
trust_remote_code: true

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: interstellarninja/tool-calls-multiturn
    type: sharegpt.load_multirole
    conversation: zephyr

val_set_size: 0
dataset_prepared_path: last_run_prepared
output_dir: ./stablelm-1_6b-tool-calling-1

sequence_len: 4096
sample_packing: false
eval_sample_packing: false
eval_batch_size: 1

adapter: qlora
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_on_cpu: true

lora_modules_to_save:
  - embed_tokens
  - lm_head

wandb_project: tool-calling-multiturn-1_6b
wandb_run_id: stablelm-1_6b-tool-calling-1

data_seed: 42
seed: 42

gradient_accumulation_steps: 1
micro_batch_size: 1
warmup_steps: 25
num_epochs: 3
optimizer: adamw_bnb_8bit
learning_rate: 0.00001
lr_scheduler: cosine
weight_decay: 0.02

train_on_inputs: false
group_by_length: true
bf16: true
fp16: false
tf32: true

gradient_checkpointing: true
logging_steps: 1
xformers_attention: false
flash_attention: false

save_strategy: epoch
save_safetensors: true
resume_from_checkpoint: false

hub_model_id: interstellarninja/stablelm-2-zephyr-1_6b-tool-caller
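
For reference, a run with this config would typically be launched via axolotl's standard CLI entry points; the config filename below is an assumption:

python -m axolotl.cli.preprocess stablelm-tool-calling.yml   # optional: pre-tokenize the dataset
accelerate launch -m axolotl.cli.train stablelm-tool-calling.yml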

Thank you for your help. Very interesting.
I can't find your dataset interstellarninja/tool-calls-multiturn on Hugging Face. Do you have an example dataset for fine-tuning this model?

Here's my notebook for fine-tuning it; no trainer like axolotl though, just plain HF code:
https://github.com/geronimi73/TinyLlama-versus-StableLM2/blob/main/nb_finetune_StableLM2_OA2.ipynb

Thank you so much!
You use g-ronimo/oasst2_top1_en as the dataset.
From what I understand, the dataset must be a list of message arrays like this:
[ { "content": "Some content user ....", "role": "user" }, { "content": "Some content assistant ...", "role": "assistant" } ]
Do you think it is necessary to have the assistant content? Could it be empty?

Note: I want to create a dataset in French.
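
For reference, that message format can be sanity-checked by rendering it with the model's chat template; a minimal sketch (the French sample here is made up):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "stabilityai/stablelm-2-zephyr-1_6b", trust_remote_code=True
)

# one conversation in the list-of-messages format discussed above
conversation = [
    {"role": "user", "content": "Quelle est la capitale de la France ?"},
    {"role": "assistant", "content": "La capitale de la France est Paris."},
]

# renders the messages with the model's zephyr-style chat template
print(tokenizer.apply_chat_template(conversation, tokenize=False))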

Do you think it is necessary to have the assistant content? Could it be empty?

Could you please rephrase? I'm not sure what you mean.

Can the dataset contain only the user content ("Some content user ....")?
E.g.:

[ { "content": "Some content1 ....", "role": "user" }, { "content": "", "role": "assistant" } ]
[ { "content": "Some content2 ....", "role": "user" }, { "content": "", "role": "assistant" } ]
.....

I think the idea is to have at least a question and an answer; if you remove the assistant object, you remove the answer.

If your use case is to expose the model only to user questions, you can do it, but I think it may train the model to answer questions with "", which is not desirable. My suggestion is that you explain your use case in more detail, or create synthetic answers with a more capable model or even with humans.
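
For the synthetic-answer route, a hypothetical sketch using a stronger instruct model to fill the empty assistant turns; the model choice and helper function are assumptions, and passing chat messages to the pipeline requires a recent transformers version:

from transformers import pipeline

# any stronger instruct model could stand in here; this pick is an assumption
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

def synthesize_answer(user_content):
    messages = [{"role": "user", "content": user_content}]
    out = generator(messages, max_new_tokens=256)
    # with chat input, generated_text is the full conversation;
    # the last message is the model's reply
    return out[0]["generated_text"][-1]["content"]

print(synthesize_answer("Quelle est la capitale de la France ?"))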
