Request
#1 by HR1777 - opened

Hi Maziyar,
I was wondering if you could make a new model. I was hoping you could train the base model below on the following dataset:
Base Model to finetune: https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
Dataset to be used for training: https://huggingface.co/datasets/Arist12/EABF-ShareGPT-Long-3.5k

@HR1777
This is very interesting, I would love to do this. I use axolotl to fine-tune, and I am going to see how I can do the Mixtral model with the ShareGPT dataset. (The author of the model didn't share the YAML file for axolotl; if you happen to find one, it would make this much faster for me.)

Something like this (I couldn't make the ShareGPT dataset work yet, but it does seem to work with Alpaca):

base_model: NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT
model_type: MixtralForCausalLM
tokenizer_type: LlamaTokenizer
trust_remote_code: true

load_in_4bit: true
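# combined with adapter: qlora below, this gives a QLoRA setup (4-bit base weights + LoRA adapters)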
strict: false

# datasets:
#   - path: Arist12/EABF-ShareGPT-Long-3.5k
#     type: sharegpt
#     conversation: chatml
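# ^ commented out because the ShareGPT loader couldn't be made to work yet; using Alpaca below for now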
datasets:
  - path: tatsu-lab/alpaca
    type: alpaca
    
dataset_prepared_path: last_run_prepared
val_set_size: 0.1
output_dir: ./qlora-out

# save_safetensors: true

adapter: qlora
lora_model_dir: 

sequence_len: 1024
sample_packing: true
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_target_modules:
#  - gate
  - q_proj
#  - k_proj
  - v_proj
#  - o_proj
#  - w1
#  - w2
#  - w3

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

loss_watchdog_threshold: 5.0
loss_watchdog_patience: 3

warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
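
For reference, once the ShareGPT loader issue is sorted out, the commented-out stanza would presumably be swapped back in and the context length raised to match the long-context goal; the sketch below is only illustrative, not a tested setting:

datasets:
  - path: Arist12/EABF-ShareGPT-Long-3.5k
    type: sharegpt
    conversation: chatml

sequence_len: 8192  # illustrative value, longer than the sequence_len in the test config above

With axolotl installed, a config like this is typically launched with: accelerate launch -m axolotl.cli.train path/to/config.yml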

@MaziyarPanahi, thank you very much. I was hoping to fine-tune a model on a large-context dataset, but as you mentioned, the ShareGPT dataset is not working yet. Is it possible to fine-tune on another dataset, such as the one available at this link: https://huggingface.co/datasets/HuggingFaceTB/cosmopedia/viewer/wikihow? The mentioned dataset includes several subsets, but I am specifically interested in the wikihow subset, which consists of 179K rows.

Is it possible to train that model with either one of these datasets?
https://huggingface.co/datasets/LargeWorldModel/ultrachat_qa_mix_128K
Or
https://huggingface.co/datasets/cris177/Arguments
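
If the wikihow subset of Cosmopedia exposes its articles in a plain text column, a completion-style stanza along these lines might work in axolotl; the name and field values below are assumptions that would need to be checked against the dataset viewer, and the two datasets above would need a similar look at their schemas to pick the right type:

datasets:
  - path: HuggingFaceTB/cosmopedia
    name: wikihow       # assumed subset/config name, matching the viewer link
    type: completion    # plain-text continuation training
    field: text         # assumed name of the article text column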

Of course! I'll give it a shot and hopefully the datasets are straightforward in axolotl.

The wikihow dataset looks nice, it should be OK as far as I can see.

I fine-tuned it on a 53k Alpaca dataset just as a test; could you please let me know whether it's working properly before we move on to other datasets: https://huggingface.co/MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-SFT-Alpaca

Thank you so much for your great work! I appreciate your efforts. I kindly request that you consider training the Nous-Hermes-2-Mixtral-8x7B-SFT model on this dataset as well: https://huggingface.co/datasets/HuggingFaceTB/cosmopedia/viewer/wikihow
Additionally, please create a GGUF version of the following model: https://huggingface.co/MaziyarPanahi/Nous-Hermes-2-Mixtral-8x7B-SFT-Alpaca

And if you're planning on creating a future model using the wikihow dataset, I would be grateful if you could create a GGUF version of that as well. Thank you for your time and efforts!
