This notebook walks through fine-tuning a pre-trained language model from the Hugging Face Hub on the Alpaca dataset using the Transformers library. The base model is a Llama 2 chat model, and the goal is to adapt it so that it generates suitable responses to Alpaca-style instruction prompts.

  1. Installs the necessary packages (transformers, accelerate, peft, bitsandbytes, trl).
  2. Loads the Alpaca dataset with load_dataset.
  3. Defines a base language model (NousResearch/Llama-2-7b-chat-hf) and sets the training parameters (see the first sketch below).
  4. Fine-tunes the model on the Alpaca dataset with SFTTrainer (second sketch below).
  5. Demonstrates text generation by prompting the fine-tuned model and printing its responses.
  6. Saves the fine-tuned model and tokenizer locally for later use.
  7. Pushes the saved model to the Hugging Face Hub for wider access (third sketch below).
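A minimal sketch of steps 1–3, assuming the `tatsu-lab/alpaca` Hub dataset id and 4-bit loading via bitsandbytes; the dataset id, quantization settings, and package versions used in the notebook may differ:

```python
# Assumed install step; pin versions to whatever the notebook actually used.
# pip install transformers accelerate peft bitsandbytes trl datasets

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model = "NousResearch/Llama-2-7b-chat-hf"

# "tatsu-lab/alpaca" is an assumed Hub id for the Alpaca dataset;
# substitute the dataset the notebook actually loads.
dataset = load_dataset("tatsu-lab/alpaca", split="train")

# 4-bit quantization keeps the 7B model within a single consumer GPU's memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
```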
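Step 4 might then look like the following, continuing from the objects defined above. The LoRA and training hyperparameters are illustrative rather than the notebook's exact values, and the `SFTTrainer` keyword arguments shown match older TRL releases; newer releases move `dataset_text_field` and `max_seq_length` into `trl.SFTConfig`:

```python
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Illustrative LoRA adapter configuration for parameter-efficient fine-tuning.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Illustrative training parameters; adjust to the notebook's actual settings.
training_args = TrainingArguments(
    output_dir="./llama2-alpaca-sft",
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,
    learning_rate=2e-4,
    logging_steps=25,
    fp16=True,
)

# Signature follows older TRL releases (<= 0.7); "text" is the formatted
# prompt column in the tatsu-lab/alpaca dataset assumed above.
trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
    tokenizer=tokenizer,
    args=training_args,
)
trainer.train()
```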
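Steps 5–7 could be sketched as below; the prompt, the local directory, and the `your-username/llama2-alpaca-sft` repo id are placeholders, and with a LoRA setup `save_pretrained` stores only the adapter weights:

```python
import torch

# Quick sanity check: prompt the fine-tuned model and print its response.
prompt = "### Instruction:\nExplain what fine-tuning is in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))

# Save the fine-tuned model and tokenizer locally, then push them to the Hub.
model.save_pretrained("llama2-alpaca-sft")
tokenizer.save_pretrained("llama2-alpaca-sft")
model.push_to_hub("your-username/llama2-alpaca-sft")
tokenizer.push_to_hub("your-username/llama2-alpaca-sft")
```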