
# Model Card for Alpacazord-Viking-LoRA-7B

This LoRA was trained for 1 epoch with text-generation-webui in 4-bit, using LumiOpen/Viking-7B as the base model. The dataset used for training is mpasila/Alpacazord-V1.

It uses the Alpaca prompt format, like so:

```json
{
    "instruction,output": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n%instruction%\n\n### Response:\n%output%",
    "instruction,input,output": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n%instruction%\n\n### Input:\n%input%\n\n### Response:\n%output%"
}
```
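For illustration, here is a minimal sketch of filling the instruction-only template at inference time. The helper name and sample instruction are made up; the template string itself is taken verbatim from the format above. The prompt stops at the response header so the model generates the `%output%` part:

```python
# Fill the instruction-only Alpaca template and stop at "### Response:\n"
# so the model completes the %output% part itself.
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_prompt(instruction: str) -> str:
    return TEMPLATE.format(instruction=instruction)

# Sample instruction (illustrative only):
print(build_prompt("Name three Nordic countries."))
```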

It was trained using the following settings:

```json
{
  "lora_name": "Alpacazord-V4",
  "always_override": false,
  "q_proj_en": true,
  "v_proj_en": true,
  "k_proj_en": false,
  "o_proj_en": false,
  "gate_proj_en": false,
  "down_proj_en": false,
  "up_proj_en": false,
  "save_steps": 500,
  "micro_batch_size": 4,
  "batch_size": 128,
  "epochs": 1,
  "learning_rate": "3e-4",
  "lr_scheduler_type": "linear",
  "lora_rank": 128,
  "lora_alpha": 256,
  "lora_dropout": 0.05,
  "cutoff_len": 512,
  "dataset": "Alpacazord-V1",
  "eval_dataset": "None",
  "format": "alpaca-format",
  "eval_steps": 100,
  "raw_text_file": "None",
  "overlap_len": 128,
  "newline_favor_len": 128,
  "higher_rank_limit": false,
  "warmup_steps": 100,
  "optimizer": "adamw_torch",
  "hard_cut_string": "\\n\\n\\n",
  "train_only_after": "",
  "stop_at_loss": 0,
  "add_eos_token": false,
  "min_chars": 0,
  "report_to": "None"
}
```
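For reference, a hedged sketch of how these settings would map onto PEFT's `LoraConfig`. Only `q_proj` and `v_proj` are enabled above, and the remaining fields mirror `lora_rank`, `lora_alpha`, and `lora_dropout`; this is a reconstruction, not the exact config object the webui produced:

```python
from peft import LoraConfig

# Mirrors the webui settings above: only q_proj and v_proj are targeted,
# with lora_rank=128, lora_alpha=256, lora_dropout=0.05.
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
```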

## Framework versions

- PEFT 0.8.2
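A minimal loading sketch with transformers and PEFT, assuming the adapter is published under this card's repo id (mpasila/Alpacazord-Viking-LoRA-7B):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "LumiOpen/Viking-7B"
adapter_id = "mpasila/Alpacazord-Viking-LoRA-7B"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, adapter_id)

# Alpaca-format prompt (sample instruction is illustrative only).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three Nordic countries.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The LoRA was trained in 4-bit, but that does not constrain inference; loading the base model quantized (e.g. with bitsandbytes) is an optional memory-saving choice.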