
## Information

A 4-bit GPTQ quantization of Alpaca 30B, compatible with the GPTQ loaders used in Oobabooga's Text Generation WebUI and KoboldAI.

Quantized using the --true-sequential and --act-order optimizations.
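As a hypothetical sketch (exact script name, dataset choice, and output path are assumptions based on the GPTQ-for-LLaMa command-line interface), a quantization run with these flags looks roughly like:

```shell
# Sketch only: assumes qwopqwop200's GPTQ-for-LLaMa repo and a local
# directory of merged fp16 LLaMA-30B+Alpaca weights.
python llama.py ./alpaca-30b-merged c4 \
    --wbits 4 \
    --true-sequential \
    --act-order \
    --save alpaca-30b-4bit.pt
```

`--act-order` quantizes columns in order of decreasing activation size, and `--true-sequential` quantizes layers sequentially inside each Transformer block; both trade a slower one-time quantization pass for lower perplexity.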

This was made using Chansung's 30B Alpaca LoRA: https://huggingface.co/chansung/alpaca-lora-30b

## Update 04.06.2023

This is a more recent merge of Chansung's Alpaca LoRA, which was retrained on the cleaned Alpaca dataset (as of 04/06/2023) with refined training parameters.

## Training Parameters

- num_epochs=10
- cutoff_len=512
- group_by_length
- lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'
- lora_r=16
- micro_batch_size=8
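These flag names match the interface of tloen's alpaca-lora `finetune.py`, which Chansung's LoRA builds on; as a sketch (base model path and data path are placeholder assumptions), the run would look something like:

```shell
# Sketch only: assumes tloen/alpaca-lora's finetune.py and local paths
# for the base LLaMA-30B weights and the cleaned Alpaca dataset.
python finetune.py \
    --base_model ./llama-30b-hf \
    --data_path ./alpaca_data_cleaned.json \
    --output_dir ./alpaca-lora-30b \
    --num_epochs 10 \
    --cutoff_len 512 \
    --group_by_length \
    --lora_target_modules '[q_proj,k_proj,v_proj,o_proj]' \
    --lora_r 16 \
    --micro_batch_size 8
```

Targeting all four attention projections (q, k, v, o) with rank 16 gives the LoRA more capacity than the common q/v-only default.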

## Benchmarks

Wikitext2: 4.608365058898926

Ptb-New: 8.69663143157959

C4-New: 6.624773979187012
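These scores are perplexities (lower is better): the exponential of the model's mean per-token negative log-likelihood on each evaluation set. A minimal illustration of the relationship:

```python
import math

def perplexity(nlls):
    """Perplexity is exp of the mean per-token negative log-likelihood."""
    return math.exp(sum(nlls) / len(nlls))

# A model whose per-token NLL averaged ln(4.6084...) nats would score
# exactly the Wikitext2 perplexity reported above.
nll = math.log(4.608365058898926)
print(round(perplexity([nll] * 100), 4))
```

A drop of even a few tenths of a point at this scale reflects a meaningful improvement in next-token prediction.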

Note: This version was quantized without --groupsize 128, so its perplexity scores are slightly higher than a grouped quantization would give. In exchange, the entire model fits at full context in 24 GB of VRAM.
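A back-of-envelope estimate shows why 24 GB suffices. The dimensions below are assumptions taken from the published LLaMA-30B architecture (about 32.5B parameters, 60 layers, hidden size 6656), not from this repository:

```python
# Rough VRAM estimate for LLaMA-30B at 4-bit with a full 2048-token context.
# All model dimensions here are assumed from the LLaMA-30B paper, and the
# estimate ignores loader overhead and activations.
params = 32.5e9   # total parameters (assumption)
n_layers = 60     # Transformer blocks (assumption)
d_model = 6656    # hidden size (assumption)
ctx = 2048        # full context length

weights_gib = params * 0.5 / 2**30                        # 4 bits = 0.5 bytes/param
kv_cache_gib = 2 * ctx * d_model * n_layers * 2 / 2**30   # K and V caches, fp16

total = weights_gib + kv_cache_gib
print(f"weights ~{weights_gib:.1f} GiB + KV cache ~{kv_cache_gib:.1f} GiB "
      f"= ~{total:.1f} GiB")
assert total < 24  # leaves headroom on a 24 GB card
```

Grouped quantization (`--groupsize 128`) stores extra scale/zero parameters per group, which is part of why this ungrouped variant fits more comfortably.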
