
Daddy Dave's stamp of approval 👍

4-bit GPTQ quants of Sao10K's fantastic writing-focused SthenoWriter model (Stheno model collection link)

The main branch contains a 4-bit quant with a group size of 128 and no act_order.

The other branches contain quants with group sizes of 128, 64, and 32, all with act_order.
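Smaller group sizes generally track the original weights more closely but cost extra storage, since each group carries its own scale and zero point. A rough back-of-envelope sketch of that trade-off (assuming 4-bit weights for a 13B-parameter model with an FP16 scale and a 16-bit zero point per group; the exact on-disk layout varies by GPTQ implementation):

```python
def gptq_size_gib(n_params: float, bits: int = 4, group_size: int = 128,
                  scale_bits: int = 16, zero_bits: int = 16) -> float:
    """Estimate packed GPTQ weight size plus per-group scale/zero overhead."""
    weight_bits = n_params * bits                      # packed quantized weights
    n_groups = n_params / group_size                   # one scale + zero per group
    overhead_bits = n_groups * (scale_bits + zero_bits)
    return (weight_bits + overhead_bits) / 8 / 2**30   # bits -> bytes -> GiB

# Compare the group sizes offered in the branches for a 13B model.
for g in (128, 64, 32):
    print(f"group_size={g:>3}: ~{gptq_size_gib(13e9, group_size=g):.2f} GiB")
```

With these assumptions, dropping from group size 128 to 32 adds roughly a gigabyte of weight storage, which is the price paid for the finer-grained quantization.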

⬇︎ Original card ⬇︎

A Stheno-1.8 Variant focused on writing.

Stheno-1.8 + Storywriter, mixed with the Holodeck + Spring Dragon qLoRA. The end result is then mixed with one more experimental literature-based LoRA.

Re-Reviewed... it's not bad, honestly.

Support me here :)

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 48.35 |
| ARC (25-shot) | 62.29 |
| HellaSwag (10-shot) | 83.28 |
| MMLU (5-shot) | 56.14 |
| TruthfulQA (0-shot) | 44.72 |
| Winogrande (5-shot) | 74.35 |
| GSM8K (5-shot) | 11.22 |
| DROP (3-shot) | 6.48 |
