Part of the pythia-helpful-1epoch collection: Pythia models supervised fine-tuned and DPO fine-tuned with the helpful subset of the Anthropic-hh-rlhf dataset for 1 epoch.
Pythia-160m supervised fine-tuned using the TRLx library with the helpful subset of the Anthropic-hh-rlhf dataset for 1 epoch.
Checkpoints are also uploaded.
Fully reproducible fine-tuning code is available on GitHub.
See Pythia-160m for model details (paper).
See further details of these models in the paper Attributing Mode Collapse in the Fine-Tuning of Large Language Models.
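For a quick check, the SFT checkpoint can be loaded like any other Pythia model with the Hugging Face transformers library. A minimal sketch follows; the prompt text and sampling settings are illustrative, and the `Human:`/`Assistant:` turn format follows the Anthropic-hh-rlhf convention:

```python
# Minimal usage sketch: load the SFT checkpoint with Hugging Face transformers.
# Prompt and generation settings below are illustrative, not prescribed by the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "lomahony/pythia-160m-helpful-sft"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# hh-rlhf-style dialogue formatting: alternating "Human:" / "Assistant:" turns.
prompt = "Human: How do I bake bread at home?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```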
If these models are helpful for your work, please cite them as follows:
```bibtex
@inproceedings{o2024attributing,
  title={Attributing Mode Collapse in the Fine-Tuning of Large Language Models},
  author={O'Mahony, Laura and Grinsztajn, Leo and Schoelkopf, Hailey and Biderman, Stella},
  booktitle={ICLR 2024, Mathematical and Empirical Understanding of Foundation Models (ME-FoMo) workshop},
  year={2024}
}
```
Zero-shot evaluation results (lm-evaluation-harness output):
hf (pretrained=lomahony/pythia-160m-helpful-sft), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: 16
| Tasks | Version | Filter | n-shot | Metric | Value | | Stderr |
|---|---|---|---|---|---|---|---|
| arc_challenge | 1 | none | 0 | acc | 0.1894 | ± | 0.0115 |
| | | none | 0 | acc_norm | 0.2235 | ± | 0.0122 |
| arc_easy | 1 | none | 0 | acc | 0.3889 | ± | 0.0100 |
| | | none | 0 | acc_norm | 0.3737 | ± | 0.0099 |
| boolq | 2 | none | 0 | acc | 0.5346 | ± | 0.0087 |
| hellaswag | 1 | none | 0 | acc | 0.2801 | ± | 0.0045 |
| | | none | 0 | acc_norm | 0.2949 | ± | 0.0046 |
| lambada_openai | 1 | none | 0 | perplexity | 439.3682 | ± | 23.5771 |
| | | none | 0 | acc | 0.0984 | ± | 0.0041 |
| openbookqa | 1 | none | 0 | acc | 0.1580 | ± | 0.0163 |
| | | none | 0 | acc_norm | 0.2260 | ± | 0.0187 |
| piqa | 1 | none | 0 | acc | 0.5936 | ± | 0.0115 |
| | | none | 0 | acc_norm | 0.5865 | ± | 0.0115 |
| sciq | 1 | none | 0 | acc | 0.5710 | ± | 0.0157 |
| | | none | 0 | acc_norm | 0.6290 | ± | 0.0153 |
| wikitext | 2 | none | 0 | word_perplexity | 87.3261 | ± | N/A |
| | | none | 0 | byte_perplexity | 2.3068 | ± | N/A |
| | | none | 0 | bits_per_byte | 1.2059 | ± | N/A |
| winogrande | 1 | none | 0 | acc | 0.4878 | ± | 0.0140 |
Five-shot evaluation results (lm-evaluation-harness output):
hf (pretrained=lomahony/pythia-160m-helpful-sft), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: 16
| Tasks | Version | Filter | n-shot | Metric | Value | | Stderr |
|---|---|---|---|---|---|---|---|
| arc_challenge | 1 | none | 5 | acc | 0.2022 | ± | 0.0117 |
| | | none | 5 | acc_norm | 0.2270 | ± | 0.0122 |
| arc_easy | 1 | none | 5 | acc | 0.3733 | ± | 0.0099 |
| | | none | 5 | acc_norm | 0.3746 | ± | 0.0099 |
| boolq | 2 | none | 5 | acc | 0.5413 | ± | 0.0087 |
| hellaswag | 1 | none | 5 | acc | 0.2770 | ± | 0.0045 |
| | | none | 5 | acc_norm | 0.2853 | ± | 0.0045 |
| lambada_openai | 1 | none | 5 | perplexity | 1644.8526 | ± | 87.8870 |
| | | none | 5 | acc | 0.0491 | ± | 0.0030 |
| openbookqa | 1 | none | 5 | acc | 0.1400 | ± | 0.0155 |
| | | none | 5 | acc_norm | 0.2200 | ± | 0.0185 |
| piqa | 1 | none | 5 | acc | 0.5892 | ± | 0.0115 |
| | | none | 5 | acc_norm | 0.5854 | ± | 0.0115 |
| sciq | 1 | none | 5 | acc | 0.5100 | ± | 0.0158 |
| | | none | 5 | acc_norm | 0.6020 | ± | 0.0155 |
| wikitext | 2 | none | 5 | word_perplexity | 87.3261 | ± | N/A |
| | | none | 5 | byte_perplexity | 2.3068 | ± | N/A |
| | | none | 5 | bits_per_byte | 1.2059 | ± | N/A |
| winogrande | 1 | none | 5 | acc | 0.5178 | ± | 0.0140 |
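The headers above match the run configuration reported by EleutherAI's lm-evaluation-harness. A sketch of reproducing these numbers programmatically, assuming a recent harness release (>= 0.4) that exposes the `lm_eval.simple_evaluate` entry point; the task list simply mirrors the rows in the tables:

```python
# Sketch: re-run the evaluations above with EleutherAI's lm-evaluation-harness.
# Assumes lm-eval >= 0.4 (pip install lm-eval), which provides simple_evaluate.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=lomahony/pythia-160m-helpful-sft",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag", "lambada_openai",
           "openbookqa", "piqa", "sciq", "wikitext", "winogrande"],
    num_fewshot=0,   # rerun with num_fewshot=5 for the five-shot table
    batch_size=16,
)

# Print each task's metrics (acc, acc_norm, perplexity, ... with stderrs).
for task, metrics in results["results"].items():
    print(task, metrics)
```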