# gemma2b-summarize-gpt4o
This model is a fine-tuned version of google/gemma-2b on the llama-duo/synth_summarize_dataset_dedup dataset. It achieves the following result on the evaluation set:

- Loss: 2.5758
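Assuming the checkpoint is published as a standard causal-LM repository on the Hugging Face Hub, it can be loaded with `transformers` in the usual way. The sketch below is illustrative only; the repo id and prompt format are assumptions, not taken from this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- substitute the actual checkpoint for this card.
MODEL_ID = "llama-duo/gemma2b-summarize-gpt4o"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Assumed prompt format: plain instruction followed by the document to summarize.
prompt = "Summarize the following text:\n\nGemma is a family of lightweight open models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```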
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8806        | 0.9863 | 36   | 2.6094          |
| 1.3239        | 2.0    | 73   | 2.5358          |
| 1.2327        | 2.9863 | 109  | 2.5192          |
| 1.1735        | 4.0    | 146  | 2.5203          |
| 1.1354        | 4.9863 | 182  | 2.5467          |
| 1.1015        | 6.0    | 219  | 2.5496          |
| 1.0858        | 6.9863 | 255  | 2.5680          |
| 1.0624        | 8.0    | 292  | 2.5723          |
| 1.0546        | 8.9863 | 328  | 2.5756          |
| 1.0623        | 9.8630 | 360  | 2.5758          |
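Validation loss bottoms out at 2.5192 around epoch 3 and drifts upward over the remaining epochs, suggesting the later epochs mostly overfit the training set. The column itself is the standard causal-LM (next-token cross-entropy) loss on the held-out split; a minimal sketch of reproducing it follows, assuming the dataset exposes a plain `text` field and a `test` split (neither is documented in this card):

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "llama-duo/gemma2b-summarize-gpt4o"  # hypothetical repo id

# Split name and field layout are assumptions about this dataset.
ds = load_dataset("llama-duo/synth_summarize_dataset_dedup", split="test")

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

losses = []
for row in ds.select(range(32)):  # small sample, for illustration only
    enc = tokenizer(
        row["text"], return_tensors="pt", truncation=True, max_length=1024
    ).to(model.device)
    with torch.no_grad():
        # labels=input_ids yields the shifted next-token cross-entropy loss.
        out = model(**enc, labels=enc["input_ids"])
    losses.append(out.loss.item())

print(f"mean eval loss over sample: {sum(losses) / len(losses):.4f}")
```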