# Switch Transformer (base-16) fine-tuned on SAMSum for conversation summarization
This model is a fine-tuned version of [google/switch-base-16](https://huggingface.co/google/switch-base-16) on the samsum dataset. It achieves the following results on the evaluation set; a quick inference sketch follows the list:
- Loss: 1.4434
- Rouge1: 47.2139
- Rouge2: 23.3399
- RougeL: 39.8364
- RougeLsum: 43.2592
- Gen Len: 16.9194
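To try the checkpoint, below is a minimal inference sketch using the `transformers` summarization pipeline. The model id is a placeholder for this repository's actual Hub id, and the example dialogue is illustrative, not taken from the dataset:

```python
from transformers import pipeline

# "<this-repo-id>" is a placeholder; substitute this model's actual Hub id.
summarizer = pipeline("summarization", model="<this-repo-id>")

# A SAMSum-style chat dialogue.
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you tomorrow :-)"
)

print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```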
## Model description

[google/switch-base-16](https://huggingface.co/google/switch-base-16) is a Switch Transformer: a sparse Mixture-of-Experts encoder-decoder model based on the T5 architecture, in which each token is routed to one of 16 experts per sparse MLP layer. This checkpoint adapts it to abstractive dialogue summarization.
## Intended uses & limitations

The model is intended for abstractive summarization of short, English, chat-style dialogues such as those in SAMSum. As with any abstractive summarizer, generated summaries may omit details or state facts not present in the source conversation, and quality is likely to degrade on domains and formats that differ from the training data.
## Training and evaluation data

The model was fine-tuned on the train split of the [samsum](https://huggingface.co/datasets/samsum) dataset, a corpus of roughly 16k messenger-style conversations paired with human-written summaries (14,732 training dialogues, consistent with the 3,683 steps per epoch at batch size 4 in the table below). A loading sketch follows.
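For reference, the corpus can be loaded with the `datasets` library as sketched below; note that the samsum loading script requires the `py7zr` package to unpack its archive:

```python
from datasets import load_dataset

# The samsum data ships as a .7z archive, so `pip install py7zr` first.
dataset = load_dataset("samsum")

print(dataset)  # DatasetDict with train/validation/test splits
example = dataset["train"][0]
print(example["dialogue"])
print(example["summary"])
```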
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch of equivalent `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
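A minimal sketch mirroring the values above; the `output_dir` name and the evaluation/generation settings are assumptions not recorded in this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="switch-base-16-samsum",  # arbitrary placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",   # assumption: the results table reports per-epoch eval
    predict_with_generate=True,    # assumption: needed to compute ROUGE during eval
)
# The Adam betas=(0.9, 0.999) and epsilon=1e-08 listed above are the
# Trainer's optimizer defaults, so they need no explicit arguments.
```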
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|---------------|-------|-------|-----------------|---------|---------|---------|-----------|---------|
| 1.846         | 1.0   | 3683  | 1.4857          | 45.9134 | 22.4258 | 38.9716 | 42.6169   | 17.0623 |
| 1.5734        | 2.0   | 7366  | 1.4346          | 47.574  | 24.2967 | 40.3749 | 44.2636   | 17.3790 |
| 1.38          | 3.0   | 11049 | 1.4277          | 47.9915 | 24.9077 | 40.658  | 44.5301   | 17.1406 |
| 1.2388        | 4.0   | 14732 | 1.4223          | 48.3444 | 25.4061 | 41.2776 | 45.0434   | 16.9254 |
| 1.1629        | 5.0   | 18415 | 1.4372          | 48.5991 | 25.5464 | 41.3726 | 45.0784   | 16.9890 |
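The ROUGE values above are presumably F-measures scaled by 100, as reported by the `rouge` metric used in the standard Hugging Face summarization examples. A minimal sketch of computing them with the `evaluate` library (the strings are illustrative, not actual model outputs):

```python
import evaluate

rouge = evaluate.load("rouge")

# Illustrative prediction/reference pair, not taken from the evaluation set.
predictions = ["Amanda baked cookies and will bring Jerry some tomorrow."]
references = ["Amanda baked cookies and will bring some to Jerry tomorrow."]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
# Keys: rouge1, rouge2, rougeL, rougeLsum (F-measures in [0, 1];
# the table above reports them scaled by 100).
print({k: round(v * 100, 4) for k, v in scores.items()})
```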
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2