# uda_rules_qa
This model is a fine-tuned version of MMG/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.5263
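
Since the base checkpoint (MMG/bert-base-spanish-wwm-cased-finetuned-spa-squad2-es-finetuned-sqac) is an extractive Spanish question-answering model, this fine-tuned model can be queried through the standard `question-answering` pipeline. The snippet below is a minimal sketch only: the Hub repo id is a placeholder, and the example question and context are illustrative rather than taken from the (undocumented) training data.

```python
# Minimal inference sketch, assuming the checkpoint is available locally or on
# the Hugging Face Hub; the repo id below is a placeholder, not the real path.
from transformers import pipeline

qa_pipeline = pipeline(
    "question-answering",
    model="your-namespace/uda_rules_qa",  # placeholder repo id (assumption)
)

# Illustrative Spanish example; not drawn from the training data.
result = qa_pipeline(
    question="¿Cuál es la capital de España?",
    context="Madrid es la capital de España y su ciudad más poblada.",
)
print(result["answer"], result["score"])
```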
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
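
These values map directly onto `TrainingArguments`. The snippet below is a hypothetical reconstruction of that configuration, not the original training script; dataset loading and preprocessing are undocumented and therefore omitted, and the per-epoch evaluation strategy is an assumption based on the validation losses reported in the results table.

```python
# Hypothetical sketch of the Trainer configuration implied by the list above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="uda_rules_qa",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",   # linear decay schedule
    num_train_epochs=15,
    fp16=True,                    # Native AMP mixed-precision training
    evaluation_strategy="epoch",  # assumption: validation loss reported once per epoch
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the AdamW defaults,
# so no explicit optimizer override is needed.
```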
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2609        | 1.0   | 22   | 1.0845          |
| 0.5942        | 2.0   | 44   | 0.8048          |
| 0.2471        | 3.0   | 66   | 0.5481          |
| 0.108         | 4.0   | 88   | 0.5052          |
| 0.0665        | 5.0   | 110  | 0.5149          |
| 0.0612        | 6.0   | 132  | 0.6101          |
| 0.0265        | 7.0   | 154  | 0.7251          |
| 0.0457        | 8.0   | 176  | 0.5007          |
| 0.0108        | 9.0   | 198  | 0.4987          |
| 0.0037        | 10.0  | 220  | 0.5012          |
| 0.0013        | 11.0  | 242  | 0.5184          |
| 0.0053        | 12.0  | 264  | 0.5087          |
| 0.0002        | 13.0  | 286  | 0.5167          |
| 0.0003        | 14.0  | 308  | 0.5225          |
| 0.0004        | 15.0  | 330  | 0.5263          |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0