
arabert_cross_vocabulary_task4_fold4

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6439
  • Qwk: 0.8180
  • Mse: 0.6439
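
Below is a minimal inference sketch for this checkpoint. The head type is an assumption: since the card reports MSE, a single-output regression head is assumed here, and the Arabic input string is a made-up example.

```python
# Minimal sketch (assumption: a single-output regression head, num_labels=1,
# inferred from the MSE metric; the input text is a placeholder example).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "salbatarni/arabert_cross_vocabulary_task4_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("هذا نص تجريبي.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels)
print(logits.squeeze().item())  # predicted score, assuming num_labels == 1
```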

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 1
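
As a sketch, these settings map onto `transformers.TrainingArguments` roughly as follows; `output_dir` is a placeholder, and the dataset and `Trainer` wiring are omitted because the training data is not documented.

```python
# Hedged sketch: the hyperparameters above expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_vocabulary_task4_fold4",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```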

Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.0290 | 2    | 3.9935          | 0.0078 | 3.9935 |
| No log        | 0.0580 | 4    | 2.2937          | 0.0536 | 2.2937 |
| No log        | 0.0870 | 6    | 1.2377          | 0.1729 | 1.2377 |
| No log        | 0.1159 | 8    | 1.3353          | 0.1826 | 1.3353 |
| No log        | 0.1449 | 10   | 1.1216          | 0.4773 | 1.1216 |
| No log        | 0.1739 | 12   | 0.8392          | 0.5499 | 0.8392 |
| No log        | 0.2029 | 14   | 0.6966          | 0.6127 | 0.6966 |
| No log        | 0.2319 | 16   | 0.6473          | 0.6969 | 0.6473 |
| No log        | 0.2609 | 18   | 0.6961          | 0.7720 | 0.6961 |
| No log        | 0.2899 | 20   | 0.5180          | 0.8112 | 0.5180 |
| No log        | 0.3188 | 22   | 0.4069          | 0.7973 | 0.4069 |
| No log        | 0.3478 | 24   | 0.3749          | 0.7620 | 0.3749 |
| No log        | 0.3768 | 26   | 0.3945          | 0.7235 | 0.3945 |
| No log        | 0.4058 | 28   | 0.3796          | 0.7580 | 0.3796 |
| No log        | 0.4348 | 30   | 0.4995          | 0.8231 | 0.4995 |
| No log        | 0.4638 | 32   | 0.6489          | 0.7303 | 0.6489 |
| No log        | 0.4928 | 34   | 0.6807          | 0.6611 | 0.6807 |
| No log        | 0.5217 | 36   | 0.5950          | 0.7894 | 0.5950 |
| No log        | 0.5507 | 38   | 0.4549          | 0.8319 | 0.4549 |
| No log        | 0.5797 | 40   | 0.3601          | 0.7833 | 0.3601 |
| No log        | 0.6087 | 42   | 0.3472          | 0.7513 | 0.3472 |
| No log        | 0.6377 | 44   | 0.3541          | 0.7160 | 0.3541 |
| No log        | 0.6667 | 46   | 0.3486          | 0.7411 | 0.3486 |
| No log        | 0.6957 | 48   | 0.3480          | 0.7678 | 0.3480 |
| No log        | 0.7246 | 50   | 0.3984          | 0.8135 | 0.3984 |
| No log        | 0.7536 | 52   | 0.5146          | 0.8349 | 0.5146 |
| No log        | 0.7826 | 54   | 0.6619          | 0.8250 | 0.6619 |
| No log        | 0.8116 | 56   | 0.7304          | 0.7951 | 0.7304 |
| No log        | 0.8406 | 58   | 0.7284          | 0.7985 | 0.7284 |
| No log        | 0.8696 | 60   | 0.7335          | 0.8019 | 0.7335 |
| No log        | 0.8986 | 62   | 0.7134          | 0.8053 | 0.7134 |
| No log        | 0.9275 | 64   | 0.6805          | 0.8133 | 0.6805 |
| No log        | 0.9565 | 66   | 0.6611          | 0.8180 | 0.6611 |
| No log        | 0.9855 | 68   | 0.6439          | 0.8180 | 0.6439 |
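
Qwk above is quadratic weighted kappa. A minimal sketch of computing both evaluation metrics with scikit-learn follows; the score vectors are hypothetical, and rounding predictions to integer labels before computing kappa is an assumption.

```python
# Hypothetical sketch of the two evaluation metrics.
# y_true / y_pred are made-up integer scores, not data from this model.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [3, 1, 2, 4, 2]
y_pred = [3, 2, 2, 4, 1]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
mse = mean_squared_error(y_true, y_pred)
print(f"Qwk: {qwk:.4f}  Mse: {mse:.4f}")
```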

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1
