# vit-base-patch16-224-imigue
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the TornikeO/imigue micro-level emotion classification dataset. It achieves the following results on the evaluation set (per-class values are the precision for that class; the F1 score is micro-averaged). A minimal inference sketch follows the list.
- eval_loss: 0.6450
- eval_accuracy: 0.8112
- eval_f1: 0.6905
- eval_arms_akimbo: 1.0
- eval_biting_nails: 0.0
- eval_buckle_button,_pulling_shirt_collar,_adjusting_tie: 0.8923
- eval_bulging_face,_deep_breath: 0.6162
- eval_covering_face: 0.8788
- eval_crossing_fingers: 0.8468
- eval_dustoffing_clothes: 0.77
- eval_folding_arms: 0.7598
- eval_head_up: 0.8182
- eval_hold_back_arms: 0.7015
- eval_illustrative_body_language: 0.8521
- eval_minaret_gesture: 0.9677
- eval_moving_torso: 0.7914
- eval_playing_with_or_adjusting_hair: 0.8393
- eval_playing_with_or_manipulating_objects: 0.9053
- eval_pressing_lips: 0.7363
- eval_putting_arms_behind_body: 0.0
- eval_rubbing_eyes: 0.8793
- eval_rubbing_or_holding_hands: 0.8180
- eval_scratching_back: 0.875
- eval_scratching_or_touching_arms: 0.7704
- eval_shaking_shoulders: 0.7051
- eval_sitting_upright: 0.7273
- eval_touching_ears: 0.8261
- eval_touching_hat: 0.9474
- eval_touching_jaw: 0.8979
- eval_touching_or_covering_suprasternal_notch: 1.0
- eval_touching_or_scratching_facial_parts: 0.8178
- eval_touching_or_scratching_forehead: 0.8
- eval_touching_or_scratching_head: 0.8913
- eval_touching_or_scratching_neck: 0.8788
- eval_turtle_neck: 1.0
- eval_runtime: 13.9155
- eval_samples_per_second: 869.752
- eval_steps_per_second: 3.449
- step: 0
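The checkpoint can be used like any `transformers` image-classification model. The sketch below is illustrative only: the Hub repo id `TornikeO/vit-base-patch16-224-imigue` is an assumption inferred from the dataset owner and model name, and `frame.jpg` is a placeholder input; substitute the actual repo id and your own image.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed repo id -- replace with the actual one if it differs.
model_id = "TornikeO/vit-base-patch16-224-imigue"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

# A single frame (placeholder path); ViT expects RGB input.
image = Image.open("frame.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # e.g. "folding_arms"
```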
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows this list):
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
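For reference, these hyperparameters map onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction, not the original training script: `output_dir` is a placeholder, and the total train batch size of 512 is taken to be 128 per device × 4 gradient-accumulation steps on a single device (the device count is an assumption).

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-imigue",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=256,
    seed=42,
    gradient_accumulation_steps=4,  # 128 * 4 = 512 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    fp16=True,  # native AMP mixed-precision training
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```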
### Framework versions
- Transformers 4.39.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2