azuur committed
Commit
599a3c2
1 Parent(s): e4fecb2

update model card README.md

Files changed (1)
  1. README.md +6 -19
README.md CHANGED
@@ -14,10 +14,7 @@ should probably proofread and complete it, then remove this comment. -->

  # wav2vec2-base-gn-demo

- This model is a fine-tuned version of [azuur/wav2vec2-base-gn-demo](https://huggingface.co/azuur/wav2vec2-base-gn-demo) on the common_voice dataset.
- It achieves the following results on the evaluation set:
- - Loss: 1.2750
- - Wer: 0.7912
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.

  ## Model description

@@ -37,28 +34,18 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 0.0002
- - train_batch_size: 8
+ - train_batch_size: 32
  - eval_batch_size: 8
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 300
- - num_epochs: 55
+ - lr_scheduler_type: cosine_with_restarts
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 30
  - mixed_precision_training: Native AMP

- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Wer |
- |:-------------:|:-----:|:----:|:---------------:|:------:|
- | 0.07 | 13.16 | 500 | 1.3797 | 0.8293 |
- | 0.0711 | 26.32 | 1000 | 1.2878 | 0.8277 |
- | 0.0454 | 39.47 | 1500 | 1.2782 | 0.7973 |
- | 0.0281 | 52.63 | 2000 | 1.2750 | 0.7912 |
-
-
  ### Framework versions

  - Transformers 4.11.3
- - Pytorch 1.10.0+cu111
+ - Pytorch 1.10.2+cu102
  - Datasets 1.18.3
  - Tokenizers 0.10.3
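
The hyperparameters in the second hunk map directly onto `transformers.TrainingArguments`. Below is a sketch of the corresponding configuration, assuming single-device training (so the card's `train_batch_size` equals `per_device_train_batch_size`); `output_dir` is a placeholder, and the Adam settings shown in the card are the library defaults, so they need no explicit arguments:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the card's training setup; the values mirror
# the updated README, everything else is left at transformers defaults.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-gn-demo",        # placeholder path
    learning_rate=2e-4,                        # learning_rate: 0.0002
    per_device_train_batch_size=32,            # train_batch_size: 32, one device assumed
    per_device_eval_batch_size=8,              # eval_batch_size: 8
    seed=42,                                   # seed: 42
    lr_scheduler_type="cosine_with_restarts",  # lr_scheduler_type: cosine_with_restarts
    warmup_steps=100,                          # lr_scheduler_warmup_steps: 100
    num_train_epochs=30,                       # num_epochs: 30
    fp16=True,                                 # mixed_precision_training: Native AMP
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are already the defaults.
)
```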
 
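For readers of the updated card, a minimal inference sketch for the fine-tuned checkpoint. It assumes the `azuur/wav2vec2-base-gn-demo` repo bundles a `Wav2Vec2Processor` alongside the weights (typical for wav2vec2 fine-tunes, but not stated in the diff), and that `sample.wav` is a hypothetical 16 kHz mono recording, presumably Guaraní speech given the `gn` tag:

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "azuur/wav2vec2-base-gn-demo"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# wav2vec2 checkpoints expect 16 kHz input; resample first if needed.
speech, sampling_rate = sf.read("sample.wav")
assert sampling_rate == 16_000, "resample to 16 kHz before running the model"

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: take the highest-scoring token at each frame;
# batch_decode collapses repeats and blanks into the final transcription.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```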