Oysiyl committed on
Commit 3097155
1 Parent(s): 5050d40

End of training

README.md CHANGED
@@ -6,24 +6,7 @@ metrics:
  - wer
  model-index:
  - name: w2v-bert-2.0-ukrainian-colab-CV16.0
-   results:
-   - task:
-       name: Speech Recognition
-       type: automatic-speech-recognition
-     dataset:
-       name: Common Voice uk
-       type: common_voice
-       args: uk
-     metrics:
-     - name: Test WER
-       type: wer
-       value: 9.81
- license: mit
- datasets:
- - common_voice
- language:
- - uk
- pipeline_tag: automatic-speech-recognition
+   results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -33,8 +16,8 @@ should probably proofread and complete it, then remove this comment. -->

  This model is a fine-tuned version of [ylacombe/w2v-bert-2.0](https://huggingface.co/ylacombe/w2v-bert-2.0) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1386
- - Wer: 0.0981
+ - Loss: 0.1438
+ - Wer: 0.0987

  ## Model description

@@ -69,11 +52,11 @@ The following hyperparameters were used during training:

  | Training Loss | Epoch | Step | Validation Loss | Wer    |
  |:-------------:|:-----:|:----:|:---------------:|:------:|
- | 0.8074        | 1.98  | 520  | 0.1498          | 0.1461 |
- | 0.0694        | 3.96  | 1040 | 0.1243          | 0.1213 |
- | 0.0369        | 5.94  | 1560 | 0.1221          | 0.1059 |
- | 0.0214        | 7.92  | 2080 | 0.1257          | 0.0987 |
- | 0.0115        | 9.9   | 2600 | 0.1386          | 0.0981 |
+ | 1.0371        | 1.98  | 525  | 0.1509          | 0.1498 |
+ | 0.0728        | 3.96  | 1050 | 0.1256          | 0.1279 |
+ | 0.0382        | 5.94  | 1575 | 0.1260          | 0.1041 |
+ | 0.0213        | 7.92  | 2100 | 0.1333          | 0.0997 |
+ | 0.0118        | 9.91  | 2625 | 0.1438          | 0.0987 |


  ### Framework versions
@@ -81,4 +64,4 @@ The following hyperparameters were used during training:
  - Transformers 4.37.0.dev0
  - Pytorch 1.12.1+cu116
  - Datasets 2.4.0
- - Tokenizers 0.15.1
+ - Tokenizers 0.15.1
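Since the checkpoint is an automatic-speech-recognition model (the metadata removed above carried `pipeline_tag: automatic-speech-recognition`), it can be exercised with the standard `transformers` pipeline. Below is a minimal sketch, assuming the checkpoint is published as `Oysiyl/w2v-bert-2.0-ukrainian-colab-CV16.0` (a repo id inferred from the committer and model name, not stated in this diff), that `sample_uk.wav` is a placeholder recording, and that `jiwer` stands in as one common implementation of the WER metric reported in the card rather than the Trainer's own metric code.

```python
# Minimal inference sketch. Assumptions: the checkpoint is published as
# "Oysiyl/w2v-bert-2.0-ukrainian-colab-CV16.0" (inferred repo id) and
# "sample_uk.wav" is a placeholder path to a Ukrainian speech recording.
from transformers import pipeline

import jiwer  # one common WER implementation; not necessarily what the Trainer used

asr = pipeline(
    "automatic-speech-recognition",
    model="Oysiyl/w2v-bert-2.0-ukrainian-colab-CV16.0",  # assumed repo id
)

prediction = asr("sample_uk.wav")
print(prediction["text"])

# Word error rate for this single utterance against a placeholder reference.
reference = "приклад еталонної транскрипції"  # hypothetical ground-truth text
print(jiwer.wer(reference, prediction["text"]))
```

When given a filename, the pipeline decodes and resamples the audio itself, so no manual conversion to the model's 16 kHz input rate is needed (ffmpeg must be installed).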
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b1d8f51acf35f2c6e6d63d6a4852b631609a7fc156540ade78696c23f0cb1179
+ oid sha256:e137bcd8defc798591ffd8400a4be6a0dcfae73a9939f80e570ff2d1bd0ead87
  size 2422974460
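The `model.safetensors` entry is a Git LFS pointer rather than the weights themselves: `oid sha256:` is the SHA-256 digest of the stored blob and `size` is its length in bytes. A minimal verification sketch, assuming the real file has already been downloaded to a local path named `model.safetensors`:

```python
# Verify a locally downloaded "model.safetensors" against the LFS pointer above.
# EXPECTED_OID and EXPECTED_SIZE are copied from the new pointer in this commit.
import hashlib

EXPECTED_OID = "e137bcd8defc798591ffd8400a4be6a0dcfae73a9939f80e570ff2d1bd0ead87"
EXPECTED_SIZE = 2422974460  # bytes, from the "size" line

digest = hashlib.sha256()
size = 0
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        digest.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"unexpected size: {size}"
assert digest.hexdigest() == EXPECTED_OID, "checksum mismatch"
print("model.safetensors matches the LFS pointer")
```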
runs/Feb05_09-16-01_nbbl5s368u/events.out.tfevents.1707124967.nbbl5s368u.66.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:044568b077ac6e16defac985ea499ea28cf2d982d5c4ef030bbb91c025fef6c9
- size 8034
+ oid sha256:c51f1ba43127b0a3cce55f881c59b987267346cd5f61f36221dc0e55c5fe5091
+ size 8388