# Finetune Whisper on Frisian and English
This model is a fine-tuned version of openai/whisper-small on the librispeech dataset. It achieves the following results on the evaluation set:
- Loss: 0.1307
- Wer: 3.9809
## Model description

More information needed
## Intended uses & limitations

More information needed
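As a minimal usage sketch with the 🤗 Transformers ASR pipeline (the repo id below is a placeholder, not this model's published id):

```python
from transformers import pipeline

# Placeholder repo id: substitute this model's actual Hub id.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-small-librispeech",
)

# Transcribe an audio file. Whisper expects 16 kHz input; when given a
# file path, the pipeline decodes and resamples it automatically.
result = asr("sample.wav")
print(result["text"])
```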
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
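The hyperparameter list itself is not recorded here. Purely as an illustrative sketch, a `Seq2SeqTrainingArguments` setup consistent with the results table below could look as follows; only `max_steps=1000` and `eval_steps=100` can be read off the table, and every other value is an assumption rather than this run's actual configuration:

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative configuration only. max_steps and eval_steps match the
# results table below; all remaining values are common Whisper
# fine-tuning choices, not facts about this particular run.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-librispeech",  # hypothetical path
    per_device_train_batch_size=16,            # assumption
    learning_rate=1e-5,                        # assumption
    warmup_steps=50,                           # assumption
    max_steps=1000,                            # matches the table
    evaluation_strategy="steps",
    eval_steps=100,                            # matches the table
    predict_with_generate=True,
    fp16=True,                                 # assumption
)
```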
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.525         | 0.5556 | 100  | 0.7431          | 3.4571 |
| 0.382         | 1.1111 | 200  | 0.5645          | 3.4836 |
| 0.1704        | 1.6667 | 300  | 0.2111          | 4.0237 |
| 0.0953        | 2.2222 | 400  | 0.1527          | 4.1114 |
| 0.0904        | 2.7778 | 500  | 0.1404          | 4.0400 |
| 0.0784        | 3.3333 | 600  | 0.1355          | 4.0482 |
| 0.0793        | 3.8889 | 700  | 0.1331          | 3.9768 |
| 0.0776        | 4.4444 | 800  | 0.1318          | 3.9646 |
| 0.0629        | 5.0    | 900  | 0.1310          | 3.9830 |
| 0.0746        | 5.5556 | 1000 | 0.1307          | 3.9809 |
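The Wer column is word error rate, reported here as a percentage. A short sketch of computing it with the 🤗 `evaluate` library (the sample strings are illustrative):

```python
import evaluate

# Load the word error rate metric.
wer_metric = evaluate.load("wer")

# Toy example: one substitution in six reference words.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# evaluate returns a fraction; multiply by 100 to match the table's scale.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # WER: 16.6667
```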