Ericwang committed
Commit cdc5412
1 Parent(s): 374e9ae

Upload 14 files
README.md CHANGED
---
annotations_creators:
- crowdsourced
language:
- is
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: "Samrómur Children Icelandic Speech 1.0"
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- "samromur"
- children's speech
- 'icelandic: iceland'
- icelandic children
- icelandic kids
- kids
task_categories:
- automatic-speech-recognition
task_ids: []
---

# Dataset Card for samromur_children

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
- **Homepage:** [Samrómur Children Icelandic Speech 1.0](https://samromur.is/)
- **Repository:** [LDC](https://catalog.ldc.upenn.edu/LDC2022S11)
- **Paper:** [Samrómur Children: An Icelandic Speech Corpus](https://aclanthology.org/2022.lrec-1.105.pdf)
- **Point of Contact:** [Carlos Mena](mailto:[email protected]), [Jón Guðnason](mailto:[email protected])

### Dataset Summary

The Samrómur Children corpus consists of audio recordings and metadata files containing the prompts read by the participants. It contains more than 137000 validated speech recordings uttered by Icelandic children.

The corpus is the result of a crowd-sourcing effort run by the Language and Voice Lab (LVL) at Reykjavík University, in cooperation with Almannarómur, Center for Language Technology. The recording process started in October 2019 and continues to this day (September 2021).

### Example Usage
The Samrómur Children corpus is divided into three splits: train, validation, and test. To load the whole dataset:
```python
from datasets import load_dataset
samromur_children = load_dataset("language-and-voice-lab/samromur_children")
```
To load a specific split (for example, the validation split), pass it through the `split` argument:
```python
from datasets import load_dataset
samromur_children = load_dataset("language-and-voice-lab/samromur_children", split="validation")
```

### Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it into written text. The most common evaluation metric is the word error rate (WER).
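The word error rate mentioned above is the word-level edit distance between the reference transcription and the model's hypothesis, divided by the number of reference words. A minimal pure-Python sketch (standard dynamic-programming Levenshtein; in practice a library such as `jiwer` is typically used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words
    # into the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("hin unga bylting", "hin unga bylting"))  # 0.0
print(wer("hin unga bylting", "hin bylting"))       # one deletion over 3 words
```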

### Languages
The audio is in Icelandic.
The reading prompts were gathered from a variety of sources, mainly from the [Icelandic Gigaword Corpus](http://clarin.is/en/resources/gigaword). The corpus includes text from novels, news, plays, and from a list of location names in Iceland. The prompts also came from the [Icelandic Web of Science](https://www.visindavefur.is/).

## Dataset Structure

### Data Instances
```python
{
  'audio_id': '015652-0717240',
  'audio': {
    'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/2c6b0d82de2ef0dc0879732f726809cccbe6060664966099f43276e8c94b03f2/test/015652/015652-0717240.flac',
    'array': array([ 0.        ,  0.        ,  0.        , ..., -0.00311279,
                    -0.0007019 ,  0.00128174], dtype=float32),
    'sampling_rate': 16000
  },
  'speaker_id': '015652',
  'gender': 'female',
  'age': '11',
  'duration': 4.179999828338623,
  'normalized_text': 'eiginlega var hann hin unga rússneska bylting lifandi komin'
}
```

### Data Fields
* `audio_id` (string) - id of the audio segment
* `audio` (datasets.Audio) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio. In streaming mode, the path is the relative path of the audio inside its archive (as files are not downloaded and extracted locally).
* `speaker_id` (string) - id of the speaker
* `gender` (string) - gender of the speaker (male or female)
* `age` (string) - age of the speaker in years (the corpus covers speakers aged 4 to 17)
* `duration` (float32) - duration of the audio file in seconds
* `normalized_text` (string) - normalized transcription of the audio segment

### Data Splits
The corpus is split into train, dev, and test portions. The lengths of the portions are: train = 127h25m, test = 1h50m, dev = 1h50m.

To load a specific portion, see the "Example Usage" section above.
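As a quick sanity check, the split durations listed here add up to the 131-hour total reported for the corpus:

```python
def to_minutes(hours: int, minutes: int) -> int:
    """Convert an 'XXhYYm' style duration to total minutes."""
    return hours * 60 + minutes

# Split durations as listed above
split_minutes = {
    "train": to_minutes(127, 25),
    "test": to_minutes(1, 50),
    "dev": to_minutes(1, 50),
}
total = sum(split_minutes.values())
print(f"{total // 60}h{total % 60:02d}m")  # 131h05m
```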

## Dataset Creation

### Curation Rationale

In the field of Automatic Speech Recognition (ASR), it is a known fact that children's speech is particularly hard to recognize due to its high variability, produced by developmental changes in children's anatomy and speech production skills.

For this reason, the selection criteria for the train/dev/test portions have to take the children's age into account. Nevertheless, Samrómur Children is an unbalanced corpus in terms of the gender and age of the speakers: the corpus has, for example, a total of 1667 female speakers (73h38m) versus 1412 male speakers (52h26m).

These imbalances constrain the types of experiments that can be performed with the corpus. For example, an equal number of female and male speakers across certain age ranges is impossible. So, if one cannot have a perfectly balanced corpus in the training set, one can at least have it in the test portion.

The test portion of Samrómur Children was meticulously selected to cover the ages between 6 and 16 years for both female and male speakers. Each of these ages has, per gender, a total duration of 5 minutes.

The development portion of the corpus contains only speakers whose gender is unknown. Both the test and dev sets have a total duration of 1h50m each.

In order to allow fairer experiments, no speakers are shared between the train and test sets. There is, however, one speaker shared between the train and development sets, identifiable by the speaker ID 010363; no audio files are shared between these two sets.
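The speaker-disjointness constraint described above can be checked directly from the metadata files. A minimal sketch, using hypothetical TSV fragments in place of the real `metadata_train.tsv`/`metadata_test.tsv` (the real files use the same tab-separated layout with a `speaker_id` column):

```python
import csv
import io

# Hypothetical metadata fragments standing in for the shipped TSV files
train_tsv = "audio_id\tspeaker_id\n015652-0717240\t015652\n010363-0000001\t010363\n"
test_tsv = "audio_id\tspeaker_id\n020001-0000002\t020001\n"

def speakers(tsv_text: str) -> set:
    """Collect the set of speaker IDs appearing in a metadata TSV."""
    return {row["speaker_id"] for row in csv.DictReader(io.StringIO(tsv_text), delimiter="\t")}

overlap = speakers(train_tsv) & speakers(test_tsv)
print(sorted(overlap))  # [] -> no speaker appears in both train and test
```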

### Source Data

#### Initial Data Collection and Normalization

The data was collected using the website https://samromur.is, whose code is available at https://github.com/cadia-lvl/samromur. The age range selected for this corpus is between 4 and 17 years.

The original audio was collected at a 44.1 kHz or 48 kHz sampling rate as *.wav files, then down-sampled to 16 kHz and converted to *.flac. Each recording contains one sentence read from a script. The script contains 85,080 unique sentences and 90,838 unique tokens.

There was no identifier other than the session ID, which is used as the speaker ID. The corpus is distributed with a metadata file with detailed information on each utterance and speaker. The metadata file is encoded as UTF-8 Unicode.

The prompts were gathered from a variety of sources, mainly from The Icelandic Gigaword Corpus, which is available at http://clarin.is/en/resources/gigaword. The corpus includes text from novels, news, plays, and from a list of location names in Iceland. The prompts also came from the [Icelandic Web of Science](https://www.visindavefur.is/).
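As the data instance above suggests, an `audio_id` such as `015652-0717240` appears to combine the speaker/session ID with a per-recording ID. A small sketch of splitting it apart (the `recording_id` name is illustrative, not part of the corpus spec):

```python
def parse_audio_id(audio_id: str) -> dict:
    """Split an audio_id like '015652-0717240' into its two parts.

    The speaker/session ID comes before the hyphen; the remainder
    identifies the individual recording.
    """
    speaker_id, recording_id = audio_id.split("-", 1)
    return {"speaker_id": speaker_id, "recording_id": recording_id}

parsed = parse_audio_id("015652-0717240")
print(parsed)  # {'speaker_id': '015652', 'recording_id': '0717240'}
```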

### Annotations

#### Annotation process

Prompts were pulled from these corpora if they met the criteria of containing only letters present in the Icelandic alphabet and of being listed in [DIM: Database Icelandic Morphology](https://aclanthology.org/W19-6116.pdf).
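The letter-based criterion can be sketched as a simple character-set check. The alphabet below is the standard modern Icelandic alphabet, assumed here for illustration; the DIM lookup is out of scope:

```python
# Assumed: the 32 letters of the modern Icelandic alphabet (lowercase)
ICELANDIC = set("aábdðeéfghiíjklmnoóprstuúvxyýþæö")

def uses_icelandic_letters(prompt: str) -> bool:
    """True if every letter in the prompt belongs to the Icelandic alphabet."""
    letters = {ch.lower() for ch in prompt if ch.isalpha()}
    return letters <= ICELANDIC

print(uses_icelandic_letters("hin unga rússneska bylting"))  # True
print(uses_icelandic_letters("zebra"))  # False ('z' is not in the alphabet)
```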

There are also synthesised prompts consisting of a name followed by a question or a demand, in order to simulate a dialogue with a smart device.

#### Who are the annotators?
The content of the audio files was manually verified against the prompts by one or more listeners (mainly summer students).

### Personal and Sensitive Information
The dataset consists of people who have donated their voice. By using it, you agree not to attempt to determine the identity of the speakers in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset
This is the first ASR corpus of Icelandic children's speech.

### Discussion of Biases

* The utterances were recorded on a smartphone or via the web app.

* Participants self-reported their age group, gender, and native language.

* Participants are aged between 4 and 17 years.

* The corpus contains 137597 utterances from 3175 speakers, totalling 131 hours.

* Female speakers account for 73h38m of data, male speakers for 52h26m, and speakers whose gender is unknown for 05h02m.

* There are 1667 female speakers, 1412 male speakers, and 96 speakers whose gender is unknown.

* There are 78993 recordings from female speakers, 53927 from male speakers, and 4677 from speakers whose gender is unknown.
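The per-gender figures above are internally consistent with the quoted corpus totals, which is easy to verify:

```python
recordings = {"female": 78993, "male": 53927, "unknown": 4677}
speakers = {"female": 1667, "male": 1412, "unknown": 96}

# Both breakdowns sum to the totals quoted for the corpus
print(sum(recordings.values()))  # 137597 utterances
print(sum(speakers.values()))    # 3175 speakers
```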

### Other Known Limitations
"Samrómur Children: Icelandic Speech 21.09" by the Language and Voice Laboratory (LVL) at Reykjavík University is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) License, with the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

## Additional Information

### Dataset Curators

The corpus is the result of a crowd-sourcing effort run by the Language and Voice Lab (LVL) at Reykjavík University, in cooperation with Almannarómur, Center for Language Technology. The recording process started in October 2019 and continues to this day (September 2021). The corpus was curated by Carlos Daniel Hernández Mena in 2021.

### Licensing Information
[CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/)

### Citation Information
```
@misc{menasamromurchildren2021,
      title={Samrómur Children Icelandic Speech 1.0},
      ldc_catalog_no={LDC2022S11},
      DOI={https://doi.org/10.35111/frrj-qd60},
      author={Hernández Mena, Carlos Daniel and Borsky, Michal and Mollberg, David Erik and Guðmundsson, Smári Freyr and Hedström, Staffan and Pálsson, Ragnar and Jónsson, Ólafur Helgi and Þorsteinsdóttir, Sunneva and Guðmundsdóttir, Jóhanna Vigdís and Magnúsdóttir, Eydís Huld and Þórhallsdóttir, Ragnheiður and Guðnason, Jón},
      publisher={Reykjavík University},
      journal={Linguistic Data Consortium, Philadelphia},
      year={2019},
      url={https://catalog.ldc.upenn.edu/LDC2022S11},
}
```

### Contributions
This project was funded by the Language Technology Programme for Icelandic 2019-2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.

The verification of the dataset was funded by the Icelandic Directorate of Labour's Student Summer Job Program in 2020 and 2021.

Special thanks to the summer students for all the hard work.
corpus/files/metadata_dev.tsv ADDED
The diff for this file is too large to render. See raw diff

corpus/files/metadata_test.tsv ADDED
The diff for this file is too large to render. See raw diff

corpus/files/metadata_train.tsv ADDED
The diff for this file is too large to render. See raw diff
corpus/files/tars_dev.paths ADDED
    corpus/speech/dev.tar.gz

corpus/files/tars_test.paths ADDED
    corpus/speech/test.tar.gz

corpus/files/tars_train.paths ADDED
    corpus/speech/train/train_part_01.tar.gz
    corpus/speech/train/train_part_02.tar.gz
    corpus/speech/train/train_part_03.tar.gz
corpus/speech/dev.tar.gz ADDED (Git LFS pointer)
    version https://git-lfs.github.com/spec/v1
    oid sha256:f6b467138c28d0c279150e94e94e6380970d050a795ba436cd78ce2b89109fc9
    size 133

corpus/speech/test.tar.gz ADDED (Git LFS pointer)
    version https://git-lfs.github.com/spec/v1
    oid sha256:a9b5507ca75246c5feea36a0dd12b6db5af6ccd95881ef7469aebb86b3e8b674
    size 133

corpus/speech/train/train_part_01.tar.gz ADDED (Git LFS pointer)
    version https://git-lfs.github.com/spec/v1
    oid sha256:679837e7fe22f27bf138b98804e044e5694cf2674d2fbaf9eb625a032c2f0d97
    size 135

corpus/speech/train/train_part_02.tar.gz ADDED (Git LFS pointer)
    version https://git-lfs.github.com/spec/v1
    oid sha256:43c6fef6091e9f2a221bd60a63628c2afad1c9b6eb226087af4f5ab309d91aa0
    size 135

corpus/speech/train/train_part_03.tar.gz ADDED (Git LFS pointer)
    version https://git-lfs.github.com/spec/v1
    oid sha256:5618fad37d548b7698dd249f59faa30f2d439725af3d4e9a6652801247cb374a
    size 135
customized_features.py ADDED

import os
import warnings
from collections import defaultdict  # needed by decode_batch below
from io import BytesIO
from typing import Dict, Optional, Union

import datasets
import numpy as np


class customized_features(datasets.features.Audio):

    def decode_example(self, value):
        """Decode an example audio file into audio data.

        Args:
            value: Audio file path.

        Returns:
            dict
        """
        # TODO: backward compatibility for users without audio dependencies
        array, sampling_rate = (
            self._decode_example_with_torchaudio(value)
            if value.endswith(".mp3")
            else self._decode_example_with_librosa(value)
        )
        return {"path": value, "array": array, "sampling_rate": sampling_rate}

    def _decode_example_with_librosa(self, value):
        try:
            import librosa
        except ImportError as err:
            raise ImportError("To support decoding audio files, please install 'librosa'.") from err

        try:
            with open(value, "rb") as f:
                array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
        except Exception as e:
            warnings.warn(f"Error while reading {value} using librosa: {e}")
            array = np.empty(0)
            sampling_rate = self.sampling_rate
        return array, sampling_rate

    def _decode_example_with_torchaudio(self, value):
        try:
            import torchaudio
            import torchaudio.transforms as T
        except ImportError as err:
            raise ImportError("To support decoding 'mp3' audio files, please install 'torchaudio'.") from err
        try:
            torchaudio.set_audio_backend("sox_io")
        except RuntimeError as err:
            raise ImportError("To support decoding 'mp3' audio files, please install 'sox'.") from err

        array, sampling_rate = torchaudio.load(value)
        if self.sampling_rate and self.sampling_rate != sampling_rate:
            if not hasattr(self, "_resampler"):
                self._resampler = T.Resample(sampling_rate, self.sampling_rate)
            array = self._resampler(array)
            sampling_rate = self.sampling_rate
        array = array.numpy()
        if self.mono:
            array = array.mean(axis=0)
        return array, sampling_rate

    def decode_batch(self, values):
        decoded_batch = defaultdict(list)
        for value in values:
            decoded_example = self.decode_example(value)
            for k, v in decoded_example.items():
                decoded_batch[k].append(v)
        return dict(decoded_batch)
samromur_children.py ADDED

from collections import defaultdict
import os
import json
import csv

import datasets

import torchaudio
import warnings

_NAME = "samromur_children"
_VERSION = "1.0.0"
_AUDIO_EXTENSIONS = ".flac"

_DESCRIPTION = """
The Samrómur Children corpus contains more than 137000 validated speech-recordings uttered by Icelandic children.
"""

_CITATION = """
@misc{menasamromurchildren2022,
      title={Samrómur Children Icelandic Speech 1.0},
      ldc_catalog_no={LDC2022S11},
      DOI={https://doi.org/10.35111/frrj-qd60},
      author={Hernández Mena, Carlos Daniel and Borsky, Michal and Mollberg, David Erik and Guðmundsson, Smári Freyr and Hedström, Staffan and Pálsson, Ragnar and Jónsson, Ólafur Helgi and Þorsteinsdóttir, Sunneva and Guðmundsdóttir, Jóhanna Vigdís and Magnúsdóttir, Eydís Huld and Þórhallsdóttir, Ragnheiður and Guðnason, Jón},
      publisher={Reykjavík University},
      journal={Linguistic Data Consortium, Philadelphia},
      year={2019},
      url={https://catalog.ldc.upenn.edu/LDC2022S11},
}
"""

_HOMEPAGE = "https://catalog.ldc.upenn.edu/LDC2022S11"

_LICENSE = "CC-BY-4.0, See https://creativecommons.org/licenses/by/4.0/"

_BASE_DATA_DIR = "corpus/"
_METADATA_TRAIN = os.path.join(_BASE_DATA_DIR, "files", "metadata_train.tsv")
_METADATA_TEST = os.path.join(_BASE_DATA_DIR, "files", "metadata_test.tsv")
_METADATA_DEV = os.path.join(_BASE_DATA_DIR, "files", "metadata_dev.tsv")

_TARS_TRAIN = os.path.join(_BASE_DATA_DIR, "files", "tars_train.paths")
_TARS_TEST = os.path.join(_BASE_DATA_DIR, "files", "tars_test.paths")
_TARS_DEV = os.path.join(_BASE_DATA_DIR, "files", "tars_dev.paths")


class SamromurChildrenConfig(datasets.BuilderConfig):
    """BuilderConfig for Samromur Children"""

    def __init__(self, name, **kwargs):
        name = _NAME
        super().__init__(name=name, **kwargs)


class SamromurChildren(datasets.GeneratorBasedBuilder):
    """Samrómur Children Icelandic Speech 1.0"""

    VERSION = datasets.Version(_VERSION)
    BUILDER_CONFIGS = [
        SamromurChildrenConfig(
            name=_NAME,
            version=datasets.Version(_VERSION),
        )
    ]

    def _info(self):
        features = datasets.Features(
            {
                "audio_id": datasets.Value("string"),
                "audio": datasets.Audio(sampling_rate=16000),
                "speaker_id": datasets.Value("string"),
                "gender": datasets.Value("string"),
                "age": datasets.Value("string"),
                "duration": datasets.Value("float32"),
                "normalized_text": datasets.Value("string"),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):

        metadata_train = dl_manager.download_and_extract(_METADATA_TRAIN)
        metadata_test = dl_manager.download_and_extract(_METADATA_TEST)
        metadata_dev = dl_manager.download_and_extract(_METADATA_DEV)

        tars_train = dl_manager.download_and_extract(_TARS_TRAIN)
        tars_test = dl_manager.download_and_extract(_TARS_TEST)
        tars_dev = dl_manager.download_and_extract(_TARS_DEV)

        hash_tar_files = defaultdict(dict)
        with open(tars_train, 'r') as f:
            hash_tar_files['train'] = [path.replace('\n', '') for path in f]

        with open(tars_test, 'r') as f:
            hash_tar_files['test'] = [path.replace('\n', '') for path in f]

        with open(tars_dev, 'r') as f:
            hash_tar_files['dev'] = [path.replace('\n', '') for path in f]

        hash_meta_paths = {"train": metadata_train, "test": metadata_test, "dev": metadata_dev}
        audio_paths = dl_manager.download(hash_tar_files)

        splits = ["train", "dev", "test"]
        local_extracted_audio_paths = (
            dl_manager.extract(audio_paths) if not dl_manager.is_streaming else
            {
                split: [None] * len(audio_paths[split]) for split in splits
            }
        )

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_paths["train"]],
                    "local_extracted_archives_paths": local_extracted_audio_paths["train"],
                    "metadata_paths": hash_meta_paths["train"],
                }
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_paths["dev"]],
                    "local_extracted_archives_paths": local_extracted_audio_paths["dev"],
                    "metadata_paths": hash_meta_paths["dev"],
                }
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "audio_archives": [dl_manager.iter_archive(archive) for archive in audio_paths["test"]],
                    "local_extracted_archives_paths": local_extracted_audio_paths["test"],
                    "metadata_paths": hash_meta_paths["test"],
                }
            ),
        ]

    def _generate_examples(self, audio_archives, local_extracted_archives_paths, metadata_paths):

        features = ["speaker_id", "gender", "age", "duration", "normalized_text"]

        with open(metadata_paths) as f:
            metadata = {x["audio_id"]: x for x in csv.DictReader(f, delimiter="\t")}

        for audio_archive, local_extracted_archive_path in zip(audio_archives, local_extracted_archives_paths):
            for audio_filename, audio_file in audio_archive:
                # audio_id = audio_filename.split(os.sep)[-1].split(_AUDIO_EXTENSIONS)[0]
                audio_id = os.path.splitext(os.path.basename(audio_filename))[0]
                path = os.path.join(local_extracted_archive_path, audio_filename) if local_extracted_archive_path else audio_filename

                # Load the audio file using torchaudio
                waveform, sample_rate = torchaudio.load(path)

                # Skip files whose waveform is empty
                if waveform.numel() == 0:
                    warnings.warn(f"Empty audio file: {str(audio_id)}")
                    continue

                yield audio_id, {
                    "audio_id": audio_id,
                    **{feature: metadata[audio_id][feature] for feature in features},
                    "audio": {"path": path, "bytes": audio_file.read()},
                }