---
license: other
license_name: singapore-open-data-license
license_link: https://data.gov.sg/open-data-licence
task_categories:
- text-generation
- text-classification
- automatic-speech-recognition
- audio-classification
language:
- en
pretty_name: The Reprocessed Singapore National Speech Corpus
---

# Dataset Card for Reprocessed National Speech Corpus

*NOTE: This is a reprocessed version by KaraKaraWitch from Recursal.  
The official download can be found [here](https://www.imda.gov.sg/how-we-can-help/national-speech-corpus).*

## Dataset Details

### Dataset Description

The National Speech Corpus (NSC) is the first large-scale Singapore English corpus, sponsored by the Info-communications and Media Development Authority (IMDA) of Singapore. The objective is to serve as a primary resource of open speech data for automatic speech recognition (ASR) research and other applications related to speech processing.

Please note that this is a **reprocessed version** of the original corpus available [here](https://www.imda.gov.sg/how-we-can-help/national-speech-corpus).

- Curated by: [IMDA Singapore](https://www.imda.gov.sg/how-we-can-help/national-speech-corpus)
- Language: Singaporean English
- License: [Singapore Open Data License](https://beta.data.gov.sg/open-data-license)
- Reprocessed Version: KaraKaraWitch (recursal.ai)

This version differs from the original because the original's unusual file formats were hard to process. We modified the formatting and content to make the corpus easier to use in our dataset pipeline and for public Hugging Face use. For details on the changes and a list of original files and their modified counterparts, please see the documentation in the `Docs` folder.

Struck-through content denotes information that was deleted or reformatted for usability. All original data has been preserved and is available in the updated formats.

## Changes

- Converted the docx documentation into markdown text for ease of sharing. The conversion was done with verbatim output in mind, minimizing changes as much as possible.
- Converted the XLSX tables into UTF-8 CSV (a BOM is included due to Excel's CSV export; see the sketch after this list).
- All files are packaged as split tar archives containing standard JSON/JSONL files plus FLAC-compressed audio.
- We have not modified any text transcripts. Audio files were originally saved as .wav, which we have losslessly compressed to FLAC.
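
Because the converted CSVs carry a UTF-8 BOM, reading them with the `utf-8-sig` codec avoids a stray `\ufeff` in the first header. A minimal sketch (the file name `speakers.csv` is only an illustrative placeholder, not a file guaranteed to exist in the release):

```py
import csv

# Hypothetical file name; substitute any of the converted XLSX -> CSV tables.
CSV_PATH = "speakers.csv"

# "utf-8-sig" strips the BOM that Excel prepends when saving CSV,
# so the first column header is read cleanly.
with open(CSV_PATH, encoding="utf-8-sig", newline="") as f:
    for row in csv.DictReader(f):
        print(row)
        break
```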

## Uses

The National Speech Corpus (NSC) is a large-scale Singapore English corpus aimed at improving speech engines' recognition and transcription accuracy for locally accented English, supporting innovative digital solutions and driving progress in Singapore's digital landscape.

### Direct Use

The NSC can be used to improve Automatic Speech Recognition (ASR) research and speech-related applications, such as telco call centres transcribing calls for auditing and sentiment analysis, and chatbots that can accurately support the Singaporean accent.

### Out-of-Scope Use

The NSC is not intended for speech synthesis technology, but it can contribute to producing an AI voice that is more familiar to Singaporeans, with local terms pronounced more accurately.

## Dataset Structure

The entire NSC is approximately 1.2 TB (~800 GB with FLAC compression and JSON metadata) in size, consisting of 6 parts.

**Due to HF file size restrictions**, we have chunked the archives with the `split` command.
As such, you will need to recombine the files with `cat` or similar methods.

Using `DifferentRooms.tar` from `Part 3` as an Example:  
`cat "DifferentRooms-00.tar" "DifferentRooms-01.tar" [...] "DifferentRooms-06.tar" "DifferentRooms-07.tar" > "DifferentRooms.tar"`

After concatenating/combining the files, refer to the `Usage Example` section on how to use them.
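
If you prefer to recombine the parts from Python rather than the shell, the `split` output can be stitched back together by streaming each part into a single output file in sorted order. A minimal sketch, assuming the part files sit in the current directory and follow the `NAME-NN.tar` pattern shown above:

```py
import glob
import shutil

# Example for Part 3's DifferentRooms archive; adjust the prefix for other archives.
PREFIX = "DifferentRooms"

# Sorted so DifferentRooms-00.tar, -01.tar, ... are appended in order.
parts = sorted(glob.glob(f"{PREFIX}-*.tar"))

with open(f"{PREFIX}.tar", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Stream each chunk, so large parts never need to fit in RAM.
            shutil.copyfileobj(src, out)
```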

## Usage Example

The dataset can be loaded with webdataset.

```py
import webdataset as wds
# After concatenating, you may use the file like a regular dataset.

# The dataset is compatible with the WebDataset format. Example:

# Filepath to Part 1/Channel_0.tar
FILEPATH = "Channel_0.tar"

# Each sample is a dict keyed by file extension ("json", "flac").
hf_dataset = wds.WebDataset(FILEPATH).shuffle(1000)

for i in hf_dataset:
    # print(i)
    # Prints something like this:
    # i = {
    #     "__key__": "SP0402-CH00-SE00-RC023",
    #     "__url__": FILEPATH,
    #     "json": b'{"SpeakerID":402,"ChannelID":0,"SessionID":0,"RecordingID":23,"original_text":"I felt happy upon hearing the good news from my parents.","read_text":"I felt happy upon hearing the good news from my parents"}',
    #     "flac": b"",
    # }
    break
```
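
To turn the raw `flac` bytes from the plain `webdataset` iterator into audio samples, one option is to decode them in memory with `soundfile` (the same decoder suggested in the note below; any FLAC decoder would work). A minimal sketch under that assumption:

```py
import io
import json

import soundfile as sf
import webdataset as wds

FILEPATH = "Channel_0.tar"

for sample in wds.WebDataset(FILEPATH):
    # Transcript and speaker/session metadata stored as JSON bytes.
    meta = json.loads(sample["json"])
    # Decode the FLAC bytes into a float array plus its sampling rate.
    audio, sample_rate = sf.read(io.BytesIO(sample["flac"]))
    print(meta["read_text"], audio.shape, sample_rate)
    break
```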

If you need to use HF datasets, load it like so:

```py
from datasets import load_dataset

# The tar files are split due to HF limits. You will need to combine them first.
# You may use the following:
# `cat "DifferentRooms-00.tar" "DifferentRooms-01.tar" ... "DifferentRooms-06.tar" "DifferentRooms-07.tar" > "DifferentRooms.tar"`
# After concatenating, you may use the file like a regular dataset.

FILEPATH = "Channel_0.tar"

hf_dataset = load_dataset("webdataset", data_files={"train": FILEPATH}, split="train", streaming=True)

# NOTE: You will need to install 'librosa' and 'soundfile' to decode the flac file.

for i in hf_dataset:
    print(i)
    # Prints something like this:
    # {
    #     "__key__": "SP0402-CH00-SE00-RC001",
    #     "__url__": FILEPATH,
    #     "json": {
    #         "ChannelID": 0,
    #         "RecordingID": 1,
    #         "SessionID": 0,
    #         "SpeakerID": 402,
    #         "original_text": "Mary and her family were moving to another city.",
    #         "read_text": "Mary and her family were moving to another city",
    #     },
    #     "flac": {
    #         "path": "SP0402-CH00-SE00-RC001.flac",
    #         "array": array(
    #             [
    #                 0.00000000e00,
    #                 6.10351562e-05,
    #                 1.52587891e-04,
    #                 ...,
    #                 -2.44140625e-04,
    #                 -2.44140625e-04,
    #                 -1.83105469e-04,
    #             ]
    #         ),
    #         "sampling_rate": 16000,
    #     },
    # }
    break
```

## Other Notes

- Downloading from Dropbox is really not optimal. I sent an email to `[email protected]` but they didn't respond to me. 😔
  - Managed to get a response from them, though it took around a month and I had already downloaded all the files from the Dropbox link by then.
- The scripts have numerous text issues, notably missing quotes and incorrect encoding, which made them an absolute headache to process.
  - I'll probably release the processing tools at a later time.

## BibTeX Citation

```tex
@ONLINE{reprocessed_nationalspeechcorpus,
  title         = {The Reprocessed National Speech Corpus},
  author        = {KaraKaraWitch},
  year          = {2024},
  howpublished  = {\url{https://huggingface.co/datasets/recursal/reprocessed_national_speech_corpus}},
}

@ONLINE{imda_nationalspeechcorpus,
  title         = {IMDA National Speech Corpus},
  author        = {Infocomm Media Development Authority},
  year          = {2024},
  howpublished  = {\url{https://www.imda.gov.sg/how-we-can-help/national-speech-corpus}},
}
```

## Glossary

There is a compiled markdown file containing the original documents and various notes.

## Recursal's Vision

> To make AI accessible to everyone, regardless of language or economic status

This is the collective goal of the `RWKV Open Source foundation` and `Recursal AI`, the commercial entity that backs it.

We believe that AI should not be controlled by a select few organizations, and that it should be made accessible regardless of whether you are rich or poor, or a native speaker of English.

### About RWKV

RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.

The RWKV architecture scales efficiently and economically. As an RNN & Transformer hybrid, it provides performance similar to leading transformer models while having the compute and energy efficiency of an RNN-based architecture.

You can find out more about the project and the latest models at the following links:

- [https://blog.rwkv.com](https://blog.rwkv.com)
- [https://wiki.rwkv.com](https://wiki.rwkv.com)


### About Recursal AI

Recursal AI is the commercial entity built to support RWKV model development and its users, while providing commercial services via its public cloud or private-cloud / on-premise offerings.

As part of our vision, our commitment is to ensure open-source development of, and access to, the best foundational AI models and datasets.

The dataset/models provided here are part of that commitment.

You can find out more about Recursal AI here:

- [https://recursal.ai](https://recursal.ai)
- [https://blog.recursal.ai](https://blog.recursal.ai)

## Dataset Card Contact

For issues regarding this reprocessed dataset, you may use the **community discussions** thread. Regarding licensing, please refer to the *Singapore Open Data License* linked above. **We will close any issues regarding licensing** due to recent abuse.

For further enquiries on the **original** National Speech Corpus (raw data), please contact **<[email protected]>**.