csukuangfj committed 3e1c130 (parent: b290754): add vad asr for korean

generate-vad-asr.py CHANGED (+6 -0)
@@ -120,6 +120,12 @@ see https://www.tablesgenerator.com/html_tables#
       <td class="tg-0pky"><a href="https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/silero_vad.onnx">silero_vad.onnx</a></td>
       <td class="tg-0pky"><a href="https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-thai-2024-06-20.tar.bz2">sherpa-onnx-zipformer-thai-2024-06-20.tar.bz2</a></td>
     </tr>
+    <tr>
+      <td class="tg-0pky">sherpa-onnx-x.y.z-arm64-v8a-vad_asr-ko-zipformer.apk</td>
+      <td class="tg-0lax">It supports only Korean. It is converted from <a href="https://huggingface.co/johnBamma/icefall-asr-ksponspeech-zipformer-2024-06-24">https://huggingface.co/johnBamma/icefall-asr-ksponspeech-zipformer-2024-06-24</a></td>
+      <td class="tg-0pky"><a href="https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/silero_vad.onnx">silero_vad.onnx</a></td>
+      <td class="tg-0pky"><a href="https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-zipformer-korean-2024-06-24.tar.bz2">sherpa-onnx-zipformer-korean-2024-06-24.tar.bz2</a></td>
+    </tr>
     <tr>
       <td class="tg-0pky">sherpa-onnx-x.y.z-arm64-v8a-vad_asr-be_de_en_es_fr_hr_it_pl_ru_uk-fast_conformer_ctc_20k.apk</td>
       <td class="tg-0lax">It supports <span style="color:red;">10 languages</span>: Belarusian, German, English, Spanish, French, Croatian, Italian, Polish, Russian, and Ukrainian. It is converted from <a href="https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_multilingual_fastconformer_hybrid_large_pc">STT Multilingual FastConformer Hybrid Transducer-CTC Large P&C</a> from <a href="https://github.com/NVIDIA/NeMo/">NVIDIA/NeMo</a>. Note that only the CTC branch is used. It is trained on ~20000 hours of data.</td>
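Each table row in the diff follows the same four-column pattern: APK name, description, VAD model link, ASR model link. A minimal sketch of how `generate-vad-asr.py` could emit such a row is below; the actual helpers in the upstream script are not shown in this diff, so the `ApkEntry` dataclass and `to_row` method here are hypothetical, with the URLs taken from the diff itself.

```python
from dataclasses import dataclass

# Release base URL for model downloads, as seen in the table's hrefs.
ASR_BASE = "https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models"


@dataclass
class ApkEntry:
    """One APK row. Hypothetical record type; the real script may differ."""

    apk: str          # APK file name shown in the first column
    description: str  # HTML fragment for the second column
    vad_model: str    # VAD model file name (linked from ASR_BASE)
    asr_model: str    # ASR model archive name (linked from ASR_BASE)

    def to_row(self) -> str:
        """Render one <tr> in the same style as the table in the diff."""
        return "\n".join([
            "    <tr>",
            f'      <td class="tg-0pky">{self.apk}</td>',
            f'      <td class="tg-0lax">{self.description}</td>',
            f'      <td class="tg-0pky"><a href="{ASR_BASE}/{self.vad_model}">{self.vad_model}</a></td>',
            f'      <td class="tg-0pky"><a href="{ASR_BASE}/{self.asr_model}">{self.asr_model}</a></td>',
            "    </tr>",
        ])


# The Korean entry added by this commit.
korean = ApkEntry(
    apk="sherpa-onnx-x.y.z-arm64-v8a-vad_asr-ko-zipformer.apk",
    description=(
        "It supports only Korean. It is converted from "
        '<a href="https://huggingface.co/johnBamma/icefall-asr-ksponspeech-zipformer-2024-06-24">'
        "https://huggingface.co/johnBamma/icefall-asr-ksponspeech-zipformer-2024-06-24</a>"
    ),
    vad_model="silero_vad.onnx",
    asr_model="sherpa-onnx-zipformer-korean-2024-06-24.tar.bz2",
)

print(korean.to_row())
```

Keeping the rows as data records makes the +6-line diff above a one-entry addition to a list rather than hand-edited HTML.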