This project can load all kinds of Whisper models from Hugging Face, such as openai/whisper-medium and distil-whisper/distil-large-v3.
It currently runs on CPU, taking about 20 seconds on a 4-core CPU to transcribe 10 seconds of audio. You can load different Whisper models and compare their speed.
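As a rough illustration, here is a minimal sketch of how loading a chosen checkpoint and timing the transcription could look with the transformers pipeline; the checkpoint name and audio path are placeholders, not the Space's actual code:

```python
import time

import torch
from transformers import pipeline

# Any Whisper-style checkpoint from the Hub can be swapped in here,
# e.g. "openai/whisper-medium" or "distil-whisper/distil-large-v3".
MODEL_ID = "openai/whisper-tiny"

device = "cuda:0" if torch.cuda.is_available() else "cpu"

asr = pipeline("automatic-speech-recognition", model=MODEL_ID, device=device)

start = time.time()
result = asr("sample.wav")  # placeholder path to a short audio clip
elapsed = time.time() - start

print(f"Transcription Time: {elapsed:.2f} seconds")
print(f"Model Used: {MODEL_ID}")
print(f"Device Used: {'GPU' if device.startswith('cuda') else 'CPU'}")
print(result["text"])
```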
The project is based on whisper-v3-zero and was modified entirely by Claude 3 Opus; I hope it can also run on a T4 GPU or ZeroGPU. I will add more features later, such as transcription history and sending the result to an LLM for fast voice chat.
It is very slow on the free Hugging Face 2-core CPU, even with whisper-tiny:
Transcription Time: 74.03 seconds
Model Used: openai/whisper-tiny
Device Used: CPU
This Space now demonstrates Whisper on CPU; it no longer needs a T4 GPU.
Thanks to Hugging Face, I can now use ZeroGPU, and it is insanely fast. The ZeroGPU Space: https://huggingface.co/spaces/devilent2/whisper-v3-zero
For a 24-second audio clip:
Transcription Time: 0.79 seconds
Model Used: distil-whisper/distil-large-v3
Device Used: GPU
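For reference, a minimal sketch of how the ZeroGPU Space linked above might wrap the transcription call, assuming the standard `spaces` helper package that Hugging Face provides inside ZeroGPU Spaces; this is an illustration, not the Space's actual code:

```python
import spaces  # available inside Hugging Face ZeroGPU Spaces
import torch
from transformers import pipeline

# Assumption: fp16 distil-large-v3; any Whisper checkpoint works the same way.
asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3",
    torch_dtype=torch.float16,
    device="cuda",  # ZeroGPU attaches the GPU only while a @spaces.GPU call runs
)

@spaces.GPU  # request a GPU just for the duration of this function
def transcribe(audio_path: str) -> str:
    return asr(audio_path)["text"]
```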