Commit 20dea74
Parent(s): f72311b
tpu v5e
app.py CHANGED
@@ -26,14 +26,14 @@ YT_LENGTH_LIMIT_S = 7200  # limit to 2 hour YouTube files
 
 title = "Whisper JAX: The Fastest Whisper API ⚡️"
 
-description = """Whisper JAX is an optimised implementation of the [Whisper model](https://huggingface.co/openai/whisper-large-v3) by OpenAI.
+description = """Whisper JAX is an optimised implementation of the [Whisper model](https://huggingface.co/openai/whisper-large-v3) by OpenAI. This demo is running on JAX with a TPU v5e backend. Compared to PyTorch on an A100 GPU, it is over [**70x faster**](https://github.com/sanchit-gandhi/whisper-jax#benchmarks), making it the fastest Whisper API available.
 
 Note that at peak times, you may find yourself in the queue for this demo. When you submit a request, your queue position will be shown in the top right-hand side of the demo pane. Once you reach the front of the queue, your audio file will be transcribed, with the progress displayed through a progress bar.
 
-To skip the queue, you may wish to create your own inference endpoint, details for which can be found in the [Whisper JAX repository](https://github.com/sanchit-gandhi/whisper-jax#creating-an-endpoint).
+To skip the queue, you may wish to create your own inference endpoint by duplicating the demo, details for which can be found in the [Whisper JAX repository](https://github.com/sanchit-gandhi/whisper-jax#creating-an-endpoint).
 """
 
-article = "Whisper large-v3 model by OpenAI. Backend running JAX on a TPU
+article = "Whisper large-v3 model by OpenAI. Backend running JAX on a TPU v5e directly through Hugging Face Spaces. Whisper JAX [code](https://github.com/sanchit-gandhi/whisper-jax) and Gradio demo by 🤗 Hugging Face."
 
 language_names = sorted(TO_LANGUAGE_CODE.keys())
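For context, the strings touched by this commit are module-level UI text in app.py, alongside the language list built from `TO_LANGUAGE_CODE`. A minimal sketch of that pattern, using a hypothetical stand-in dict rather than the real `TO_LANGUAGE_CODE` mapping imported from `transformers`:

```python
# Stand-in for transformers' TO_LANGUAGE_CODE (hypothetical subset for illustration).
TO_LANGUAGE_CODE = {"french": "fr", "english": "en", "german": "de"}

# UI strings as updated by this commit.
title = "Whisper JAX: The Fastest Whisper API ⚡️"
article = (
    "Whisper large-v3 model by OpenAI. Backend running JAX on a TPU v5e "
    "directly through Hugging Face Spaces."
)

# The demo's language dropdown uses the names in alphabetical order.
language_names = sorted(TO_LANGUAGE_CODE.keys())
print(language_names)  # ['english', 'french', 'german']
```

In the real app these strings would typically be passed to the Gradio interface (e.g. as its `title`, `description`, and `article`), so editing them only changes the demo's displayed text, not its transcription logic.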