asahi417 committed
Commit 7eb5752
1 Parent(s): 8ca7709

Update README.md

Files changed (1): README.md (+1 −2)
README.md CHANGED
@@ -221,7 +221,6 @@ pipe = pipeline(
     torch_dtype=torch_dtype,
     device=device,
     model_kwargs=model_kwargs,
-    chunk_length_s=15,
     batch_size=16
 )
 
@@ -230,7 +229,7 @@ dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
 sample = {"array": np.concatenate([i["array"] for i in dataset[:20]["audio"]]), "sampling_rate": dataset[0]['audio']['sampling_rate']}
 
 # run inference
-result = pipe(sample, generate_kwargs=generate_kwargs)
+result = pipe(sample, chunk_length_s=15, generate_kwargs=generate_kwargs)
 print(result["text"])
 ```
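The `sample` dict in the diff's context concatenates the first 20 audio clips into one long array so the pipeline can chunk it at inference time. A minimal sketch of that construction with synthetic clips standing in for `dataset[:20]["audio"]` (the 16 kHz rate and clip lengths here are assumptions for illustration):

```python
import numpy as np

# Stand-in for dataset[:20]["audio"]: 20 synthetic clips of varying length,
# each shaped like a datasets audio feature ({"array", "sampling_rate"})
clips = [{"array": np.zeros(16000 + 100 * i), "sampling_rate": 16000} for i in range(20)]

# Same pattern as the README snippet: one long array, one sampling rate
sample = {
    "array": np.concatenate([c["array"] for c in clips]),
    "sampling_rate": clips[0]["sampling_rate"],
}

# With the change above, chunking is requested per call rather than at
# pipeline construction, e.g.:
#   result = pipe(sample, chunk_length_s=15, generate_kwargs=generate_kwargs)
print(len(sample["array"]), sample["sampling_rate"])
```

Passing `chunk_length_s` at call time (rather than to `pipeline(...)`) lets each invocation pick its own chunk length for long-form inputs like this concatenated sample.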