jbetker committed on
Commit
a064290
•
1 Parent(s): 84d641c

Update documentation, add optional verbosity

Files changed (5)
  1. README.md +151 -46
  2. api.py +58 -7
  3. tortoise_tts.ipynb +308 -161
  4. tortoise_v2_examples.html +0 -0
  5. utils/diffusion.py +2 -2
README.md CHANGED
@@ -1,77 +1,182 @@
1
- # Tortoise-TTS
2
 
3
- Tortoise TTS is an experimental text-to-speech program that uses recent machine learning techniques to generate
4
- high-quality speech samples.
 
 
5
 
6
  This repo contains all the code needed to run Tortoise TTS in inference mode.
7
 
8
  ## What's in a name?
9
 
10
  I'm naming my speech-related repos after Mojave desert flora and fauna. Tortoise is a bit tongue in cheek: this model
11
- is insanely slow. It leverages both an autoregressive speech alignment model and a diffusion model, both of which
12
- are known for their slow inference. It also performs CLIP sampling, which slows things down even further. You can
13
- expect ~5 seconds of speech to take ~30 seconds to produce on the latest hardware. Still, the results are pretty cool.
14
-
15
- ## What the heck is this?
16
 
17
- Tortoise TTS is inspired by OpenAI's DALLE, applied to speech data. It is made up of 4 separate models that work together.
18
- These models are all derived from different repositories which are all linked. All the models have been modified
19
- for this use case (some substantially so).
20
 
21
- First, an autoregressive transformer stack predicts discrete speech "tokens" given a text prompt. This model is very
22
- similar to the GPT model used by DALLE, except it operates on speech data.
23
- Based on: [GPT2 from Transformers](https://huggingface.co/docs/transformers/model_doc/gpt2)
24
 
25
- Next, a CLIP model judges a batch of outputs from the autoregressive transformer against the provided text and stack
26
- ranks the outputs according to most probable. You could use greedy or beam-search decoding but in my experience CLIP
27
- decoding creates considerably better results.
28
- Based on [CLIP from lucidrains](https://github.com/lucidrains/DALLE-pytorch/blob/main/dalle_pytorch/dalle_pytorch.py)
29
 
30
- Next, the speech "tokens" are decoded into a low-quality MEL spectrogram using a VQVAE.
31
- Based on [VQVAE2 by rosinality](https://github.com/rosinality/vq-vae-2-pytorch)
32
 
33
- Finally, the output of the VQVAE is further decoded by a UNet diffusion model into raw audio, which can be placed in
34
- a wav file.
35
- Based on [ImprovedDiffusion by openai](https://github.com/openai/improved-diffusion)
36
 
37
- ## How do I use this?
38
 
39
- Check out the colab: https://colab.research.google.com/drive/1wVVqUPqwiDBUVeWWOUNglpGhU3hg_cbR?usp=sharing
40
 
41
- Or on a computer with a GPU (with >=16GB of VRAM):
42
  ```shell
43
  git clone https://github.com/neonbjb/tortoise-tts.git
44
  cd tortoise-tts
45
  pip install -r requirements.txt
46
- python do_tts.py
47
  ```
48
 
49
- ## Hand-picked TTS samples
 
50
 
51
- I generated ~250 samples from 23 text prompts and 8 voices. The text prompts have never been seen by the model. The
52
- voices were pulled from the training set.
53
 
54
- All of the samples can be found in the results/ folder of this repo. I handpicked a few to show what the model is capable of:
 
55
 
56
- - [Atkins - Road not taken](results/favorites/atkins_road_not_taken.wav)
57
- - [Dotrice - Rolling Stone interview](results/favorites/dotrice_rollingstone.wav)
58
- - [Dotrice - 'Ornaments' from tacotron test set](results/favorites/dotrice_tacotron_samp1.wav)
59
- - [Kennard - 'Acute emotional intelligence' from tacotron test set](results/favorites/kennard_tacotron_samp2.wav)
60
- - [Mol - Because I could not stop for death](results/favorites/mol_dickenson.wav)
61
- - [Mol - Obama](results/favorites/mol_obama.wav)
62
 
63
- Prosody is remarkably good for poetry, despite the fact that it was never trained on poetry.
64
 
65
- ## How do I train this?
66
 
67
- Frankly - you don't. Building this model has been a labor of love for me, consuming most of my 6 RTX3090s worth of
68
- resources for the better part of 6 months. It uses a dataset I've gathered, refined and transcribed that consists of
69
- a lot of audio data which I cannot distribute because of copywrite or no open licenses.
 
 
70
 
71
- With that said, I'm willing to help you out if you really want to give it a shot. DM me.
 
72
 
73
  ## Looking forward
74
 
75
- I'm not satisfied with this yet. Treat this as a "sneak peek" and check back in a couple of months. I think the concept
76
- is sound, but there are a few hurdles to overcome to get sample quality up. I have been doing major tweaks to the
77
- diffusion model and should have something new and much better soon.
 
1
+ # TorToiSe
2
 
3
+ Tortoise is a text-to-speech program built with the following priorities:
4
+
5
+ 1. Strong multi-voice capabilities.
6
+ 2. Highly realistic prosody and intonation.
7
 
8
  This repo contains all the code needed to run Tortoise TTS in inference mode.
9
 
10
  ## What's in a name?
11
 
12
  I'm naming my speech-related repos after Mojave desert flora and fauna. Tortoise is a bit tongue in cheek: this model
13
+ is insanely slow. It leverages both an autoregressive decoder **and** a diffusion decoder, both of which are known for their low
14
+ sampling rates. On a K80, expect to generate a medium-sized sentence every 2 minutes.
 
 
 
15
 
16
+ ## Demos
 
 
17
 
18
+ See [this page](http://nonint.com/static/tortoise_v2_examples.html) for a large list of example outputs.
 
 
19
 
20
+ ## Usage guide
 
 
 
21
 
22
+ ### Colab
 
23
 
24
+ Colab is the easiest way to try this out. I've put together a notebook you can use here:
25
+ https://colab.research.google.com/drive/1wVVqUPqwiDBUVeWWOUNglpGhU3hg_cbR?usp=sharing
 
26
 
27
+ ### Installation
28
 
29
+ If you want to use this on your own computer, you must have an NVIDIA GPU. Installation:
30
 
 
31
  ```shell
32
  git clone https://github.com/neonbjb/tortoise-tts.git
33
  cd tortoise-tts
34
  pip install -r requirements.txt
 
35
  ```
36
 
37
+ ### do_tts.py
38
+
39
+ This script allows you to speak a single phrase with one or more voices.
40
+ ```shell
41
+ python do_tts.py --text "I'm going to speak this" --voice dotrice --preset fast
42
+ ```
43
+
44
+ ### read.py
45
+
46
+ This script provides tools for reading large amounts of text.
47
+ ```shell
48
+ python read.py --textfile <your text to be read> --voice dotrice
49
+ ```
50
+
51
+ ### API
52
+
53
+ Tortoise can be used programmatically, like so:
54
+
55
+ ```python
56
+ import api
+ import utils.audio
+
+ reference_clips = [utils.audio.load_audio(p, 22050) for p in clips_paths]
57
+ tts = api.TextToSpeech()
58
+ pcm_audio = tts.tts_with_preset("your text here", reference_clips, preset='fast')
59
+ ```
60
+
61
+ ## Voice customization guide
62
+
63
+ Tortoise was specifically trained to be a multi-speaker model. It accomplishes this by consulting reference clips.
64
+
65
+ These reference clips are recordings of a speaker that you provide to guide speech generation. These clips are used to determine many properties of the output, such as the pitch and tone of the voice, speaking speed, and even speaking defects like a lisp or stuttering. The reference clip is also used to determine non-voice related aspects of the audio output like volume, background noise, recording quality and reverb.
66
+
67
+ ### Provided voices
68
+
69
+ This repo comes with several pre-packaged voices. You will be familiar with many of them. :)
70
+
71
+ Most of the provided voices were not found in the training set. Experimentally, it seems that voices from the training set
72
+ produce more realistic outputs than those outside of the training set. The following voices come from the training set:
73
+ atkins, dotrice, grace, harris, kennard, lescault, mol, otto.
74
+
75
+ ### Adding a new voice
76
+
77
+ To add new voices to Tortoise, you will need to do the following:
78
+
79
+ 1. Gather audio clips of your speaker(s). Good sources are YouTube interviews (you can use youtube-dl to fetch the audio), audiobooks or podcasts. Guidelines for good clips are in the next section.
80
+ 2. Cut your clips into ~10 second segments. You want at least 3 clips. More is better, but I only experimented with up to 5 in my testing.
81
+ 3. Save the clips as WAV files in floating-point format with a 22,050 Hz sample rate.
82
+ 4. Create a subdirectory in voices/
83
+ 5. Put your clips in that subdirectory.
84
+ 6. Run tortoise utilities with --voice=<your_subdirectory_name> (see the example below).
85
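+
+ For example, assuming you put your clips in a hypothetical ```voices/myvoice/``` folder, you could then run:
+
+ ```shell
+ # 'myvoice' is a placeholder for whatever you named your subdirectory under voices/
+ python do_tts.py --text "Testing my new voice." --voice myvoice --preset fast
+ ```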
 
86
+ ### Picking good reference clips
 
87
 
88
+ As mentioned above, your reference clips have a profound impact on the output of Tortoise. Following are some tips for picking
89
+ good clips:
90
 
91
+ 1. Avoid clips with background music, noise or reverb. These clips were removed from the training dataset. Tortoise is unlikely to do well with them.
92
+ 2. Avoid speeches. These generally have distortion caused by the amplification system.
93
+ 3. Avoid clips from phone calls.
94
+ 4. Avoid clips that have excessive stuttering, stammering or words like "uh" or "like" in them.
95
+ 5. Try to find clips that are spoken in the way you want your output to sound. For example, if you want to hear your target voice read an audiobook, try to find clips of them reading a book.
96
+ 6. The text being spoken in the clips does not matter, but diverse text does seem to perform better.
97
 
98
+ ## Advanced Usage
99
 
100
+ ### Generation settings
101
 
102
+ Tortoise is primarily an autoregressive decoder model combined with a diffusion model. Both of these have a lot of knobs
103
+ that can be turned, which I've abstracted away for the sake of ease of use. I did this by generating thousands of clips using
104
+ various permutations of the settings and using a metric for voice realism and intelligibility to measure their effects. I've
105
+ set the defaults to the best overall settings I was able to find. For specific use-cases, it might be effective to play with
106
+ these settings (and it's very likely that I missed something!)
107
 
108
+ These settings are not available in the normal scripts packaged with Tortoise. They are available, however, in the API. See
109
+ ```api.tts``` for a full list.
110
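+
+ As a rough sketch (the parameter names come from the ```api.tts``` docstring; the values here are purely illustrative, not
+ tuned recommendations), overriding a few of these knobs through the API looks like this:
+
+ ```python
+ # Builds on the API example above; reference_clips is a list of loaded reference clips.
+ tts = api.TextToSpeech()
+ pcm_audio = tts.tts("Your text here", reference_clips,
+                     num_autoregressive_samples=256,  # fewer samples = faster, less chance of a "great" take
+                     diffusion_iterations=200,        # more steps = slower; gains taper off above ~250
+                     clvp_cvvp_slider=.7)             # closer to 1 = follow the text more closely
+ ```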
+
111
+ ### Playing with the voice latent
112
+
113
+ Tortoise ingests reference clips by feeding them individually through a small submodel that produces a point latent, then taking the mean of all of the produced latents. The experimentation I have done indicates that these point latents are quite expressive, affecting
114
+ everything from tone to speaking rate to speech abnormalities.
115
+
116
+ This lends itself to some neat tricks. For example, you can feed two different voices to Tortoise and it will output what it thinks the "average" of those two voices sounds like. You could also theoretically build a small extension to Tortoise that gradually shifts the
117
+ latent from one speaker to another, then apply it across a bit of spoken text (something I haven't implemented yet, but might
118
+ get to soon!). I am sure there are other interesting things that can be done here. Please let me know what you find!
119
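+
+ As a concrete illustration, here is a minimal sketch of the voice-averaging trick, following the same pattern the colab
+ notebook uses (the choice of 'atkins' and 'dotrice' is arbitrary; any two folders under voices/ will work):
+
+ ```python
+ from api import TextToSpeech
+ from utils.audio import load_audio, get_voices
+
+ voices = get_voices()
+ conds = []
+ for v in ['atkins', 'dotrice']:          # any two voices from the voices/ folder
+     for clip_path in voices[v]:
+         conds.append(load_audio(clip_path, 22050))
+
+ tts = TextToSpeech()
+ # Tortoise takes the mean of the latents of all supplied clips, yielding a blend of the two speakers.
+ blended = tts.tts_with_preset("This voice is a blend of two speakers.", conds, preset='fast')
+ ```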
+
120
+ ### Send me feedback!
121
+
122
+ Probabilistic models like Tortoise are best thought of as an "augmented search" - in this case, through the space of possible
123
+ utterances of a specific string of text. The impact of community involvement in perusing these spaces (such as is being done with
124
+ GPT-3 or CLIP) has really surprised me. If you find something neat that you can do with Tortoise that isn't documented here,
125
+ please report it to me! I would be glad to publish it to this page.
126
+
127
+ ## Model architecture
128
+
129
+ Tortoise TTS is inspired by OpenAI's DALLE, applied to speech data and using a better decoder. It is made up of 5 separate
130
+ models that work together. I've assembled a write-up of the system architecture here:
131
+ [https://nonint.com/2022/04/25/tortoise-architectural-design-doc/](https://nonint.com/2022/04/25/tortoise-architectural-design-doc/)
132
+
133
+ ## Training
134
+
135
+ These models were trained on my "homelab" server with 8 RTX 3090s over the course of several months. They were trained on a dataset consisting of
136
+ ~50k hours of speech data, most of which was transcribed by [ocotillo](http://www.github.com/neonbjb/ocotillo). Training was done on my own
137
+ [DLAS](https://github.com/neonbjb/DL-Art-School) trainer.
138
+
139
+ I currently do not have plans to release the training configurations or methodology. See the next section.
140
+
141
+ ## Ethical Considerations
142
+
143
+ Tortoise v2 works considerably better than I had planned. When I began hearing some of the outputs of the last few versions, I started
144
+ wondering whether or not I had an ethically unsound project on my hands. The ways in which a voice-cloning text-to-speech system
145
+ could be misused are many. It doesn't take much creativity to think up how.
146
+
147
+ After consulting with friends and family, I have decided to go forward with releasing this. Following are the reasons for this choice:
148
+
149
+ 1. It is primarily good at reading books and speaking poetry. Other forms of speech do not work well.
150
+ 2. It was trained on a dataset which does not have the voices of public figures. While it will attempt to mimic these voices if they are provided as references, it does not do so in such a way that most humans would be fooled.
151
+ 3. The above points could likely be resolved by scaling up the model and the dataset. For this reason, I am currently withholding details on how I trained the model, pending community feedback.
152
+ 4. I am releasing a separate classifier model which will tell you whether a given audio clip was generated by Tortoise or not. See `tortoise-detect` above.
153
+ 5. If I, a tinkerer with a BS in computer science and a ~$15k computer, can build this, then any motivated corporation or state can as well. I would prefer that it be in the open and that everyone know the kinds of things ML can do.
154
+
155
+ ### Diversity
156
+
157
+ The diversity expressed by ML models is strongly tied to the datasets they were trained on.
158
+
159
+ Tortoise was trained primarily on a dataset consisting of audiobooks. I made no effort to
160
+ balance diversity in this dataset. For this reason, Tortoise will be particularly poor at generating the voices of minorities
161
+ or of people who speak with strong accents.
162
 
163
  ## Looking forward
164
 
165
+ Tortoise v2 is about as good as I think I can do in the TTS world with the resources I have access to. A phenomenon that happens when
166
+ training very large models is that as parameter count increases, the communication bandwidth needed to support distributed training
167
+ of the model increases multiplicatively. On enterprise-grade hardware, this is not an issue: GPUs are attached together with
168
+ exceptionally wide buses that can accommodate this bandwidth. I cannot afford enterprise hardware, though, so I am stuck.
169
+
170
+ I want to mention here
171
+ that I think Tortoise could be a **lot** better. The three major components of Tortoise are either vanilla Transformer Encoder stacks
172
+ or Decoder stacks. Both of these types of models have a rich experimental history with scaling in the NLP realm. I see no reason
173
+ to believe that the same is not true of TTS.
174
+
175
+ The largest model in Tortoise v2 is considerably smaller than GPT-2 large. It is 20x smaller than the original DALLE transformer.
176
+ Imagine what a TTS model trained at or near GPT-3 or DALLE scale could achieve.
177
+
178
+ ## Notice
179
+
180
+ Tortoise was built entirely by me using my own hardware. My employer was not involved in any facet of Tortoise's development.
181
+
182
+ If you use this repo or the ideas therein for your research, please cite it! A bibtex entry can be found in the right pane on GitHub.
api.py CHANGED
@@ -119,7 +119,7 @@ def fix_autoregressive_output(codes, stop_token, complain=True):
119
  return codes
120
 
121
 
122
- def do_spectrogram_diffusion(diffusion_model, diffuser, latents, conditioning_samples, temperature=1):
123
  """
124
  Uses the specified diffusion model to convert discrete codes into a spectrogram.
125
  """
@@ -139,7 +139,8 @@ def do_spectrogram_diffusion(diffusion_model, diffuser, latents, conditioning_sa
139
 
140
  noise = torch.randn(output_shape, device=latents.device) * temperature
141
  mel = diffuser.p_sample_loop(diffusion_model, output_shape, noise=noise,
142
- model_kwargs={'precomputed_aligned_embeddings': precomputed_embeddings})
 
143
  return denormalize_tacotron_mel(mel)[:,:,:output_seq_len]
144
 
145
 
@@ -203,14 +204,59 @@ class TextToSpeech:
203
  kwargs.update(presets[preset])
204
  return self.tts(text, voice_samples, **kwargs)
205
 
206
- def tts(self, text, voice_samples, k=1,
207
  # autoregressive generation parameters follow
208
  num_autoregressive_samples=512, temperature=.8, length_penalty=1, repetition_penalty=2.0, top_p=.8, max_mel_tokens=500,
 
209
  # CLVP & CVVP parameters
210
  clvp_cvvp_slider=.5,
211
  # diffusion generation parameters follow
212
  diffusion_iterations=100, cond_free=True, cond_free_k=2, diffusion_temperature=1.0,
213
  **hf_generate_kwargs):
 
214
  text = torch.IntTensor(self.tokenizer.encode(text)).unsqueeze(0).cuda()
215
  text = F.pad(text, (0, 1)) # This may not be necessary.
216
 
@@ -229,7 +275,9 @@ class TextToSpeech:
229
  stop_mel_token = self.autoregressive.stop_mel_token
230
  calm_token = 83 # This is the token for coding silence, which is fixed in place with "fix_autoregressive_output"
231
  self.autoregressive = self.autoregressive.cuda()
232
- for b in tqdm(range(num_batches)):
 
 
233
  codes = self.autoregressive.inference_speech(conds, text,
234
  do_sample=True,
235
  top_p=top_p,
@@ -247,7 +295,9 @@ class TextToSpeech:
247
  clip_results = []
248
  self.clvp = self.clvp.cuda()
249
  self.cvvp = self.cvvp.cuda()
250
- for batch in samples:
 
 
251
  for i in range(batch.shape[0]):
252
  batch[i] = fix_autoregressive_output(batch[i], stop_mel_token)
253
  clvp = self.clvp(text.repeat(batch.shape[0], 1), batch, return_loss=False)
@@ -272,7 +322,8 @@ class TextToSpeech:
272
  return_latent=True, clip_inputs=False)
273
  self.autoregressive = self.autoregressive.cpu()
274
 
275
- print("Performing vocoding..")
 
276
  wav_candidates = []
277
  self.diffusion = self.diffusion.cuda()
278
  self.vocoder = self.vocoder.cuda()
@@ -291,7 +342,7 @@ class TextToSpeech:
291
  latents = latents[:, :k]
292
  break
293
 
294
- mel = do_spectrogram_diffusion(self.diffusion, diffuser, latents, voice_samples, temperature=diffusion_temperature)
295
  wav = self.vocoder.inference(mel)
296
  wav_candidates.append(wav.cpu())
297
  self.diffusion = self.diffusion.cpu()
 
119
  return codes
120
 
121
 
122
+ def do_spectrogram_diffusion(diffusion_model, diffuser, latents, conditioning_samples, temperature=1, verbose=True):
123
  """
124
  Uses the specified diffusion model to convert discrete codes into a spectrogram.
125
  """
 
139
 
140
  noise = torch.randn(output_shape, device=latents.device) * temperature
141
  mel = diffuser.p_sample_loop(diffusion_model, output_shape, noise=noise,
142
+ model_kwargs={'precomputed_aligned_embeddings': precomputed_embeddings},
143
+ progress=verbose)
144
  return denormalize_tacotron_mel(mel)[:,:,:output_seq_len]
145
 
146
 
 
204
  kwargs.update(presets[preset])
205
  return self.tts(text, voice_samples, **kwargs)
206
 
207
+ def tts(self, text, voice_samples, k=1, verbose=True,
208
  # autoregressive generation parameters follow
209
  num_autoregressive_samples=512, temperature=.8, length_penalty=1, repetition_penalty=2.0, top_p=.8, max_mel_tokens=500,
210
+ typical_sampling=False, typical_mass=.9,
211
  # CLVP & CVVP parameters
212
  clvp_cvvp_slider=.5,
213
  # diffusion generation parameters follow
214
  diffusion_iterations=100, cond_free=True, cond_free_k=2, diffusion_temperature=1.0,
215
  **hf_generate_kwargs):
216
+ """
217
+ Produces an audio clip of the given text being spoken with the given reference voice.
218
+ :param text: Text to be spoken.
219
+ :param voice_samples: List of 2 or more ~10 second reference clips which should be torch tensors containing 22.05kHz waveform data.
220
+ :param k: The number of returned clips. The most likely (as determined by Tortoise's CLVP and CVVP models) clips are returned.
221
+ :param verbose: Whether or not to print log messages indicating the progress of creating a clip. Default=true.
222
+ ~~AUTOREGRESSIVE KNOBS~~
223
+ :param num_autoregressive_samples: Number of samples taken from the autoregressive model, all of which are filtered using CLVP+CVVP.
224
+ As Tortoise is a probabilistic model, more samples means a higher probability of creating something "great".
225
+ :param temperature: The softmax temperature of the autoregressive model.
226
+ :param length_penalty: A length penalty applied to the autoregressive decoder. Higher settings cause the model to produce more terse outputs.
227
+ :param repetition_penalty: A penalty that prevents the autoregressive decoder from repeating itself during decoding. Can be used to reduce the incidence
228
+ of long silences or "uhhhhhhs", etc.
229
+ :param top_p: P value used in nucleus sampling. (0,1]. Lower values mean the decoder produces more "likely" (aka boring) outputs.
230
+ :param max_mel_tokens: Restricts the output length. (0,600] integer. Each unit is 1/20 of a second.
231
+ :param typical_sampling: Turns typical sampling on or off. This sampling mode is discussed in this paper: https://arxiv.org/abs/2202.00666
232
+ I was interested in the premise, but the results were not as good as I was hoping. This is off by default, but
233
+ could use some tuning.
234
+ :param typical_mass: The typical_mass parameter from the typical_sampling algorithm.
235
+ ~~CLVP-CVVP KNOBS~~
236
+ :param clvp_cvvp_slider: Controls the influence of the CLVP and CVVP models in selecting the best output from the autoregressive model.
237
+ [0,1]. Values closer to 1 will cause Tortoise to emit clips that follow the text more. Values closer to
238
+ 0 will cause Tortoise to emit clips that more closely follow the reference clip (e.g. the voice sounds more
239
+ similar).
240
+ ~~DIFFUSION KNOBS~~
241
+ :param diffusion_iterations: Number of diffusion steps to perform. [0,4000]. More steps means the network has more chances to iteratively refine
242
+ the output, which should theoretically mean a higher quality output. Generally a value above 250 is not noticeably better,
243
+ however.
244
+ :param cond_free: Whether or not to perform conditioning-free diffusion. Conditioning-free diffusion performs two forward passes for
245
+ each diffusion step: one with the outputs of the autoregressive model and one with no conditioning priors. The output
246
+ of the two is blended according to the cond_free_k value below. Conditioning-free diffusion is the real deal, and
247
+ dramatically improves realism.
248
+ :param cond_free_k: Knob that determines how to balance the conditioning free signal with the conditioning-present signal. [0,inf].
249
+ As cond_free_k increases, the output becomes dominated by the conditioning-free signal.
250
+ Formula is: output=cond_present_output*(cond_free_k+1)-cond_absent_output*cond_free_k
251
+ :param diffusion_temperature: Controls the variance of the noise fed into the diffusion model. [0,1]. Values at 0
252
+ are the "mean" prediction of the diffusion network and will sound bland and smeared.
253
+ ~~OTHER STUFF~~
254
+ :param hf_generate_kwargs: The huggingface Transformers generate API is used for the autoregressive transformer.
255
+ Extra keyword args fed to this function get forwarded directly to that API. Documentation
256
+ here: https://huggingface.co/docs/transformers/internal/generation_utils
257
+ :return: Generated audio clip(s) as a torch tensor. Shape (1,S) if k=1, else (k,1,S), where S is the sample length.
258
+ Sample rate is 24kHz.
259
+ """
260
  text = torch.IntTensor(self.tokenizer.encode(text)).unsqueeze(0).cuda()
261
  text = F.pad(text, (0, 1)) # This may not be necessary.
262
 
 
275
  stop_mel_token = self.autoregressive.stop_mel_token
276
  calm_token = 83 # This is the token for coding silence, which is fixed in place with "fix_autoregressive_output"
277
  self.autoregressive = self.autoregressive.cuda()
278
+ if verbose:
279
+ print("Generating autoregressive samples..")
280
+ for b in tqdm(range(num_batches), disable=not verbose):
281
  codes = self.autoregressive.inference_speech(conds, text,
282
  do_sample=True,
283
  top_p=top_p,
 
295
  clip_results = []
296
  self.clvp = self.clvp.cuda()
297
  self.cvvp = self.cvvp.cuda()
298
+ if verbose:
299
+ print("Computing best candidates using CLVP and CVVP")
300
+ for batch in tqdm(samples, disable=not verbose):
301
  for i in range(batch.shape[0]):
302
  batch[i] = fix_autoregressive_output(batch[i], stop_mel_token)
303
  clvp = self.clvp(text.repeat(batch.shape[0], 1), batch, return_loss=False)
 
322
  return_latent=True, clip_inputs=False)
323
  self.autoregressive = self.autoregressive.cpu()
324
 
325
+ if verbose:
326
+ print("Transforming autoregressive outputs into audio..")
327
  wav_candidates = []
328
  self.diffusion = self.diffusion.cuda()
329
  self.vocoder = self.vocoder.cuda()
 
342
  latents = latents[:, :k]
343
  break
344
 
345
+ mel = do_spectrogram_diffusion(self.diffusion, diffuser, latents, voice_samples, temperature=diffusion_temperature, verbose=verbose)
346
  wav = self.vocoder.inference(mel)
347
  wav_candidates.append(wav.cpu())
348
  self.diffusion = self.diffusion.cpu()
tortoise_tts.ipynb CHANGED
@@ -17,13 +17,105 @@
17
  "accelerator": "GPU"
18
  },
19
  "cells": [
 
20
  {
21
  "cell_type": "code",
22
  "execution_count": null,
23
  "metadata": {
24
- "id": "JrK20I32grP6"
 
 
 
 
25
  },
26
- "outputs": [],
 
27
  "source": [
28
  "!git clone https://github.com/neonbjb/tortoise-tts.git\n",
29
  "%cd tortoise-tts\n",
@@ -38,58 +130,156 @@
38
  "import torchaudio\n",
39
  "import torch.nn as nn\n",
40
  "import torch.nn.functional as F\n",
41
- "from tqdm import tqdm\n",
42
  "\n",
43
- "from utils.tokenizer import VoiceBpeTokenizer\n",
44
- "from models.discrete_diffusion_vocoder import DiscreteDiffusionVocoder\n",
45
- "from models.text_voice_clip import VoiceCLIP\n",
46
- "from models.dvae import DiscreteVAE\n",
47
- "from models.autoregressive import UnifiedVoice\n",
48
  "\n",
49
- "# These have some fairly interesting code that is hidden in the colab. Consider checking it out.\n",
50
- "from do_tts import download_models, load_discrete_vocoder_diffuser, load_conditioning, fix_autoregressive_output, do_spectrogram_diffusion"
51
  ],
52
  "metadata": {
53
- "id": "Gen09NM4hONQ"
 
 
 
 
54
  },
55
  "execution_count": null,
56
- "outputs": []
 
  },
58
  {
59
  "cell_type": "code",
60
  "source": [
61
- "# Download pretrained models and set up pretrained voice bank. Feel free to upload and add your own voices here.\n",
62
- "# To do so, upload two WAV files cropped to 5-10 seconds of someone speaking.\n",
63
- "download_models()\n",
64
- "preselected_cond_voices = {\n",
65
- " # Male voices\n",
66
- " 'dotrice': ['voices/dotrice/1.wav', 'voices/dotrice/2.wav'],\n",
67
- " 'harris': ['voices/harris/1.wav', 'voices/harris/2.wav'],\n",
68
- " 'lescault': ['voices/lescault/1.wav', 'voices/lescault/2.wav'],\n",
69
- " 'otto': ['voices/otto/1.wav', 'voices/otto/2.wav'],\n",
70
- " # Female voices\n",
71
- " 'atkins': ['voices/atkins/1.wav', 'voices/atkins/2.wav'],\n",
72
- " 'grace': ['voices/grace/1.wav', 'voices/grace/2.wav'],\n",
73
- " 'kennard': ['voices/kennard/1.wav', 'voices/kennard/2.wav'],\n",
74
- " 'mol': ['voices/mol/1.wav', 'voices/mol/2.wav'],\n",
75
- " }"
76
  ],
77
  "metadata": {
78
- "id": "SSleVnRAiEE2"
 
 
 
 
79
  },
80
  "execution_count": null,
81
- "outputs": []
 
  },
83
  {
84
  "cell_type": "code",
85
  "source": [
86
  "# This is the text that will be spoken.\n",
87
- "text = \"And took the other as just as fair, and having perhaps the better claim, because it was grassy and wanted wear.\"\n",
88
- "# This is the voice that will speak it.\n",
89
- "voice = 'atkins'\n",
90
- "# This is the number of samples we will generate from the DALLE-style model. More will produce better results, but will take longer to produce.\n",
91
- "# I don't recommend going less than 128.\n",
92
- "num_autoregressive_samples = 128"
93
  ],
94
  "metadata": {
95
  "id": "bt_aoxONjfL2"
@@ -100,149 +290,106 @@
100
  {
101
  "cell_type": "code",
102
  "source": [
103
- "# Prepare data.\n",
104
- "tokenizer = VoiceBpeTokenizer()\n",
105
- "text = torch.IntTensor(tokenizer.encode(text)).unsqueeze(0).cuda()\n",
106
- "text = F.pad(text, (0,1)) # This may not be necessary.\n",
107
- "cond_paths = preselected_cond_voices[voice]\n",
108
  "conds = []\n",
109
  "for cond_path in cond_paths:\n",
110
- " c, cond_wav = load_conditioning(cond_path)\n",
111
  " conds.append(c)\n",
112
- "conds = torch.stack(conds, dim=1) # And just use the last cond_wav for the diffusion model."
113
- ],
114
- "metadata": {
115
- "id": "KEXOKjIvn6NW"
116
- },
117
- "execution_count": null,
118
- "outputs": []
119
- },
120
- {
121
- "cell_type": "code",
122
- "source": [
123
- "# Load the autoregressive model.\n",
124
- "autoregressive = UnifiedVoice(max_mel_tokens=300, max_text_tokens=200, max_conditioning_inputs=2, layers=30, model_dim=1024,\n",
125
- " heads=16, number_text_tokens=256, start_text_token=255, checkpointing=False, train_solo_embeddings=False).cuda().eval()\n",
126
- "autoregressive.load_state_dict(torch.load('.models/autoregressive.pth'))\n",
127
- "stop_mel_token = autoregressive.stop_mel_token"
128
- ],
129
- "metadata": {
130
- "id": "Z15xFT_uhP8v"
131
- },
132
- "execution_count": null,
133
- "outputs": []
134
- },
135
- {
136
- "cell_type": "code",
137
- "source": [
138
- "# Perform inference with the autoregressive model, generating num_autoregressive_samples\n",
139
- "with torch.no_grad():\n",
140
- " samples = []\n",
141
- " for b in tqdm(range(num_autoregressive_samples // 16)):\n",
142
- " codes = autoregressive.inference_speech(conds, text, num_beams=1, repetition_penalty=1.0, do_sample=True, top_k=50, top_p=.95,\n",
143
- " temperature=.9, num_return_sequences=16, length_penalty=1)\n",
144
- " padding_needed = 250 - codes.shape[1]\n",
145
- " codes = F.pad(codes, (0, padding_needed), value=stop_mel_token)\n",
146
- " samples.append(codes)\n",
147
  "\n",
148
- "# Delete model weights to conserve memory.\n",
149
- "del autoregressive"
150
  ],
151
  "metadata": {
152
- "id": "xajqWiEik-j0"
 
 
 
 
153
  },
154
  "execution_count": null,
155
- "outputs": []
156
- },
157
- {
158
- "cell_type": "code",
159
- "source": [
160
- "# Load the CLIP model.\n",
161
- "clip = VoiceCLIP(dim_text=512, dim_speech=512, dim_latent=512, num_text_tokens=256, text_enc_depth=8, text_seq_len=120, text_heads=8,\n",
162
- " num_speech_tokens=8192, speech_enc_depth=10, speech_heads=8, speech_seq_len=250).cuda().eval()\n",
163
- "clip.load_state_dict(torch.load('.models/clip.pth'))"
164
- ],
165
- "metadata": {
166
- "id": "KNgYSyuyliMs"
167
- },
168
- "execution_count": null,
169
- "outputs": []
 
  },
171
  {
172
  "cell_type": "code",
173
  "source": [
174
- "# Use the CLIP model to select the best autoregressive output to match the given text.\n",
175
- "clip_results = []\n",
176
- "with torch.no_grad():\n",
177
- " for batch in samples:\n",
178
- " for i in range(batch.shape[0]):\n",
179
- " batch[i] = fix_autoregressive_output(batch[i], stop_mel_token)\n",
180
- " text = text[:, :120] # Ugly hack to fix the fact that I didn't train CLIP to handle long enough text.\n",
181
- " clip_results.append(clip(text.repeat(batch.shape[0], 1),\n",
182
- " torch.full((batch.shape[0],), fill_value=text.shape[1]-1, dtype=torch.long, device='cuda'),\n",
183
- " batch, torch.full((batch.shape[0],), fill_value=batch.shape[1]*1024, dtype=torch.long, device='cuda'),\n",
184
- " return_loss=False))\n",
185
- " clip_results = torch.cat(clip_results, dim=0)\n",
186
- " samples = torch.cat(samples, dim=0)\n",
187
- " best_results = samples[torch.topk(clip_results, k=1).indices]\n",
188
  "\n",
189
- "# Save samples to CPU memory, delete clip to conserve memory.\n",
190
- "samples = samples.cpu()\n",
191
- "del clip"
192
- ],
193
- "metadata": {
194
- "id": "DDXkM0lclp4U"
195
- },
196
- "execution_count": null,
197
- "outputs": []
198
- },
199
- {
200
- "cell_type": "code",
201
- "source": [
202
- "# Load the DVAE and diffusion model.\n",
203
- "dvae = DiscreteVAE(positional_dims=1, channels=80, hidden_dim=512, num_resnet_blocks=3, codebook_dim=512, num_tokens=8192, num_layers=2,\n",
204
- " record_codes=True, kernel_size=3, use_transposed_convs=False).cuda().eval()\n",
205
- "dvae.load_state_dict(torch.load('.models/dvae.pth'), strict=False)\n",
206
- "diffusion = DiscreteDiffusionVocoder(model_channels=128, dvae_dim=80, channel_mult=[1, 1, 1.5, 2, 3, 4, 6, 8, 8, 8, 8], num_res_blocks=[1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1],\n",
207
- " spectrogram_conditioning_resolutions=[2,512], attention_resolutions=[512,1024], num_heads=4, kernel_size=3, scale_factor=2,\n",
208
- " conditioning_inputs_provided=True, time_embed_dim_multiplier=4).cuda().eval()\n",
209
- "diffusion.load_state_dict(torch.load('.models/diffusion.pth'))\n",
210
- "diffuser = load_discrete_vocoder_diffuser(desired_diffusion_steps=100)"
211
- ],
212
- "metadata": {
213
- "id": "97acSnBal8Q2"
214
- },
215
- "execution_count": null,
216
- "outputs": []
217
- },
218
- {
219
- "cell_type": "code",
220
- "source": [
221
- "# Decode the (best) discrete sequence created by the autoregressive model.\n",
222
- "with torch.no_grad():\n",
223
- " for b in range(best_results.shape[0]):\n",
224
- " code = best_results[b].unsqueeze(0)\n",
225
- " wav = do_spectrogram_diffusion(diffusion, dvae, diffuser, code, cond_wav, spectrogram_compression_factor=256, mean=True)\n",
226
- " torchaudio.save(f'{voice}_{b}.wav', wav.squeeze(0).cpu(), 22050)"
227
- ],
228
- "metadata": {
229
- "id": "HEDABTrdl_kM"
230
- },
231
- "execution_count": null,
232
- "outputs": []
233
- },
234
- {
235
- "cell_type": "code",
236
- "source": [
237
- "# Listen to your text! (told you that'd take a long time..)\n",
238
- "from IPython.display import Audio\n",
239
- "Audio(data=wav.squeeze(0).cpu().numpy(), rate=22050)"
240
  ],
241
  "metadata": {
242
- "id": "EyHmcdqBmSvf"
 
 
 
 
243
  },
244
  "execution_count": null,
245
- "outputs": []
 
246
  }
247
  ]
248
  }
 
17
  "accelerator": "GPU"
18
  },
19
  "cells": [
20
+ {
21
+ "cell_type": "markdown",
22
+ "source": [
23
+ "Welcome to Tortoise! 🐒🐒🐒🐒\n",
24
+ "\n",
25
+ "Before you begin, I **strongly** recommend you turn on a GPU runtime.\n",
26
+ "\n",
27
+ "There's a reason this is called \"Tortoise\" - this model takes up to a minute to perform inference for a single sentence on a GPU. Expect waits on the order of hours on a CPU."
28
+ ],
29
+ "metadata": {
30
+ "id": "_pIZ3ZXNp7cf"
31
+ }
32
+ },
33
  {
34
  "cell_type": "code",
35
  "execution_count": null,
36
  "metadata": {
37
+ "id": "JrK20I32grP6",
38
+ "colab": {
39
+ "base_uri": "https://localhost:8080/"
40
+ },
41
+ "outputId": "44f55dca-5d0a-405e-a4cc-54bc8e16b780"
42
  },
43
+ "outputs": [
44
+ {
45
+ "output_type": "stream",
46
+ "name": "stdout",
47
+ "text": [
48
+ "Cloning into 'tortoise-tts'...\n",
49
+ "remote: Enumerating objects: 736, done.\u001b[K\n",
50
+ "remote: Counting objects: 100% (23/23), done.\u001b[K\n",
51
+ "remote: Compressing objects: 100% (15/15), done.\u001b[K\n",
52
+ "remote: Total 736 (delta 10), reused 20 (delta 8), pack-reused 713\u001b[K\n",
53
+ "Receiving objects: 100% (736/736), 348.62 MiB | 24.08 MiB/s, done.\n",
54
+ "Resolving deltas: 100% (161/161), done.\n",
55
+ "/content/tortoise-tts\n",
56
+ "Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 1)) (1.10.0+cu111)\n",
57
+ "Requirement already satisfied: torchaudio in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 2)) (0.10.0+cu111)\n",
58
+ "Collecting rotary_embedding_torch\n",
59
+ " Downloading rotary_embedding_torch-0.1.5-py3-none-any.whl (4.1 kB)\n",
60
+ "Collecting transformers\n",
61
+ " Downloading transformers-4.18.0-py3-none-any.whl (4.0 MB)\n",
62
+ "\u001b[K |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4.0 MB 5.3 MB/s \n",
63
+ "\u001b[?25hCollecting tokenizers\n",
64
+ " Downloading tokenizers-0.12.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (6.6 MB)\n",
65
+ "\u001b[K |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6.6 MB 31.3 MB/s \n",
66
+ "\u001b[?25hRequirement already satisfied: inflect in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 6)) (2.1.0)\n",
67
+ "Collecting progressbar\n",
68
+ " Downloading progressbar-2.5.tar.gz (10 kB)\n",
69
+ "Collecting einops\n",
70
+ " Downloading einops-0.4.1-py3-none-any.whl (28 kB)\n",
71
+ "Collecting unidecode\n",
72
+ " Downloading Unidecode-1.3.4-py3-none-any.whl (235 kB)\n",
73
+ "\u001b[K |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 235 kB 44.3 MB/s \n",
74
+ "\u001b[?25hCollecting entmax\n",
75
+ " Downloading entmax-1.0.tar.gz (7.2 kB)\n",
76
+ "Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch->-r requirements.txt (line 1)) (4.1.1)\n",
77
+ "Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers->-r requirements.txt (line 4)) (4.64.0)\n",
78
+ "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers->-r requirements.txt (line 4)) (21.3)\n",
79
+ "Collecting sacremoses\n",
80
+ " Downloading sacremoses-0.0.49-py3-none-any.whl (895 kB)\n",
81
+ "\u001b[K |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 895 kB 36.6 MB/s \n",
82
+ "\u001b[?25hCollecting huggingface-hub<1.0,>=0.1.0\n",
83
+ " Downloading huggingface_hub-0.5.1-py3-none-any.whl (77 kB)\n",
84
+ "\u001b[K |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 77 kB 6.3 MB/s \n",
85
+ "\u001b[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers->-r requirements.txt (line 4)) (3.6.0)\n",
86
+ "Collecting pyyaml>=5.1\n",
87
+ " Downloading PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (596 kB)\n",
88
+ "\u001b[K |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 596 kB 38.9 MB/s \n",
89
+ "\u001b[?25hRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers->-r requirements.txt (line 4)) (1.21.6)\n",
90
+ "Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers->-r requirements.txt (line 4)) (2.23.0)\n",
91
+ "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers->-r requirements.txt (line 4)) (2019.12.20)\n",
92
+ "Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers->-r requirements.txt (line 4)) (4.11.3)\n",
93
+ "Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers->-r requirements.txt (line 4)) (3.0.8)\n",
94
+ "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers->-r requirements.txt (line 4)) (3.8.0)\n",
95
+ "Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers->-r requirements.txt (line 4)) (1.24.3)\n",
96
+ "Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers->-r requirements.txt (line 4)) (3.0.4)\n",
97
+ "Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers->-r requirements.txt (line 4)) (2.10)\n",
98
+ "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers->-r requirements.txt (line 4)) (2021.10.8)\n",
99
+ "Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers->-r requirements.txt (line 4)) (1.15.0)\n",
100
+ "Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers->-r requirements.txt (line 4)) (1.1.0)\n",
101
+ "Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers->-r requirements.txt (line 4)) (7.1.2)\n",
102
+ "Building wheels for collected packages: progressbar, entmax\n",
103
+ " Building wheel for progressbar (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
104
+ " Created wheel for progressbar: filename=progressbar-2.5-py3-none-any.whl size=12082 sha256=bb7d90605d0bf4d89aedc46bd8ed39538f55e00ee70fa382c1af81f142f08fa8\n",
105
+ " Stored in directory: /root/.cache/pip/wheels/f0/fd/1f/3e35ed57e94cd8ced38dd46771f1f0f94f65fec548659ed855\n",
106
+ " Building wheel for entmax (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
107
+ " Created wheel for entmax: filename=entmax-1.0-py3-none-any.whl size=11015 sha256=5e2cf723e790ec941984d2030eb3231e1ae3ce75231709391a13edcd2bfb4770\n",
108
+ " Stored in directory: /root/.cache/pip/wheels/f7/e8/0d/acc29c2f66e69a1f42483347fa8545c293dec12325ee161716\n",
109
+ "Successfully built progressbar entmax\n",
110
+ "Installing collected packages: pyyaml, tokenizers, sacremoses, huggingface-hub, einops, unidecode, transformers, rotary-embedding-torch, progressbar, entmax\n",
111
+ " Attempting uninstall: pyyaml\n",
112
+ " Found existing installation: PyYAML 3.13\n",
113
+ " Uninstalling PyYAML-3.13:\n",
114
+ " Successfully uninstalled PyYAML-3.13\n",
115
+ "Successfully installed einops-0.4.1 entmax-1.0 huggingface-hub-0.5.1 progressbar-2.5 pyyaml-6.0 rotary-embedding-torch-0.1.5 sacremoses-0.0.49 tokenizers-0.12.1 transformers-4.18.0 unidecode-1.3.4\n"
116
+ ]
117
+ }
118
+ ],
119
  "source": [
120
  "!git clone https://github.com/neonbjb/tortoise-tts.git\n",
121
  "%cd tortoise-tts\n",
 
130
  "import torchaudio\n",
131
  "import torch.nn as nn\n",
132
  "import torch.nn.functional as F\n",
 
133
  "\n",
134
+ "from api import TextToSpeech\n",
135
+ "from utils.audio import load_audio, get_voices\n",
 
 
 
136
  "\n",
137
+ "# This will download all the models used by Tortoise from the HF hub.\n",
138
+ "tts = TextToSpeech()"
139
  ],
140
  "metadata": {
141
+ "id": "Gen09NM4hONQ",
142
+ "colab": {
143
+ "base_uri": "https://localhost:8080/"
144
+ },
145
+ "outputId": "35c1fb4b-5998-4e75-9ec9-29521b301db6"
146
  },
147
  "execution_count": null,
148
+ "outputs": [
149
+ {
150
+ "output_type": "stream",
151
+ "name": "stdout",
152
+ "text": [
153
+ "Downloading autoregressive.pth from https://huggingface.co/jbetker/tortoise-tts-v2/resolve/hf/.models/autoregressive.pth...\n"
154
+ ]
155
+ },
156
+ {
157
+ "output_type": "stream",
158
+ "name": "stderr",
159
+ "text": [
160
+ "\n"
161
+ ]
162
+ },
163
+ {
164
+ "output_type": "stream",
165
+ "name": "stdout",
166
+ "text": [
167
+ "Done.\n",
168
+ "Downloading clvp.pth from https://huggingface.co/jbetker/tortoise-tts-v2/resolve/hf/.models/clvp.pth...\n"
169
+ ]
170
+ },
171
+ {
172
+ "output_type": "stream",
173
+ "name": "stderr",
174
+ "text": [
175
+ "\n"
176
+ ]
177
+ },
178
+ {
179
+ "output_type": "stream",
180
+ "name": "stdout",
181
+ "text": [
182
+ "Done.\n",
183
+ "Downloading cvvp.pth from https://huggingface.co/jbetker/tortoise-tts-v2/resolve/hf/.models/cvvp.pth...\n"
184
+ ]
185
+ },
186
+ {
187
+ "output_type": "stream",
188
+ "name": "stderr",
189
+ "text": [
190
+ "\n"
191
+ ]
192
+ },
193
+ {
194
+ "output_type": "stream",
195
+ "name": "stdout",
196
+ "text": [
197
+ "Done.\n",
198
+ "Downloading diffusion_decoder.pth from https://huggingface.co/jbetker/tortoise-tts-v2/resolve/hf/.models/diffusion_decoder.pth...\n"
199
+ ]
200
+ },
201
+ {
202
+ "output_type": "stream",
203
+ "name": "stderr",
204
+ "text": [
205
+ "\n"
206
+ ]
207
+ },
208
+ {
209
+ "output_type": "stream",
210
+ "name": "stdout",
211
+ "text": [
212
+ "Done.\n",
213
+ "Downloading vocoder.pth from https://huggingface.co/jbetker/tortoise-tts-v2/resolve/hf/.models/vocoder.pth...\n"
214
+ ]
215
+ },
216
+ {
217
+ "output_type": "stream",
218
+ "name": "stderr",
219
+ "text": [
220
+ "\n"
221
+ ]
222
+ },
223
+ {
224
+ "output_type": "stream",
225
+ "name": "stdout",
226
+ "text": [
227
+ "Done.\n",
228
+ "Removing weight norm...\n"
229
+ ]
230
+ }
231
+ ]
232
  },
233
  {
234
  "cell_type": "code",
235
  "source": [
236
+ "# List all the voices available. These are just some random clips I've gathered\n",
237
+ "# from the internet as well as a few voices from the training dataset.\n",
238
+ "# Feel free to add your own clips to the voices/ folder.\n",
239
+ "%ls voices"
 
  ],
241
  "metadata": {
242
+ "id": "SSleVnRAiEE2",
243
+ "colab": {
244
+ "base_uri": "https://localhost:8080/"
245
+ },
246
+ "outputId": "e1eb09e2-1b68-4f81-b679-edb97538da39"
247
  },
248
  "execution_count": null,
249
+ "outputs": [
250
+ {
251
+ "output_type": "stream",
252
+ "name": "stdout",
253
+ "text": [
254
+ "\u001b[0m\u001b[01;34mangelina_jolie\u001b[0m/ \u001b[01;34mhalle_barry\u001b[0m/ \u001b[01;34mlj\u001b[0m/ \u001b[01;34msamuel_jackson\u001b[0m/\n",
255
+ "\u001b[01;34matkins\u001b[0m/ \u001b[01;34mharris\u001b[0m/ \u001b[01;34mmol\u001b[0m/ \u001b[01;34msigourney_weaver\u001b[0m/\n",
256
+ "\u001b[01;34mcarlin\u001b[0m/ \u001b[01;34mhenry_cavill\u001b[0m/ \u001b[01;34mmorgan_freeman\u001b[0m/ \u001b[01;34mtom_hanks\u001b[0m/\n",
257
+ "\u001b[01;34mdaniel_craig\u001b[0m/ \u001b[01;34mjennifer_lawrence\u001b[0m/ \u001b[01;34mmyself\u001b[0m/ \u001b[01;34mwilliam_shatner\u001b[0m/\n",
258
+ "\u001b[01;34mdotrice\u001b[0m/ \u001b[01;34mjohn_krasinski\u001b[0m/ \u001b[01;34motto\u001b[0m/\n",
259
+ "\u001b[01;34memma_stone\u001b[0m/ \u001b[01;34mkennard\u001b[0m/ \u001b[01;34mpatrick_stewart\u001b[0m/\n",
260
+ "\u001b[01;34mgrace\u001b[0m/ \u001b[01;34mlescault\u001b[0m/ \u001b[01;34mrobert_deniro\u001b[0m/\n"
261
+ ]
262
+ }
263
+ ]
264
  },
265
  {
266
  "cell_type": "code",
267
  "source": [
268
  "# This is the text that will be spoken.\n",
269
+ "text = \"Joining two modalities results in a surprising increase in generalization! What would happen if we combined them all?\"\n",
270
+ "\n",
271
+ "# Here's something for the poetically inclined.. (set text=)\n",
272
+ "\"\"\"\n",
273
+ "Then took the other, as just as fair,\n",
274
+ "And having perhaps the better claim,\n",
275
+ "Because it was grassy and wanted wear;\n",
276
+ "Though as for that the passing there\n",
277
+ "Had worn them really about the same,\"\"\"\n",
278
+ "\n",
279
+ "# Pick one of the voices from above\n",
280
+ "voice = 'dotrice'\n",
281
+ "# Pick a \"preset mode\" to determine quality. Options: {\"ultra_fast\", \"fast\" (default), \"standard\", \"high_quality\"}. See docs in api.py\n",
282
+ "preset = \"fast\""
283
  ],
284
  "metadata": {
285
  "id": "bt_aoxONjfL2"
 
290
  {
291
  "cell_type": "code",
292
  "source": [
293
+ "# Fetch the voice references and forward execute!\n",
294
+ "voices = get_voices()\n",
295
+ "cond_paths = voices[voice]\n",
 
 
296
  "conds = []\n",
297
  "for cond_path in cond_paths:\n",
298
+ " c = load_audio(cond_path, 22050)\n",
299
  " conds.append(c)\n",
 
300
  "\n",
301
+ "gen = tts.tts_with_preset(text, conds, preset)\n",
302
+ "torchaudio.save('generated.wav', gen.squeeze(0).cpu(), 24000)"
303
  ],
304
  "metadata": {
305
+ "id": "KEXOKjIvn6NW",
306
+ "colab": {
307
+ "base_uri": "https://localhost:8080/"
308
+ },
309
+ "outputId": "7977bfd7-9fbc-41f7-d3ac-25fd4e350049"
310
  },
311
  "execution_count": null,
312
+ "outputs": [
313
+ {
314
+ "output_type": "stream",
315
+ "name": "stderr",
316
+ "text": [
317
+ "100%|██████████| 6/6 [01:18<00:00, 13.11s/it]\n",
318
+ "/usr/local/lib/python3.7/dist-packages/torch/utils/checkpoint.py:25: UserWarning: None of the inputs have requires_grad=True. Gradients will be None\n",
319
+ " warnings.warn(\"None of the inputs have requires_grad=True. Gradients will be None\")\n",
320
+ "/content/tortoise-tts/models/autoregressive.py:359: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
321
+ " mel_lengths = wav_lengths // self.mel_length_compression\n"
322
+ ]
323
+ },
324
+ {
325
+ "output_type": "stream",
326
+ "name": "stdout",
327
+ "text": [
328
+ "Performing vocoding..\n"
329
+ ]
330
+ },
331
+ {
332
+ "output_type": "stream",
333
+ "name": "stderr",
334
+ "text": [
335
+ "100%|██████████| 32/32 [00:16<00:00, 1.94it/s]\n"
336
+ ]
337
+ }
338
+ ]
339
  },
340
  {
341
  "cell_type": "code",
342
  "source": [
343
+ "# You can add as many conditioning voices as you want together. Combining\n",
344
+ "# clips from multiple voices takes the mean of the latent space for all\n",
345
+ "# voices. This creates a novel voice that is a combination of the two inputs.\n",
346
+ "#\n",
347
+ "# Let's see what it would sound like if Picard and Kirk had a kid with a penchant for philosophy:\n",
348
+ "conds = []\n",
349
+ "for v in ['patrick_stewart', 'william_shatner']:\n",
350
+ " cond_paths = voices[v]\n",
351
+ " for cond_path in cond_paths:\n",
352
+ " c = load_audio(cond_path, 22050)\n",
353
+ " conds.append(c)\n",
 
 
 
354
  "\n",
355
+ "gen = tts.tts_with_preset(\"They used to say that if man was meant to fly, he'd have wings. But he did fly. He discovered he had to.\", conds, preset)\n",
356
+ "torchaudio.save('captain_kirkard.wav', gen.squeeze(0).cpu(), 24000)"
357
  ],
358
  "metadata": {
359
+ "colab": {
360
+ "base_uri": "https://localhost:8080/"
361
+ },
362
+ "id": "fYTk8KUezUr5",
363
+ "outputId": "8a07f251-c90f-4e6a-c204-132b737dfff8"
364
  },
365
  "execution_count": null,
366
+ "outputs": [
367
+ {
368
+ "output_type": "stream",
369
+ "name": "stderr",
370
+ "text": [
371
+ "100%|██████████| 6/6 [01:45<00:00, 17.62s/it]\n",
372
+ "/usr/local/lib/python3.7/dist-packages/torch/utils/checkpoint.py:25: UserWarning: None of the inputs have requires_grad=True. Gradients will be None\n",
373
+ " warnings.warn(\"None of the inputs have requires_grad=True. Gradients will be None\")\n",
374
+ "/content/tortoise-tts/models/autoregressive.py:359: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').\n",
375
+ " mel_lengths = wav_lengths // self.mel_length_compression\n"
376
+ ]
377
+ },
378
+ {
379
+ "output_type": "stream",
380
+ "name": "stdout",
381
+ "text": [
382
+ "Performing vocoding..\n"
383
+ ]
384
+ },
385
+ {
386
+ "output_type": "stream",
387
+ "name": "stderr",
388
+ "text": [
389
+ "100%|██████████| 32/32 [00:16<00:00, 2.00it/s]\n"
390
+ ]
391
+ }
392
+ ]
393
  }
394
  ]
395
  }
tortoise_v2_examples.html ADDED
File without changes
utils/diffusion.py CHANGED
@@ -605,7 +605,7 @@ class GaussianDiffusion:
605
  img = th.randn(*shape, device=device)
606
  indices = list(range(self.num_timesteps))[::-1]
607
 
608
- for i in tqdm(indices):
609
  t = th.tensor([i] * shape[0], device=device)
610
  with th.no_grad():
611
  out = self.p_sample(
@@ -774,7 +774,7 @@ class GaussianDiffusion:
774
  # Lazy import so that we don't depend on tqdm.
775
  from tqdm.auto import tqdm
776
 
777
- indices = tqdm(indices)
778
 
779
  for i in indices:
780
  t = th.tensor([i] * shape[0], device=device)
 
605
  img = th.randn(*shape, device=device)
606
  indices = list(range(self.num_timesteps))[::-1]
607
 
608
+ for i in tqdm(indices, disable=not progress):
609
  t = th.tensor([i] * shape[0], device=device)
610
  with th.no_grad():
611
  out = self.p_sample(
 
774
  # Lazy import so that we don't depend on tqdm.
775
  from tqdm.auto import tqdm
776
 
777
+ indices = tqdm(indices, disable=not progress)
778
 
779
  for i in indices:
780
  t = th.tensor([i] * shape[0], device=device)