TheStinger committed on
Commit
2625edb
•
1 Parent(s): 68173e6

Upload 11 files

Files changed (4)
  1. README.md +9 -73
  2. app.py +504 -363
  3. model_handler.py +155 -0
  4. requirements.txt +3 -0
README.md CHANGED
@@ -1,73 +1,9 @@
1
- ---
2
- title: Ilaria RVC
3
- emoji: 😻
4
- colorFrom: pink
5
- colorTo: pink
6
- sdk: gradio
7
- app_file: app.py
8
- pinned: true
9
- ---
10
-
11
- ![Ilaria AI Suite](./ilariaaisuite.png)
12
- ***
13
- [![Static Badge](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Space-s?labelColor=YELLOW&color=FFEA00)](https://huggingface.co/spaces/TheStinger/Ilaria_RVC) [![Static Badge](https://img.shields.io/badge/%F0%9F%A4%97%20HF%20Space-Duplication-s?labelColor=YELLOW&color=FFEA00)](https://huggingface.co/spaces/TheStinger/Ilaria_RVC?duplicate=true) [![Static Badge](https://img.shields.io/badge/GitHub-Source%20Code-s?logo=GitHub)](https://github.com/TheStingerX/Ilaria-RVC) [![Static Badge](https://img.shields.io/badge/AI%20Hub-Discord%20Server-s?logo=Discord&color=%237289da)](https://discord.gg/aihub) [![Static Badge](https://img.shields.io/badge/Ko--Fi-s?logo=Ko-Fi&label=Support%20me%20on&labelColor=434b57&color=FF5E5B)](https://ko-fi.com/ilariaowo)
14
- ***
15
- <p align="center">
16
- <h1>Ilaria RVC 💖</h1>
17
- </p>
18
-
19
- 🎉 Welcome to Ilaria RVC! 🎉
20
-
21
- This project leverages various libraries and modules to create a Graphical User Interface (GUI) for voice conversion.
22
- It's primarily designed for use with Hugging Face Spaces. 🤗
23
-
24
- Ilaria RVC is part of the Ilaria AI Suite, which includes various easy and powerful tools. 💖
25
-
26
- ## 📦 Installation 📦
27
-
28
- To use this project, clone the original Space on Hugging Face.
29
- Make sure you restart it from time to time to keep up with new updates.
30
-
31
- ## 🖥️ Usage 🖥️
32
-
33
- Once the dependencies have been installed automatically, Hugging Face uses app.py to start the user interface.
34
- From there, you can utilize the various features of the project.
35
-
36
- ## 🌟 Features 🌟
37
-
38
- Ilaria RVC offers a range of features, including:
39
-
40
- 🎙️ **Convert audio with a desired voice model**:
41
- With Ilaria RVC, you can transform any audio using the voice model you prefer. It's like having a personal voice-over artist at your fingertips.
42
-
43
- 💾 **Download a voice model directly from the interface**:
44
- You can download models directly from the interface, without needing any other tool. How convenient is that?
45
-
46
- 🚀 **Advanced and cutting-edge options for conversion**:
47
- Ilaria RVC offers conversion options that are at the forefront of AI. You can tailor your experience to your specific needs.
48
-
49
- 🛠️ **Constantly updated by Ilaria and AI Hub engineers**:
50
- Ilaria RVC is a product in constant evolution. Ilaria and the team of AI Hub engineers are constantly working to improve and update the system.
51
-
52
- 🗣️ **A choice of 3 different TTS models including Ilaria TTS**:
53
- You're spoilt for choice with Ilaria RVC. You can choose from three different voice synthesis models, including Ilaria TTS.
54
-
55
- ✔️ **Ease of use for inexperienced users**:
56
- Don't worry if you're not a tech whiz. Ilaria RVC is designed to be easy to use for everyone, regardless of their level of experience.
57
-
58
- ## 🙏 Credits 🙏
59
-
60
- - **Rejekt** - Original UI coder
61
- - [**Kit Lemonfoot**](https://huggingface.co/Kit-Lemonfoot) - Implemented Ilaria TTS
62
- - [**GatienDoesStuff**](https://github.com/GatienDoesStuff) - For helping with the Gradio UI
63
-
64
- ## 🤝 Contributing 🤝
65
-
66
- Interested in contributing to this project? Ilaria is always looking for collaborators.
67
- Feel free to open a pull request on Hugging Face.
68
-
69
- ## 📄 License 📄
70
-
71
- This project is released under the `INCU` license.
72
- For more details, please check the license file.
73
- For further questions feel free to contact Ilaria.
 
1
+ ---
2
+ title: Ilaria RVC Beta
3
+ emoji: 😻
4
+ colorFrom: pink
5
+ colorTo: pink
6
+ sdk: gradio
7
+ app_file: app.py
8
+ pinned: true
9
+ ---
 
app.py CHANGED
@@ -1,363 +1,504 @@
1
- import gradio as gr
2
- import requests
3
- import random
4
- import os
5
- import zipfile
6
- import librosa
7
- import time
8
- from infer_rvc_python import BaseLoader
9
- from pydub import AudioSegment
10
- from tts_voice import tts_order_voice
11
- import edge_tts
12
- import tempfile
13
- import anyio
14
- import asyncio
15
- from audio_separator.separator import Separator
- import wave  # needed by get_training_info below
- import soundfile as sf  # needed by get_audio_duration below
16
-
17
- language_dict = tts_order_voice
18
-
19
- async def text_to_speech_edge(text, language_code):
20
- voice = language_dict[language_code]
21
- communicate = edge_tts.Communicate(text, voice)
22
- with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp_file:
23
- tmp_path = tmp_file.name
24
-
25
- await communicate.save(tmp_path)
26
-
27
- return tmp_path
28
-
29
- # optional dependency: only available when running on HF Spaces
30
- try:
31
- import spaces
32
- spaces_status = True
33
- except ImportError:
34
- spaces_status = False
35
-
36
- separator = Separator()
37
- converter = BaseLoader(only_cpu=False, hubert_path=None, rmvpe_path=None)
38
-
39
- global pth_file
40
- global index_file
41
-
42
- pth_file = "model.pth"
43
- index_file = "model.index"
44
-
45
- #CONFIGS
46
- TEMP_DIR = "temp"
47
- MODEL_PREFIX = "model"
48
- PITCH_ALGO_OPT = [
49
- "pm",
50
- "harvest",
51
- "crepe",
52
- "rmvpe",
53
- "rmvpe+",
54
- ]
55
-
56
- os.makedirs(TEMP_DIR, exist_ok=True)
57
-
58
- def unzip_file(file):
59
- filename = os.path.basename(file).split(".")[0]
60
- with zipfile.ZipFile(file, 'r') as zip_ref:
61
- zip_ref.extractall(os.path.join(TEMP_DIR, filename))
62
- return True
63
-
64
- def get_training_info(audio_file):
65
- if audio_file is None:
66
- return 'Please provide an audio file!'
67
- duration = get_audio_duration(audio_file)
68
- sample_rate = wave.open(audio_file, 'rb').getframerate()
69
-
70
- training_info = {
71
- (0, 2): (150, 'OV2'),
72
- (2, 3): (200, 'OV2'),
73
- (3, 5): (250, 'OV2'),
74
- (5, 10): (300, 'Normal'),
75
- (10, 25): (500, 'Normal'),
76
- (25, 45): (700, 'Normal'),
77
- (45, 60): (1000, 'Normal')
78
- }
79
-
80
- for (min_duration, max_duration), (epochs, pretrain) in training_info.items():
81
- if min_duration <= duration < max_duration:
82
- break
83
- else:
84
- return 'Duration is not within the specified range!'
85
-
86
- return f'You should use the **{pretrain}** pretrain with **{epochs}** epochs at **{sample_rate/1000}kHz** sample rate.'
87
-
88
- def on_button_click(audio_file_path):
89
- return get_training_info(audio_file_path)
90
-
91
- def get_audio_duration(audio_file_path):
92
- audio_info = sf.info(audio_file_path)
93
- duration_minutes = audio_info.duration / 60
94
- return duration_minutes
95
-
96
- def progress_bar(total, current): # simple 20-slot text progress bar
97
- return "[" + "=" * int(current / total * 20) + ">" + " " * (20 - int(current / total * 20)) + "] " + str(int(current / total * 100)) + "%"
98
-
99
- def download_from_url(url, filename=None):
100
- if "/blob/" in url:
101
- url = url.replace("/blob/", "/resolve/") # turn HF blob URLs into direct download links
102
- if "huggingface" not in url:
103
- return ["The URL must be from huggingface", "Failed", "Failed"]
104
- if filename is None:
105
- filename = os.path.join(TEMP_DIR, MODEL_PREFIX + str(random.randint(1, 1000)) + ".zip")
106
- response = requests.get(url)
107
- total = int(response.headers.get('content-length', 0)) # bytes to download (length of the file)
108
- if total > 500000000:
109
-
110
- return ["The file is too large. You can only download files up to 500 MB in size.", "Failed", "Failed"]
111
- current = 0
112
- with open(filename, "wb") as f:
113
- for data in response.iter_content(chunk_size=4096):
114
- f.write(data)
115
- current += len(data)
116
- print(progress_bar(total, current), end="\r")
117
-
118
-
119
-
120
- try:
121
- unzip_file(filename)
122
- except Exception as e:
123
- return ["Failed to unzip the file", "Failed", "Failed"]
124
- unzipped_dir = os.path.join(TEMP_DIR, os.path.basename(filename).split(".")[0])
125
- pth_files = []
126
- index_files = []
127
- for root, dirs, files in os.walk(unzipped_dir):
128
- for file in files:
129
- if file.endswith(".pth"):
130
- pth_files.append(os.path.join(root, file))
131
- elif file.endswith(".index"):
132
- index_files.append(os.path.join(root, file))
133
-
134
- print(pth_files, index_files)
135
- global pth_file
136
- global index_file
137
- pth_file = pth_files[0]
138
- index_file = index_files[0]
139
-
140
- pth_file_ui.value = pth_file
141
- index_file_ui.value = index_file
142
- print(pth_file_ui.value)
143
- print(index_file_ui.value)
144
- return ["Downloaded as " + filename, pth_files[0], index_files[0]]
145
-
146
- def inference(audio, model_name):
147
- output_data = inf_handler(audio, model_name)
148
- vocals = output_data[0]
149
- inst = output_data[1]
150
-
151
- return vocals, inst
152
-
153
- if spaces_status:
154
- @spaces.GPU()
155
- def convert_now(audio_files, random_tag, converter):
156
- return converter(
157
- audio_files,
158
- random_tag,
159
- overwrite=False,
160
- parallel_workers=8
161
- )
162
-
163
-
164
- else:
165
- def convert_now(audio_files, random_tag, converter):
166
- return converter(
167
- audio_files,
168
- random_tag,
169
- overwrite=False,
170
- parallel_workers=8
171
- )
172
-
173
- def calculate_remaining_time(epochs, seconds_per_epoch):
174
- total_seconds = epochs * seconds_per_epoch
175
-
176
- hours = total_seconds // 3600
177
- minutes = (total_seconds % 3600) // 60
178
- seconds = total_seconds % 60
179
-
180
- if hours == 0:
181
- return f"{int(minutes)} minutes"
182
- elif hours == 1:
183
- return f"{int(hours)} hour and {int(minutes)} minutes"
184
- else:
185
- return f"{int(hours)} hours and {int(minutes)} minutes"
186
-
187
- def inf_handler(audio, model_name):
188
- model_found = False
189
- for model_info in UVR_5_MODELS:
190
- if model_info["model_name"] == model_name:
191
- separator.load_model(model_info["checkpoint"])
192
- model_found = True
193
- break
194
- if not model_found:
195
- separator.load_model()
196
- output_files = separator.separate(audio)
197
- vocals = output_files[0]
198
- inst = output_files[1]
199
- return vocals, inst
200
-
201
-
202
- def run(
203
- audio_files,
204
- pitch_alg,
205
- pitch_lvl,
206
- index_inf,
207
- r_m_f,
208
- e_r,
209
- c_b_p,
210
- ):
211
- if not audio_files:
212
- raise ValueError("Please provide an audio file")
213
-
214
- if isinstance(audio_files, str):
215
- audio_files = [audio_files]
216
-
217
- try:
218
- duration_base = librosa.get_duration(filename=audio_files[0])
219
- print("Duration:", duration_base)
220
- except Exception as e:
221
- print(e)
222
-
223
- random_tag = "USER_"+str(random.randint(10000000, 99999999))
224
-
225
- file_m = pth_file_ui.value
226
- file_index = index_file_ui.value
227
-
228
- print("Random tag:", random_tag)
229
- print("File model:", file_m)
230
- print("Pitch algorithm:", pitch_alg)
231
- print("Pitch level:", pitch_lvl)
232
- print("File index:", file_index)
233
- print("Index influence:", index_inf)
234
- print("Respiration median filtering:", r_m_f)
235
- print("Envelope ratio:", e_r)
236
-
237
- converter.apply_conf(
238
- tag=random_tag,
239
- file_model=file_m,
240
- pitch_algo=pitch_alg,
241
- pitch_lvl=pitch_lvl,
242
- file_index=file_index,
243
- index_influence=index_inf,
244
- respiration_median_filtering=r_m_f,
245
- envelope_ratio=e_r,
246
- consonant_breath_protection=c_b_p,
247
- resample_sr=44100 if audio_files[0].endswith('.mp3') else 0,
248
- )
249
- time.sleep(0.1)
250
-
251
- result = convert_now(audio_files, random_tag, converter)
252
- print("Result:", result)
253
-
254
- return result[0]
255
-
256
- def upload_model(index_file, pth_file):
257
- pth_file = pth_file.name
258
- index_file = index_file.name
259
- pth_file_ui.value = pth_file
260
- index_file_ui.value = index_file
261
- return "Uploaded!"
262
-
263
- with gr.Blocks(theme=gr.themes.Default(primary_hue="pink", secondary_hue="rose"), title="Ilaria RVC 💖") as demo:
264
- gr.Markdown("## Ilaria RVC 💖")
265
- with gr.Tab("Inference"):
266
- sound_gui = gr.Audio(value=None, type="filepath", autoplay=False, visible=True)
267
- pth_file_ui = gr.Textbox(label="Model pth file", value=pth_file, visible=False, interactive=False)
268
- index_file_ui = gr.Textbox(label="Index pth file", value=index_file, visible=False, interactive=False)
269
-
270
- with gr.Accordion("Ilaria TTS", open=False):
271
- text_tts = gr.Textbox(label="Text", placeholder="Hello!", lines=3, interactive=True)
272
- dropdown_tts = gr.Dropdown(label="Language and Model", choices=list(language_dict.keys()), interactive=True, value=list(language_dict.keys())[0])
273
-
274
- button_tts = gr.Button("Speak", variant="primary")
275
-
276
- # Remove output_tts and use only sound_gui as the output
277
- button_tts.click(text_to_speech_edge, inputs=[text_tts, dropdown_tts], outputs=sound_gui)
278
-
279
- with gr.Accordion("Settings", open=False):
280
- pitch_algo_conf = gr.Dropdown(PITCH_ALGO_OPT, value=PITCH_ALGO_OPT[4], label="Pitch algorithm", visible=True, interactive=True)
281
- pitch_lvl_conf = gr.Slider(label="Pitch level (lower -> 'male' while higher -> 'female')", minimum=-24, maximum=24, step=1, value=0, visible=True, interactive=True)
282
- index_inf_conf = gr.Slider(minimum=0, maximum=1, label="Index influence -> How much accent is applied", value=0.75)
283
- respiration_filter_conf = gr.Slider(minimum=0, maximum=7, label="Respiration median filtering", value=3, step=1, interactive=True)
284
- envelope_ratio_conf = gr.Slider(minimum=0, maximum=1, label="Envelope ratio", value=0.25, interactive=True)
285
- consonant_protec_conf = gr.Slider(minimum=0, maximum=0.5, label="Consonant breath protection", value=0.5, interactive=True)
286
-
287
- button_conf = gr.Button("Convert", variant="primary")
288
- output_conf = gr.Audio(type="filepath", label="Output")
289
-
290
- button_conf.click(lambda: None, None, output_conf)
291
- button_conf.click(
292
- run,
293
- inputs=[
294
- sound_gui,
295
- pitch_algo_conf,
296
- pitch_lvl_conf,
297
- index_inf_conf,
298
- respiration_filter_conf,
299
- envelope_ratio_conf,
300
- consonant_protec_conf,
301
- ],
302
- outputs=[output_conf],
303
- )
304
-
305
- with gr.Tab("Model Loader (Download and Upload)"):
306
- with gr.Accordion("Model Downloader", open=False):
307
- gr.Markdown(
308
- "Download the model from the following URL and upload it here. (Hugginface RVC model)"
309
- )
310
- model = gr.Textbox(lines=1, label="Model URL")
311
- download_button = gr.Button("Download Model")
312
- status = gr.Textbox(lines=1, label="Status", placeholder="Waiting....", interactive=False)
313
- model_pth = gr.Textbox(lines=1, label="Model pth file", placeholder="Waiting....", interactive=False)
314
- index_pth = gr.Textbox(lines=1, label="Index pth file", placeholder="Waiting....", interactive=False)
315
- download_button.click(download_from_url, model, outputs=[status, model_pth, index_pth])
316
- with gr.Accordion("Upload A Model", open=False):
317
- index_file_upload = gr.File(label="Index File (.index)")
318
- pth_file_upload = gr.File(label="Model File (.pth)")
319
- upload_button = gr.Button("Upload Model")
320
- upload_status = gr.Textbox(lines=1, label="Status", placeholder="Waiting....", interactive=False)
321
-
322
- upload_button.click(upload_model, [index_file_upload, pth_file_upload], upload_status)
323
-
324
- with gr.Tab("Extra"):
325
- with gr.Accordion("Training Time Calculator", open=False):
326
- with gr.Column():
327
- epochs_input = gr.Number(label="Number of Epochs")
328
- seconds_input = gr.Number(label="Seconds per Epoch")
329
- calculate_button = gr.Button("Calculate Time Remaining")
330
- remaining_time_output = gr.Textbox(label="Remaining Time", interactive=False)
331
-
332
- calculate_button.click(
333
- fn=calculate_remaining_time,
334
- inputs=[epochs_input, seconds_input],
335
- outputs=[remaining_time_output]
336
- )
337
-
338
- with gr.Accordion('Training Helper', open=False):
339
- with gr.Column():
340
- audio_input = gr.Audio(type="filepath", label="Upload your audio file")
341
- gr.Text("Please note that these results are approximate and intended to provide a general idea for beginners.", label='Notice:')
342
- training_info_output = gr.Markdown(label="Training Information:")
343
- get_info_button = gr.Button("Get Training Info")
344
- get_info_button.click(
345
- fn=on_button_click,
346
- inputs=[audio_input],
347
- outputs=[training_info_output]
348
- )
349
-
350
- with gr.Tab("Credits"):
351
- gr.Markdown(
352
- """
353
- Ilaria RVC made by [Ilaria](https://huggingface.co/TheStinger), support her on [ko-fi](https://ko-fi.com/ilariaowo)
354
-
355
- The Inference code is made by [r3gm](https://huggingface.co/r3gm) (his module helped form this space 💖)
356
-
357
- made with ❤️ by [mikus](https://github.com/cappuch) - i made this ui
358
-
359
- ## In loving memory of JLabDX 🕊️
360
- """
361
- )
362
-
363
- demo.queue(api_open=False).launch(show_api=False)
 
1
+ import gradio as gr
2
+ import requests
3
+ import random
4
+ import os
5
+ import zipfile
6
+ import librosa
7
+ import time
8
+ from infer_rvc_python import BaseLoader
9
+ from pydub import AudioSegment
10
+ from tts_voice import tts_order_voice
11
+ import edge_tts
12
+ import tempfile
13
+ from audio_separator.separator import Separator
14
+ import model_handler
15
+ import psutil
16
+ import cpuinfo
17
+
18
+ language_dict = tts_order_voice
19
+
20
+ async def text_to_speech_edge(text, language_code):
21
+ voice = language_dict[language_code]
22
+ communicate = edge_tts.Communicate(text, voice)
23
+ with tempfile.NamedTemporaryFile(delete=False, suffix=".mp3") as tmp_file:
24
+ tmp_path = tmp_file.name
25
+
26
+ await communicate.save(tmp_path)
27
+
28
+ return tmp_path
29
+
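+ # Usage sketch (illustrative, not called by the app): edge_tts is async, so
+ # outside Gradio (which can await coroutines itself) you would drive it with
+ # asyncio, e.g.:
+ #   import asyncio
+ #   mp3_path = asyncio.run(text_to_speech_edge("Hello!", list(language_dict.keys())[0]))
+ #   print(mp3_path)  # path to a temporary .mp3 file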
30
+ try:
31
+ import spaces
32
+ spaces_status = True
33
+ except ImportError:
34
+ spaces_status = False
35
+
36
+ separator = Separator()
37
+ converter = BaseLoader(only_cpu=False, hubert_path=None, rmvpe_path=None) # loads the RVC conversion backend
38
+
39
+ global pth_file
40
+ global index_file
41
+
42
+ pth_file = "model.pth"
43
+ index_file = "model.index"
44
+
45
+ #CONFIGS
46
+ TEMP_DIR = "temp"
47
+ MODEL_PREFIX = "model"
48
+ PITCH_ALGO_OPT = [
49
+ "pm",
50
+ "harvest",
51
+ "crepe",
52
+ "rmvpe",
53
+ "rmvpe+",
54
+ ]
55
+ UVR_5_MODELS = [
56
+ {"model_name": "BS-Roformer-Viperx-1297", "checkpoint": "model_bs_roformer_ep_317_sdr_12.9755.ckpt"},
57
+ {"model_name": "MDX23C-InstVoc HQ 2", "checkpoint": "MDX23C-8KFFT-InstVoc_HQ_2.ckpt"},
58
+ {"model_name": "Kim Vocal 2", "checkpoint": "Kim_Vocal_2.onnx"},
59
+ {"model_name": "5_HP-Karaoke", "checkpoint": "5_HP-Karaoke-UVR.pth"},
60
+ {"model_name": "UVR-DeNoise by FoxJoy", "checkpoint": "UVR-DeNoise.pth"},
61
+ {"model_name": "UVR-DeEcho-DeReverb by FoxJoy", "checkpoint": "UVR-DeEcho-DeReverb.pth"},
62
+ ]
63
+ MODELS = [
64
+ {"model": "model.pth", "index": "model.index", "model_name": "Test Model"},
65
+ ]
66
+
67
+ os.makedirs(TEMP_DIR, exist_ok=True)
68
+
69
+ def unzip_file(file):
70
+ filename = os.path.basename(file).split(".")[0]
71
+ with zipfile.ZipFile(file, 'r') as zip_ref:
72
+ zip_ref.extractall(os.path.join(TEMP_DIR, filename))
73
+ return True
74
+
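+ # e.g. unzip_file("temp/model42.zip") extracts into temp/model42/ and returns True.
+ # Note that basename(...).split(".")[0] truncates archive names containing extra dots.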
75
+
76
+ def progress_bar(total, current):
77
+ return "[" + "=" * int(current / total * 20) + ">" + " " * (20 - int(current / total * 20)) + "] " + str(int(current / total * 100)) + "%"
78
+
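+ # Worked example: progress_bar(200, 100) fills half of the 20-slot bar:
+ #   "[==========>          ] 50%"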
79
+ def download_from_url(url, name=None):
80
+ if name is None:
81
+ raise ValueError("The model name must be provided")
82
+ if "/blob/" in url:
83
+ url = url.replace("/blob/", "/resolve/")
84
+ if "huggingface" not in url:
85
+ return ["The URL must be from huggingface", "Failed", "Failed"]
86
+ filename = os.path.join(TEMP_DIR, MODEL_PREFIX + str(random.randint(1, 1000)) + ".zip")
87
+ response = requests.get(url, stream=True) # stream so the size check below runs before the body is downloaded
88
+ total = int(response.headers.get('content-length', 0))
89
+ if total > 500000000:
90
+
91
+ return ["The file is too large. You can only download files up to 500 MB in size.", "Failed", "Failed"]
92
+ current = 0
93
+ with open(filename, "wb") as f:
94
+ for data in response.iter_content(chunk_size=4096):
95
+ f.write(data)
96
+ current += len(data)
97
+ print(progress_bar(total, current), end="\r")
98
+
99
+
100
+
101
+ try:
102
+ unzip_file(filename)
103
+ except Exception as e:
104
+ return ["Failed to unzip the file", "Failed", "Failed"]
105
+ unzipped_dir = os.path.join(TEMP_DIR, os.path.basename(filename).split(".")[0])
106
+ pth_files = []
107
+ index_files = []
108
+ for root, dirs, files in os.walk(unzipped_dir):
109
+ for file in files:
110
+ if file.endswith(".pth"):
111
+ pth_files.append(os.path.join(root, file))
112
+ elif file.endswith(".index"):
113
+ index_files.append(os.path.join(root, file))
114
+
115
+ print(pth_files, index_files)
116
+ global pth_file
117
+ global index_file
118
+ pth_file = pth_files[0]
119
+ index_file = index_files[0]
120
+
121
+ print(pth_file)
122
+ print(index_file)
123
+
124
+ MODELS.append({"model": pth_file, "index": index_file, "model_name": name})
125
+ return ["Downloaded as " + name, pth_files[0], index_files[0]]
126
+
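+ # Usage sketch (hypothetical URL, for illustration only):
+ #   status, pth, index = download_from_url(
+ #       "https://huggingface.co/user/repo/resolve/main/model.zip", name="My Model")
+ #   print(status)  # "Downloaded as My Model" on success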
127
+ def inference(audio, model_name):
128
+ output_data = inf_handler(audio, model_name)
129
+ vocals = output_data[0]
130
+ inst = output_data[1]
131
+
132
+ return vocals, inst
133
+
134
+ if spaces_status:
135
+ @spaces.GPU()
136
+ def convert_now(audio_files, random_tag, converter):
137
+ return converter(
138
+ audio_files,
139
+ random_tag,
140
+ overwrite=False,
141
+ parallel_workers=8
142
+ )
143
+
144
+
145
+ else:
146
+ def convert_now(audio_files, random_tag, converter):
147
+ return converter(
148
+ audio_files,
149
+ random_tag,
150
+ overwrite=False,
151
+ parallel_workers=8
152
+ )
153
+
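+ # Both branches are identical apart from the decorator: on Spaces, @spaces.GPU()
+ # asks ZeroGPU to attach a GPU for the duration of the call.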
154
+ def calculate_remaining_time(epochs, seconds_per_epoch):
155
+ total_seconds = epochs * seconds_per_epoch
156
+
157
+ hours = total_seconds // 3600
158
+ minutes = (total_seconds % 3600) // 60
159
+ seconds = total_seconds % 60
160
+
161
+ if hours == 0:
162
+ return f"{int(minutes)} minutes"
163
+ elif hours == 1:
164
+ return f"{int(hours)} hour and {int(minutes)} minutes"
165
+ else:
166
+ return f"{int(hours)} hours and {int(minutes)} minutes"
167
+
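+ # Worked example: 300 epochs at 45 s each = 13500 s, i.e.
+ # calculate_remaining_time(300, 45) == "3 hours and 45 minutes"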
168
+ def inf_handler(audio, model_name):
169
+ model_found = False
170
+ for model_info in UVR_5_MODELS:
171
+ if model_info["model_name"] == model_name:
172
+ separator.load_model(model_info["checkpoint"])
173
+ model_found = True
174
+ break
175
+ if not model_found:
176
+ separator.load_model()
177
+ output_files = separator.separate(audio)
178
+ vocals = output_files[0]
179
+ inst = output_files[1]
180
+ return vocals, inst
181
+
182
+
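+ # Example (illustrative): inf_handler(path, "Kim Vocal 2") loads Kim_Vocal_2.onnx
+ # from UVR_5_MODELS; an unknown name falls back to audio-separator's default model.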
183
+ def run(
184
+ model,
185
+ audio_files,
186
+ pitch_alg,
187
+ pitch_lvl,
188
+ index_inf,
189
+ r_m_f,
190
+ e_r,
191
+ c_b_p,
192
+ ):
193
+ if not audio_files:
194
+ raise ValueError("Please provide an audio file")
195
+
196
+ if isinstance(audio_files, str):
197
+ audio_files = [audio_files]
198
+
199
+ try:
200
+ duration_base = librosa.get_duration(filename=audio_files[0])
201
+ print("Duration:", duration_base)
202
+ except Exception as e:
203
+ print(e)
204
+
205
+ random_tag = "USER_"+str(random.randint(10000000, 99999999))
206
+
207
+ file_m = model
208
+ print("File model:", file_m)
209
+
210
+ # get from MODELS
211
+ for model in MODELS:
212
+ if model["model_name"] == file_m:
213
+ print(model)
214
+ file_m = model["model"]
215
+ file_index = model["index"]
216
+ break
217
+
218
+ if not file_m.endswith(".pth"):
219
+ raise ValueError("The model file must be a .pth file")
220
+
221
+
222
+ print("Random tag:", random_tag)
223
+ print("File model:", file_m)
224
+ print("Pitch algorithm:", pitch_alg)
225
+ print("Pitch level:", pitch_lvl)
226
+ print("File index:", file_index)
227
+ print("Index influence:", index_inf)
228
+ print("Respiration median filtering:", r_m_f)
229
+ print("Envelope ratio:", e_r)
230
+
231
+ converter.apply_conf(
232
+ tag=random_tag,
233
+ file_model=file_m,
234
+ pitch_algo=pitch_alg,
235
+ pitch_lvl=pitch_lvl,
236
+ file_index=file_index,
237
+ index_influence=index_inf,
238
+ respiration_median_filtering=r_m_f,
239
+ envelope_ratio=e_r,
240
+ consonant_breath_protection=c_b_p,
241
+ resample_sr=44100 if audio_files[0].endswith('.mp3') else 0,
242
+ )
243
+ time.sleep(0.1)
244
+
245
+ result = convert_now(audio_files, random_tag, converter)
246
+ print("Result:", result)
247
+
248
+ return result[0]
249
+
250
+ def upload_model(index_file, pth_file, model_name):
251
+ pth_file = pth_file.name
252
+ index_file = index_file.name
253
+ MODELS.append({"model": pth_file, "index": index_file, "model_name": model_name})
254
+ return "Uploaded!"
255
+
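+ # Note: MODELS is an in-memory list, so models registered here via upload or
+ # download are lost whenever the Space restarts.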
256
+ with gr.Blocks(theme=gr.themes.Default(primary_hue="pink", secondary_hue="rose"), title="Ilaria RVC 💖") as demo:
257
+ gr.Markdown("## Ilaria RVC 💖")
258
+ with gr.Tab("Inference"):
259
+ sound_gui = gr.Audio(value=None,type="filepath",autoplay=False,visible=True,)
260
+ def update():
261
+ print(MODELS)
262
+ return gr.Dropdown(label="Model",choices=[model["model_name"] for model in MODELS],visible=True,interactive=True, value=MODELS[0]["model_name"],)
263
+ with gr.Row():
264
+ models_dropdown = gr.Dropdown(label="Model",choices=[model["model_name"] for model in MODELS],visible=True,interactive=True, value=MODELS[0]["model_name"],)
265
+ refresh_button = gr.Button("Refresh Models")
266
+ refresh_button.click(update, outputs=[models_dropdown])
267
+
268
+ with gr.Accordion("Ilaria TTS", open=False):
269
+ text_tts = gr.Textbox(label="Text", placeholder="Hello!", lines=3, interactive=True,)
270
+ dropdown_tts = gr.Dropdown(label="Language and Model",choices=list(language_dict.keys()),interactive=True, value=list(language_dict.keys())[0])
271
+
272
+ button_tts = gr.Button("Speak", variant="primary",)
273
+ button_tts.click(text_to_speech_edge, inputs=[text_tts, dropdown_tts], outputs=[sound_gui])
274
+
275
+ with gr.Accordion("Settings", open=False):
276
+ pitch_algo_conf = gr.Dropdown(PITCH_ALGO_OPT,value=PITCH_ALGO_OPT[4],label="Pitch algorithm",visible=True,interactive=True,)
277
+ pitch_lvl_conf = gr.Slider(label="Pitch level (lower -> 'male' while higher -> 'female')",minimum=-24,maximum=24,step=1,value=0,visible=True,interactive=True,)
278
+ index_inf_conf = gr.Slider(minimum=0,maximum=1,label="Index influence -> How much accent is applied",value=0.75,)
279
+ respiration_filter_conf = gr.Slider(minimum=0,maximum=7,label="Respiration median filtering",value=3,step=1,interactive=True,)
280
+ envelope_ratio_conf = gr.Slider(minimum=0,maximum=1,label="Envelope ratio",value=0.25,interactive=True,)
281
+ consonant_protec_conf = gr.Slider(minimum=0,maximum=0.5,label="Consonant breath protection",value=0.5,interactive=True,)
282
+
283
+ button_conf = gr.Button("Convert",variant="primary",)
284
+ output_conf = gr.Audio(type="filepath",label="Output",)
285
+
286
+ button_conf.click(lambda :None, None, output_conf)
287
+ button_conf.click(
288
+ run,
289
+ inputs=[
290
+ models_dropdown,
291
+ sound_gui,
292
+ pitch_algo_conf,
293
+ pitch_lvl_conf,
294
+ index_inf_conf,
295
+ respiration_filter_conf,
296
+ envelope_ratio_conf,
297
+ consonant_protec_conf,
298
+ ],
299
+ outputs=[output_conf],
300
+ )
301
+
302
+
303
+ with gr.Tab("Model Loader (Download and Upload)"):
304
+ with gr.Accordion("Model Downloader", open=False):
305
+ gr.Markdown(
306
+ "Download the model from the following URL and upload it here. (Huggingface RVC model)"
307
+ )
308
+ model = gr.Textbox(lines=1, label="Model URL")
309
+ name = gr.Textbox(lines=1, label="Model Name", placeholder="Model Name")
310
+ download_button = gr.Button("Download Model")
311
+ status = gr.Textbox(lines=1, label="Status", placeholder="Waiting....", interactive=False)
312
+ model_pth = gr.Textbox(lines=1, label="Model pth file", placeholder="Waiting....", interactive=False)
313
+ index_pth = gr.Textbox(lines=1, label="Index pth file", placeholder="Waiting....", interactive=False)
314
+ download_button.click(download_from_url, [model, name], outputs=[status, model_pth, index_pth])
315
+ with gr.Accordion("Upload A Model", open=False):
316
+ index_file_upload = gr.File(label="Index File (.index)")
317
+ pth_file_upload = gr.File(label="Model File (.pth)")
318
+
319
+ model_name = gr.Textbox(label="Model Name", placeholder="Model Name")
320
+ upload_button = gr.Button("Upload Model")
321
+ upload_status = gr.Textbox(lines=1, label="Status", placeholder="Waiting....", interactive=False)
322
+
323
+ upload_button.click(upload_model, [index_file_upload, pth_file_upload, model_name], upload_status)
324
+
325
+
326
+ with gr.Tab("Vocal Separator (UVR)"):
327
+ gr.Markdown("Separate vocals and instruments from an audio file using UVR models. - This is only on CPU due to ZeroGPU being ZeroGPU :(")
328
+ uvr5_audio_file = gr.Audio(label="Audio File",type="filepath")
329
+
330
+ with gr.Row():
331
+ uvr5_model = gr.Dropdown(label="Model", choices=[model["model_name"] for model in UVR_5_MODELS])
332
+ uvr5_button = gr.Button("Separate Vocals", variant="primary",)
333
+
334
+ uvr5_output_voc = gr.Audio(type="filepath", label="Output 1",)
335
+ uvr5_output_inst = gr.Audio(type="filepath", label="Output 2",)
336
+
337
+ uvr5_button.click(inference, [uvr5_audio_file, uvr5_model], [uvr5_output_voc, uvr5_output_inst])
338
+
339
+ with gr.Tab("Extra"):
340
+ with gr.Accordion("Model Information", open=False):
341
+ def json_to_markdown_table(json_data):
342
+ table = "| Key | Value |\n| --- | --- |\n"
343
+ for key, value in json_data.items():
344
+ table += f"| {key} | {value} |\n"
345
+ return table
346
+ def model_info(name):
347
+ for model in MODELS:
348
+ if model["model_name"] == name:
349
+ print(model["model"])
350
+ info = model_handler.model_info(model["model"])
351
+ info2 = {
352
+ "Model Name": model["model_name"],
353
+ "Model Config": info['config'],
354
+ "Epochs Trained": info['epochs'],
355
+ "Sample Rate": info['sr'],
356
+ "Pitch Guidance": info['f0'],
357
+ "Model Precision": info['size'],
358
+ }
359
+ return gr.Markdown(json_to_markdown_table(info2))
360
+
361
+ return "Model not found"
362
+ def update():
363
+ print(MODELS)
364
+ return gr.Dropdown(label="Model", choices=[model["model_name"] for model in MODELS])
365
+ with gr.Row():
366
+ model_info_dropdown = gr.Dropdown(label="Model", choices=[model["model_name"] for model in MODELS])
367
+ refresh_button = gr.Button("Refresh Models")
368
+ refresh_button.click(update, outputs=[model_info_dropdown])
369
+ model_info_button = gr.Button("Get Model Information")
370
+ model_info_output = gr.Textbox(value="Waiting...",label="Output", interactive=False)
371
+ model_info_button.click(model_info, [model_info_dropdown], [model_info_output])
372
+
373
+
374
+
375
+ with gr.Accordion("Training Time Calculator", open=False):
376
+ with gr.Column():
377
+ epochs_input = gr.Number(label="Number of Epochs")
378
+ seconds_input = gr.Number(label="Seconds per Epoch")
379
+ calculate_button = gr.Button("Calculate Time Remaining")
380
+ remaining_time_output = gr.Textbox(label="Remaining Time", interactive=False)
381
+
382
+ calculate_button.click(calculate_remaining_time,inputs=[epochs_input, seconds_input],outputs=[remaining_time_output])
383
+
384
+ with gr.Accordion("Model Fusion", open=False):
385
+ with gr.Group():
386
+ def merge(ckpt_a, ckpt_b, alpha_a, sr_, if_f0_, info__, name_to_save0, version_2):
387
+ for model in MODELS:
388
+ if model["model_name"] == ckpt_a:
389
+ ckpt_a = model["model"]
390
+ if model["model_name"] == ckpt_b:
391
+ ckpt_b = model["model"]
392
+
393
+ path = model_handler.merge(ckpt_a, ckpt_b, alpha_a, sr_, if_f0_, info__, name_to_save0, version_2)
394
+ if path == "Fail to merge the models. The model architectures are not the same.":
395
+ return "Fail to merge the models. The model architectures are not the same."
396
+ else:
397
+ MODELS.append({"model": path, "index": None, "model_name": name_to_save0})
398
+ return "Merged, saved as " + name_to_save0
399
+
400
+ gr.Markdown(value="Strongly suggested to use only very clean models.")
401
+ with gr.Row():
402
+ def update():
403
+ print(MODELS)
404
+ return gr.Dropdown(label="Model A", choices=[model["model_name"] for model in MODELS]), gr.Dropdown(label="Model B", choices=[model["model_name"] for model in MODELS])
405
+ refresh_button_fusion = gr.Button("Refresh Models")
406
+ ckpt_a = gr.Dropdown(label="Model A", choices=[model["model_name"] for model in MODELS])
407
+ ckpt_b = gr.Dropdown(label="Model B", choices=[model["model_name"] for model in MODELS])
408
+ refresh_button_fusion.click(update, outputs=[ckpt_a, ckpt_b])
409
+ alpha_a = gr.Slider(
410
+ minimum=0,
411
+ maximum=1,
412
+ label="Weight of the first model over the second",
413
+ value=0.5,
414
+ interactive=True,
415
+ )
416
+ with gr.Group():
417
+ with gr.Row():
418
+ sr_ = gr.Radio(
419
+ label="Sample rate of both models",
420
+ choices=["32k","40k", "48k"],
421
+ value="32k",
422
+ interactive=True,
423
+ )
424
+ if_f0_ = gr.Radio(
425
+ label="Pitch Guidance",
426
+ choices=["Yes", "Nah"],
427
+ value="Yes",
428
+ interactive=True,
429
+ )
430
+ info__ = gr.Textbox(
431
+ label="Add informations to the model",
432
+ value="",
433
+ max_lines=8,
434
+ interactive=True,
435
+ visible=False
436
+ )
437
+ name_to_save0 = gr.Textbox(
438
+ label="Final Model name",
439
+ value="",
440
+ max_lines=1,
441
+ interactive=True,
442
+ )
443
+ version_2 = gr.Radio(
444
+ label="Versions of the models",
445
+ choices=["v1", "v2"],
446
+ value="v2",
447
+ interactive=True,
448
+ )
449
+ with gr.Group():
450
+ with gr.Row():
451
+ but6 = gr.Button("Fuse the two models", variant="primary")
452
+ info4 = gr.Textbox(label="Output", value="", max_lines=8)
453
+ but6.click(
454
+ merge,
455
+ [ckpt_a,ckpt_b,alpha_a,sr_,if_f0_,info__,name_to_save0,version_2,],info4,api_name="ckpt_merge",)
456
+
457
+ with gr.Accordion("Model Quantization", open=False):
458
+ gr.Markdown("Quantize the model to a lower precision. - soonβ„’ or neverβ„’ 😎")
459
+
460
+ with gr.Accordion("Debug", open=False):
461
+ def json_to_markdown_table(json_data):
462
+ table = "| Key | Value |\n| --- | --- |\n"
463
+ for key, value in json_data.items():
464
+ table += f"| {key} | {value} |\n"
465
+ return table
466
+ gr.Markdown("View the models that are currently loaded in the instance.")
467
+
468
+ gr.Markdown(json_to_markdown_table({"Models": len(MODELS), "UVR Models": len(UVR_5_MODELS)}))
469
+
470
+ gr.Markdown("View the current status of the instance.")
471
+ status = {
472
+ "Status": "Running", # duh lol
473
+ "Models": len(MODELS),
474
+ "UVR Models": len(UVR_5_MODELS),
475
+ "CPU Usage": f"{psutil.cpu_percent()}%",
476
+ "RAM Usage": f"{psutil.virtual_memory().percent}%",
477
+ "CPU": f"{cpuinfo.get_cpu_info()['brand_raw']}",
478
+ "System Uptime": f"{round(time.time() - psutil.boot_time(), 2)} seconds",
479
+ "System Load Average": f"{psutil.getloadavg()}",
480
+ "====================": "====================",
481
+ "CPU Cores": psutil.cpu_count(),
482
+ "CPU Threads": psutil.cpu_count(logical=True),
483
+ "RAM Total": f"{round(psutil.virtual_memory().total / 1024**3, 2)} GB",
484
+ "RAM Used": f"{round(psutil.virtual_memory().used / 1024**3, 2)} GB",
485
+ "CPU Frequency": f"{psutil.cpu_freq().current} MHz",
486
+ "====================": "====================",
487
+ "GPU": "A100 - Do a request (Inference, you won't see it either way)",
488
+ }
489
+ gr.Markdown(json_to_markdown_table(status))
490
+
491
+ with gr.Tab("Credits"):
492
+ gr.Markdown(
493
+ """
494
+ Ilaria RVC made by [Ilaria](https://huggingface.co/TheStinger), support her on [ko-fi](https://ko-fi.com/ilariaowo)
495
+
496
+ The Inference code is made by [r3gm](https://huggingface.co/r3gm) (his module helped form this space 💖)
497
+
498
+ made with ❤️ by [mikus](https://github.com/cappuch) - made the ui!
499
+
500
+ ## In loving memory of JLabDX 🕊️
501
+ """
502
+ )
503
+
504
+ demo.queue(api_open=False).launch(show_api=False) # Ilaria: flip these flags if you ever want to expose the API
model_handler.py ADDED
@@ -0,0 +1,155 @@
1
+ import torch
2
+ import numpy as np
3
+ import huggingface_hub
4
+ import zipfile
5
+ import os
6
+ from collections import OrderedDict
7
+
8
+ def model_info(model_path):
9
+ model = torch.load(model_path, map_location=torch.device('cpu'))
10
+ info = {
11
+ 'config': model['config'],
12
+ 'info': model['info'],
13
+ 'epochs': model['info'].split('epoch')[0],
14
+ 'sr': model['sr'],
15
+ 'f0': model['f0'],
16
+ 'size': model.get('size', 'fp32'), # quantized checkpoints store their precision under 'size'
17
+ }
18
+ return info
19
+
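+ # Usage sketch (assumes an RVC checkpoint exposing the keys read above):
+ #   info = model_info("model.pth")
+ #   print(info['epochs'], info['sr'], info['f0'])  # e.g. "300", "40k", 1
+ # 'epochs' is parsed from the free-text 'info' field, so it is a string and is
+ # only meaningful when that field starts with "<n>epoch".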
20
+ def merge(path1, path2, alpha1, sr, f0, info, name, version):
21
+ try:
22
+ def extract(ckpt):
23
+ a = ckpt["model"]
24
+ opt = OrderedDict()
25
+ opt["weight"] = {}
26
+ for key in a.keys():
27
+ if "enc_q" in key:
28
+ continue
29
+ opt["weight"][key] = a[key]
30
+ return opt
31
+
32
+ ckpt1 = torch.load(path1, map_location="cpu")
33
+ ckpt2 = torch.load(path2, map_location="cpu")
34
+ cfg = ckpt1["config"]
35
+ if "model" in ckpt1:
36
+ ckpt1 = extract(ckpt1)
37
+ else:
38
+ ckpt1 = ckpt1["weight"]
39
+ if "model" in ckpt2:
40
+ ckpt2 = extract(ckpt2)
41
+ else:
42
+ ckpt2 = ckpt2["weight"]
43
+ if sorted(list(ckpt1.keys())) != sorted(list(ckpt2.keys())):
44
+ return "Fail to merge the models. The model architectures are not the same."
45
+ opt = OrderedDict()
46
+ opt["weight"] = {}
47
+ for key in ckpt1.keys():
48
+ # try:
49
+ if key == "emb_g.weight" and ckpt1[key].shape != ckpt2[key].shape:
50
+ min_shape0 = min(ckpt1[key].shape[0], ckpt2[key].shape[0])
51
+ opt["weight"][key] = (
52
+ alpha1 * (ckpt1[key][:min_shape0].float())
53
+ + (1 - alpha1) * (ckpt2[key][:min_shape0].float())
54
+ ).half()
55
+ else:
56
+ opt["weight"][key] = (
57
+ alpha1 * (ckpt1[key].float()) + (1 - alpha1) * (ckpt2[key].float())
58
+ ).half()
59
+ # except:
60
+ # pdb.set_trace()
61
+ opt["config"] = cfg
62
+ """
63
+ if(sr=="40k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 10, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 40000]
64
+ elif(sr=="48k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,6,2,2,2], 512, [16, 16, 4, 4], 109, 256, 48000]
65
+ elif(sr=="32k"):opt["config"] = [513, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 4, 2, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 32000]
66
+ """
67
+ opt["sr"] = sr
68
+ opt["f0"] = 1 if f0 == "Yes" else 0
69
+ opt["version"] = version
70
+ opt["info"] = info
71
+ torch.save(opt, "models/" + name + ".pth")
72
+ return "models/" + name + ".pth"
73
+ except Exception:
74
+ return "Fail to merge the models. The model architectures are not the same." # <- L if u see this u suck
75
+
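+ # The fusion above is plain linear interpolation of matching weights,
+ # merged = alpha * A + (1 - alpha) * B, stored back in fp16. A toy check
+ # (illustrative, standalone):
+ #   import torch
+ #   a, b = torch.full((2,), 1.0), torch.full((2,), 3.0)
+ #   assert torch.equal((0.5 * a + 0.5 * b).half(), torch.tensor([2.0, 2.0]).half())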
76
+ def model_quant(model_path, size):
77
+ """
78
+ Quantize the model to a lower precision. This is the floating-point version.
79
+
80
+ Args:
81
+ model_path: str, path to the model file
82
+ size: str, one of ["fp2", "fp4", "fp8", "fp16"]
83
+
84
+ Returns:
85
+ str, message indicating the success of the operation
86
+ """
87
+ size_options = ["fp2", "fp4", "fp8", "fp16"]
88
+ if size not in size_options:
89
+ raise ValueError(f"Size must be one of {size_options}")
90
+
91
+ model_base = torch.load(model_path, map_location=torch.device('cpu'))
92
+ model = model_base['weight']
93
+ #model = json.loads(json.dumps(model))
94
+
95
+ if size == "fp16":
96
+ for key in model.keys():
97
+ model[key] = model[key].half() # 16-bit floating point
98
+ elif size == "fp8":
99
+ for key in model.keys():
100
+ model[key] = model[key].half().half() # labeled fp8, but .half() is a no-op after the first call - this is still fp16
101
+ elif size == "fp4":
102
+ for key in model.keys():
103
+ model[key] = model[key].half().half().half() # labeled fp4, but still fp16 - the extra .half() calls change nothing
104
+ elif size == "fp2":
105
+ for key in model.keys():
106
+ model[key] = model[key].half().half().half().half() # labeled fp2, but still fp16 for the same reason
107
+
108
+ print(model_path)
109
+ output_path = model_path.split('.pth')[0] + f'_{size}.pth'
110
+ output_style = {
111
+ 'weight': model,
112
+ 'config': model_base['config'],
113
+ 'info': model_base['info'],
114
+ 'sr': model_base['sr'],
115
+ 'f0': model_base['f0'],
116
+ 'credits': f"Quantized to {size} precision, using Ilaria RVC, (Mikus's script)",
117
+ "size": size
118
+ }
119
+ torch.save(output_style, output_path)
120
+
121
122
+ # Our data isn't safe anymore. As I type this, there is a near-certain chance it will be scraped and used to train yet another language model by a company like OpenAI.
123
+ # I say this as someone who communicates with Microsoft, and I'll stop mentioning it since the two are so closely tied these days.
124
+ # As Fred Durst said: "That's your best friend and your worst enemy - your own brain." Keep your stuff local and never trust scumbag companies, even when they open-source their models - they're still taking data.
125
+ # This is probably the only rant I'll have in this entire space, and I put it in a notable spot.
126
+
127
+ return "Model quantized successfully" # <- enjoy this fucking hot shit that looks like a steaming turd paired with skibidi toilet and the unibomber
128
+
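+ # Why the size labels above are misleading (illustrative check):
+ #   import torch
+ #   t = torch.randn(4)
+ #   assert t.half().dtype == t.half().half().half().dtype == torch.float16
+ # torch.Tensor.half() converts to fp16 and is a no-op on an fp16 tensor, so
+ # every branch of model_quant actually produces fp16.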
129
+ def upload_model(repo, pth, index, token):
130
+ """
131
+ Upload a model to the Hugging Face Hub
132
+
133
+ Args:
134
+ repo: str, the name of the repository
135
+ pth: str, path to the model file
136
+ index: str, path to the model's index file
137
+ token: str, the API token
138
+
139
+ Returns:
140
+ str, message indicating the success of the operation
141
+ """
142
+ readme = f"""
143
+ # {repo}
144
+ This is a model uploaded by Ilaria RVC, using Mikus's script.
145
+ """
146
+ repo_name = repo.split('/')[1]
147
+ with zipfile.ZipFile(f'{repo_name}.zip', 'w') as zipf:
148
+ zipf.write(pth, os.path.basename(pth))
149
+ zipf.write(index, os.path.basename(index))
150
+ zipf.writestr('README.md', readme)
151
+
152
+ huggingface_hub.HfApi().create_repo(repo_id=repo, token=token, exist_ok=True)
153
+ huggingface_hub.HfApi().upload_file(path_or_fileobj=f'{repo_name}.zip', path_in_repo=f'{repo_name}.zip', repo_id=repo, token=token)
154
+ os.remove(f'{repo_name}.zip')
155
+ return "Model uploaded successfully"
requirements.txt CHANGED
@@ -8,3 +8,6 @@ audio-separator[gpu]
8
  scipy
9
  onnxruntime-gpu
10
  samplerate
11
+ transformers
12
+ psutil
13
+ py-cpuinfo