Why don't you state at the outset that you're planning on *charging* people, allowing them to say no thanks without wasting time?
wait this costs money?
Yeah, it's kinda hard to rewrite it into a fully local mode. I'm trying, but haven't succeeded yet.
It duplicates the Space and HF charges you for the GPU rental; that's why you need a token.
damn they should say that up front
It would be great if a full local version could be made available so we can leverage our GPU...!
Agreed. The source code has a function `start_training_og` which looks like it should start training locally, but it hangs with no progress: GPU compute usage stays at 0% while VRAM sits at 8.5 GB.
BLIP is working fine
Bon-Fire: Fine, but you could say it more straightforwardly than "for cheap".
Something like:
"Please note, before you take the time to upload, that there will be a paywall, and this will include extra steps to authenticate or create an account in case you are surfing by in a logged-out state."
Which is very frequent, because it's possible to have fun with the chats and image models in a logged-out, anonymous state.
I tried to run it locally and it was impossible: some dependencies are missing. It would be nice to have proper, reproducible steps for running it locally; some of us have 4090 GPUs and are willing to try this tool, but locally.
This is basically a scam when they don't inform you at the start that it costs money.
I managed to run this fully locally on my own GPU, but it's 3 AM here, so I'll post a ready-to-use solution later.
Details for the impatient and for those who can get their hands into the code:
- app.py has a function `start_training_og`. To call it, change the call from `start_training` to `start_training_og` around line 982. `start_training_og` accepts fewer arguments, so you have to comment out `# use_prodigy_beta3,`, `# use_adam_weight_decay_text_encoder,` and `# token`.
- Then we have to delete `visible=False` around line 887, so it reads `start = gr.Button("Start training", interactive=True)`.
- Now head to the `start_training_og` function and find `from train_dreambooth_lora_sdxl_advanced import main as train_main, parse_args as parse_train_args` near the end of the function (around line 478). I don't know why, but it only runs via a subprocess call; without that it just hangs and eats RAM.
- To call it I have a not-great but working piece of code:
import subprocess

# Split each "--flag=value" entry into "--flag" and "value"
# (maxsplit=1 keeps any '=' inside the value intact).
opts = []
for a in commands:
    argval = a.split("=", 1)
    opts.append(argval[0])
    if len(argval) > 1:
        opts.append(argval[1])

# Calling train_main() directly hangs with heavy RAM consumption,
# so launch the training script in a subprocess instead.
# args = parse_train_args(shlex.split(argst))
# print(args)
# train_main(args)
subprocess.call(["python", "./train_dreambooth_lora_sdxl_advanced.py", *opts])
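For context, `commands` is the list of `--flag` / `--flag=value` strings that `start_training_og` builds from the UI settings before reaching this code. A purely illustrative example of what the loop splits apart (the values here are placeholders, not what the Space actually generates):

```python
# Illustrative only: the real list is assembled by start_training_og from the UI inputs.
commands = [
    "--pretrained_model_name_or_path=stabilityai/stable-diffusion-xl-base-1.0",  # example model id
    "--instance_prompt='a photo of TOK'",  # example trigger phrase, wrapped in single quotes
    "--caption_column=prompt",             # must match a column that exists in the dataset
    "--train_text_encoder",                # flags without '=' pass through the loop unchanged
]
```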
If you have trouble with dependencies, you can head to the `start_training` function (yes, not `_og`); it has a `requirements` variable you can install from manually.
Tested on an RTX 3090 with huge VRAM spikes, but no OOM.
Still seeking advice on the HF Spaces / source code license.
How much VRAM does it use? Are you using Linux or Windows?
Linux.
VRAM usage heavily depends on the advanced settings. Currently I'm running the Prodigy optimizer, but you can use AdamW 8-bit, which should consume less. (I successfully fit into 12 GB of VRAM back when I had an RTX 3060, using a different LoRA training script.)
Barely touching the advanced settings, it consumes 18.539 GiB, but I'm almost sure that can be lowered.
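If you want to compare numbers from inside the training process rather than eyeballing nvidia-smi, here is a minimal sketch using PyTorch's own counters (this only sees memory allocated through PyTorch, so it will read a bit lower than the driver's figure):

```python
import torch

# Minimal sketch: report peak VRAM allocated by PyTorch on GPU 0.
if torch.cuda.is_available():
    peak_bytes = torch.cuda.max_memory_allocated(device=0)
    print(f"Peak VRAM allocated: {peak_bytes / 1024**3:.2f} GiB")
```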
Hello? Earth to bon-fire?
It's easy for you to change the static text on your interface so that you state, more prominently and clearly than just saying "for cheap", that there is going to be a charge at the end and a need to authenticate.
So why don't you do it? Just improve some phrasing. Nothing to do with software.
The issue is not (a) why you are forced to pass on a charge, nor (b) whether it can be run locally. The issue is simply stating, more prominently and clearly for people who stick with the website, that there is going to be a charge, so the user knows this before taking the time to upload images.
Traceback (most recent call last):
  File "/content/lora-ease/app.py", line 18, in <module>
    import spaces
  File "/usr/local/lib/python3.10/dist-packages/spaces/__init__.py", line 10, in <module>
    from .zero.decorator import GPU
  File "/usr/local/lib/python3.10/dist-packages/spaces/zero/decorator.py", line 17, in <module>
    from . import client
  File "/usr/local/lib/python3.10/dist-packages/spaces/zero/client.py", line 17, in <module>
    from .gradio import get_event
  File "/usr/local/lib/python3.10/dist-packages/spaces/zero/gradio.py", line 8, in <module>
    from gradio.context import LocalContext
ImportError: cannot import name 'LocalContext' from 'gradio.context' (/usr/local/lib/python3.10/dist-packages/gradio/context.py)
help
@whichever
Bruh, I'm not the repo owner. I'm not a member of multimodalart and I didn't develop this Space.
I'm upset just like you, but the repo owners kept the possibility of running it locally in the source code, so I'm just fixing that on my side.
ImportError: cannot import name 'LocalContext' from 'gradio.context' (/usr/local/lib/python3.10/dist-packages/gradio/context.py)
Try installing the latest Gradio: `pip install gradio --upgrade`.
I have gradio==4.12.0 and gradio_client==0.8.0.
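After upgrading, a quick sanity check that your installed Gradio exposes the symbol the `spaces` package tries to import (this is literally the failing import from the traceback above):

```python
# Reproduce the failing import directly; if this runs, the spaces package should import too.
import gradio
print("gradio version:", gradio.__version__)

from gradio.context import LocalContext  # raises ImportError on incompatible versions
print("LocalContext import OK")
```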
I'm a bit confused. Before you train the model, it gives you an estimated cost in time and money for the current run. Why are people saying this is a scam or that they don't tell you it costs money? Does the cost and time message not show up for everyone?
Did you actually read what people are saying in this thread?
They don't tell you AT THE START. They only tell you once you have already filled in the information and uploaded your images. It's a known scam tactic.
What's the scam though? I'm curious because I'm currently using this, and if I'm getting scammed it'd be great to know.
Nobody is claiming that it doesn't function. It's simply a matter of them failing to inform you UPFRONT that it has a charge. You've already invested time into trying it out prior to being informed of the fee. I believe this constitutes a bait-and-switch scheme.
I give up. I made this thread about something narrow. The ideal people to come to this thread and respond would be multimodalart, but they aren't here. Instead, some fool who uses chan-board sneers ("bruh") thought it would be a really great place to go on and on about an unrelated sidetrack.
Thank you to the people who get it.
@Bon-Fire
Thanks for the snippet. I am trying to implement it myself, but keep running into the same issue. It looks like there is some sort of DF creation error during the run? I am trying to troubleshoot it today.
Figured I would share in case anyone has run into this and come up with a solution. If I do, I will update this comment.
Generating train split: 3 examples [00:00, 84.55 examples/s]
Traceback (most recent call last):
  File "/notebooks/lora-ease/./train_dreambooth_lora_sdxl_advanced.py", line 2104, in <module>
    main(args)
  File "/notebooks/lora-ease/./train_dreambooth_lora_sdxl_advanced.py", line 1491, in main
    train_dataset = DreamBoothDataset(
  File "/notebooks/lora-ease/./train_dreambooth_lora_sdxl_advanced.py", line 883, in __init__
    raise ValueError(
ValueError: `--caption_column` value 'prompt' not found in dataset columns. Dataset columns are: image
I wanted to share mine as well, but I'm having a tough time right now; let me know when you share yours.
ValueError: `--caption_column` value 'prompt' not found in dataset columns. Dataset columns are: image
That's strange. I had problems with `instance_prompt`, which can be solved by enclosing `concept_sentence` in single quotes, like this: `f"--instance_prompt='{concept_sentence}'",`
For the value you mention I have just `"--caption_column=prompt",`
Perhaps you have `{{prompt}}` there?
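For what it's worth, that error just means the dataset the script ends up loading only has an `image` column, so there are no captions attached for `--caption_column` to point at. If you want to see what your local data looks like from Python, here is a quick check with the `datasets` library (the folder path is a placeholder; the Space may build its dataset differently):

```python
from datasets import load_dataset

# Load a local image folder the same way `datasets` does and inspect its columns.
ds = load_dataset("imagefolder", data_dir="./my_training_images", split="train")
print(ds.column_names)  # only ['image'] means no caption column was attached
```

With the imagefolder loader, extra columns like a prompt/caption usually come from a metadata.csv or metadata.jsonl sitting next to the images.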
Traceback (most recent call last):
  File "d:\Lora-ease app\app.py", line 24, in <module>
    subprocess.run(['wget', '-N', training_script_url])
  File "c:\Users\marlo\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 503, in run
    with Popen(*popenargs, **kwargs) as process:
  File "c:\Users\marlo\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 971, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "c:\Users\marlo\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1456, in _execute_child
    hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] The system cannot find the file specified.
from train_dreambooth_lora_sdxl_advanced import main as train_main, parse_args as parse_train_args  # type: ignore
args = parse_train_args(commands)
train_main(args)
return "ok!"

import subprocess
opts = []
for a in commands:
    argval = a.split("=")
    opts.append(argval[0])
    if len(argval) > 1:
        opts.append(argval[1])
# args = parse_train_args(shlex.split(argst))
# print(args)
# train_main(args)
subprocess.call(["python", "D:\requirements\huggingface\diffusers\examples\advanced_diffusion_training\train_dreambooth_lora_sdxl_advanced.py", *opts])
@Bon-Fire
I am a beginner in coding. I tried to follow your method step by step, but still couldn't get it to work. I have had a hard time finding a solution through search engines. If you come across this issue and have some time, please guide me. I am using a Windows environment. Thank you very much!
Unfortunately, Windows isn't very friendly with this. You can try WSL, but I'm not sure that will work either.
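For whoever wants to try anyway, two Windows-specific things stand out in the traceback and snippet above: `wget` usually isn't available on Windows (hence the WinError 2 on the `subprocess.run(['wget', ...])` line), and a plain string like "D:\requirements\..." treats "\r" as an escape sequence. A rough, untested sketch of how one might adapt those two spots (placeholder URL; variable names follow the earlier snippets):

```python
import subprocess
import sys
import urllib.request

# Replace app.py's wget call with a pure-Python download; wget is rarely present on Windows.
training_script_url = "https://example.com/train_dreambooth_lora_sdxl_advanced.py"  # placeholder, use app.py's real URL
urllib.request.urlretrieve(training_script_url, "train_dreambooth_lora_sdxl_advanced.py")

# Raw string so backslashes in the Windows path are kept literally,
# and sys.executable so the subprocess runs in the same Python environment.
script_path = r"D:\requirements\huggingface\diffusers\examples\advanced_diffusion_training\train_dreambooth_lora_sdxl_advanced.py"
opts = []  # built from `commands` exactly as in the earlier snippet
subprocess.call([sys.executable, script_path, *opts])
```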
Thanks for the feedback, everyone. I hope it's now clear that a very small payment is required if you train using Spaces (with the improved wording at the beginning), and there is now a fully local training alternative for those who prefer that route.