Spaces:
Runtime error
Demo not working
Hi @merve @hysts , I have been trying to load the demo for quite some time and it does not seem to work.
I do see the following in the Build tab, but nothing is visible on the demo itself:

```
--> Pushing image
DONE 264.3s
--> Exporting cache
DONE 0.7s
```
I tried the factory rebuild feature too, but it still does not work. Is this a GPU issue? I wanted to show this demo at CVPR in 2 weeks, so I just wanted to get it up and running.
I do not see any error. The Space has been starting for the past 4-5 hours; I think it should not take this long.
Well, it seems that the Space is taking a long time to start. I found that your Space contains large model weights here; they make the Docker image of the Space larger and slow down startup significantly. I'm not 100% sure, but I think it would start faster if you used the huggingface_hub library to download your models at startup. (docs)
Oh, is it taking that long? Hmm, I have no idea why, but I think something is wrong.
I duplicated this Space, but it took more than 15 minutes to build and it's stuck at startup for me too. I asked about this issue in our internal Slack channel, but it may take some time for people to respond due to the time zone difference.
Thank you.
@hbXNov
Can you create two model repositories for mplugowl7bvideo and owl-con and move the weight files to them?
You can then download your models with the huggingface_hub library at startup by replacing these lines with something like this:

```python
pretrained_ckpt = "hbXNov/mplugowl7bvideo"
trained_ckpt = hf_hub_download(
    repo_id="hbXNov/owl-con",
    subfolder="checkpoint-5178",
    filename="pytorch_model.bin",
    repo_type="model",
)
```

(Assuming you created two model repos named mplugowl7bvideo and owl-con under your profile.)
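For the "move the weight files" step, a minimal sketch using the huggingface_hub library (the repo names are the ones suggested above; the local folder paths are assumptions about where the weights currently live, so adjust them to your layout):

```python
from huggingface_hub import HfApi

# Assumes you are already authenticated, e.g. via `huggingface-cli login`.
api = HfApi()

# Create the two model repos under your profile (no-op if they already exist).
api.create_repo("hbXNov/mplugowl7bvideo", repo_type="model", exist_ok=True)
api.create_repo("hbXNov/owl-con", repo_type="model", exist_ok=True)

# Upload the local weight folders; "mplugowl7bvideo" and "owl-con" here are
# hypothetical local paths, not confirmed from the original Space.
api.upload_folder(
    folder_path="mplugowl7bvideo",
    repo_id="hbXNov/mplugowl7bvideo",
    repo_type="model",
)
api.upload_folder(
    folder_path="owl-con",
    repo_id="hbXNov/owl-con",
    repo_type="model",
)
```

After the uploads, you can delete the weight files from the Space repo itself so the Docker image stays small and the files are fetched at startup instead.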
The reason your Space didn't start properly is that the free ephemeral storage for Spaces is 50GB, but your Space repo is larger than that. (The total size of your models is 30GB+, and .git keeps copies of those files, so the total repo size is over 60GB.)
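To see this locally, you can compare the working tree against what .git is storing; a quick sketch (the `my-space` path is an illustrative placeholder for your cloned Space repo):

```shell
# Size of the whole repo vs. just the .git directory, which keeps
# copies of every large file ever committed.
du -sh my-space/
du -sh my-space/.git

# Git's own accounting of stored objects (size-pack is usually
# the bulk when large weights were committed).
git -C my-space count-objects -vH
```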
I'm not sure why your Space started up successfully before, but at least now we know the cause of the current error and how to fix it.
Got it, I will make the required fixes! tysm :)