As a workaround, this would be fine...
This works around a problem where the fp16 file that gets created stays the same size as the fp32 file. The same problem occurs in the SDXL conversion Space and in a local environment. It was already happening around six months ago when I first converted my files... I forgot to report it at the time.
https://huggingface.co/spaces/diffusers/sdxl-to-diffusers
The workaround with code like this is fine, but the real problem is that casting with .to(torch.float16) doesn't seem to work.
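For reference, here is a minimal sketch of the cast-after-load path that doesn't seem to work; the model ID and output directory are placeholders, not the actual code from the Space:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load in the default precision (fp32), then cast afterwards.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"  # placeholder model ID
)
pipe = pipe.to(torch.float16)  # this cast reportedly has no effect on the saved files

# The files written here reportedly stay fp32-sized.
pipe.save_pretrained("sdxl-fp16")
```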
I have not verified whether the file merely stays 32-bit in size while the internal precision really is 16-bit, but perhaps it is not working as expected either way.
If this is indeed a bug, it would be better to fix that bug rather than merge this commit.
Specifying torch_dtype=torch.float16 at load time works fine, which is why this workaround works.
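As a rough sketch of what the workaround amounts to (again with a placeholder model ID), loading directly in fp16 does produce fp16-sized files:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load directly in fp16 instead of casting after the fact.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder model ID
    torch_dtype=torch.float16,
)

# Saved this way, the files come out fp16-sized as expected.
pipe.save_pretrained("sdxl-fp16")
```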
Before fix
https://huggingface.co/John6666/convtest2