Pattern
What is your CFG? Try a lower CFG, such as 2-3.
Are you generating with ComfyUI? I also get patterns when I use the wrong sampler/scheduler. Additionally, try generating square images at 1024x1024 with 4 steps.
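If it helps, those suggestions (low CFG, square 1024x1024, 4 steps) map to something like the following Diffusers call. This is just a sketch: the prompt is a placeholder, and I'm using the same checkpoint that comes up later in the thread.

import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-xl-lightning", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Square 1024x1024, 4 steps, CFG in the suggested 2-3 range
image = pipe(
    "a portrait photo",  # placeholder prompt
    width=1024,
    height=1024,
    num_inference_steps=4,
    guidance_scale=2.0,
).images[0]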
That looks more like JPEG noise -- probably something with the sampler, but I'd laugh if somehow ByteDance's distill is full of artifacts.
Here are the square images at 2/3/4:
There is still some patterning, but since it's smaller, it looks more like grain.
I'm using the code from the model card inside my Blender add-on, Pallaidium (I'm now testing with the same scheduler as in your model card): https://github.com/tin2tin/Pallaidium
(It's Diffusers-based.)
Adding this seems to help with the problem:
import torch
from diffusers import AutoencoderKL, AutoPipelineForText2Image

# The fp16-fix VAE avoids the artifacts the stock SDXL VAE produces at float16
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16,
    local_files_only=local_files_only,  # flag from the add-on's settings
)
pipe = AutoPipelineForText2Image.from_pretrained(
    "Lykon/dreamshaper-xl-lightning", torch_dtype=torch.float16, variant="fp16", vae=vae
)
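A rough sketch of how the fixed pipeline then gets called (prompt and output path are placeholders):

pipe.to("cuda")
# Same 4-step, low-CFG settings suggested earlier in the thread
image = pipe("a portrait photo", num_inference_steps=4, guidance_scale=2.0).images[0]
image.save("pattern_test.png")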
Ah yes, the VAE can cause such issues - kinda forgot about that.
OH, also: if you're adding EXTRA noise during hires (or whatever that setting was called in Auto), or even in diffusers: don't.
So if it isn't the VAE, that could be the other thing.
I noticed this once when I was testing my own content, not just Lykon's.
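In Diffusers terms, the closest analogue I can think of is a plain img2img second pass over the upscaled first result, without injecting any noise beyond what strength already implies. A rough sketch (the strength/step values are illustrative, and first_pass is the PIL image from the text2img call):

from diffusers import AutoPipelineForImage2Image

# Reuse the already-loaded text2img components for the hires pass
pipe_i2i = AutoPipelineForImage2Image.from_pipe(pipe)

# Upscale the first result, then refine it; no extra noise is added on top
hires_input = first_pass.resize((first_pass.width * 2, first_pass.height * 2))
hires = pipe_i2i(
    "a portrait photo",     # placeholder prompt, same as the first pass
    image=hires_input,
    strength=0.5,           # illustrative; fraction of the schedule actually run
    num_inference_steps=8,  # 0.5 * 8 = 4 effective denoising steps
    guidance_scale=2.0,
).images[0]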
Please use ComfyUI :)
I'm trying to give Blender users the ability to use your model through my Blender add-on: https://github.com/tin2tin/Pallaidium
Yeah, but Comfy inference is much better, and it's the official StabilityAI library. You should use a Comfy backend and make an interface node.
Btw, @tintwotin, I use ComfyUI myself, and I also generate through a different application. I just use ComfyUI as the backend and access it via its API. Works perfectly fine.
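For anyone wanting to do the same, ComfyUI's built-in HTTP API takes a workflow exported in API format. A minimal sketch, assuming a local ComfyUI on its default port and a workflow saved as workflow_api.json:

import json
import requests

# Workflow exported from ComfyUI via "Save (API Format)" -- filename is an assumption
with open("workflow_api.json") as f:
    workflow = json.load(f)

# Queue it on a locally running ComfyUI instance (default address 127.0.0.1:8188)
resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow})
print(resp.json())  # returns a prompt_id you can poll via /history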
Try using DPMSolverSinglestepScheduler; I think it's better, but still not as good as Comfy.
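In Diffusers that swap is just (reusing the pipe from the earlier snippet):

from diffusers import DPMSolverSinglestepScheduler

# Swap the scheduler while keeping the pipeline's existing scheduler config
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(pipe.scheduler.config)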