DreamTuner: Single Image is Enough for Subject-Driven Generation
Abstract
Diffusion-based models have demonstrated impressive capabilities for text-to-image generation and are expected to enable personalized, subject-driven applications, which require generating customized concepts from one or a few reference images. However, existing fine-tuning-based methods fail to balance the trade-off between learning the subject and preserving the generation capabilities of the pretrained model. Moreover, methods that rely on an additional image encoder tend to lose important subject details due to encoding compression. To address these challenges, we propose DreamTuner, a novel method that injects reference information from coarse to fine to achieve subject-driven image generation more effectively. DreamTuner introduces a subject-encoder for coarse subject-identity preservation, where compressed general subject features are injected through an attention layer placed before the visual-text cross-attention. We then modify the self-attention layers of the pretrained text-to-image model into self-subject-attention layers to refine the details of the target subject: in self-subject-attention, the generated image queries detailed features from both the reference image and itself. It is worth emphasizing that self-subject-attention is an effective, elegant, and training-free way to maintain the detailed features of a customized subject, and it can serve as a plug-and-play module at inference time. Finally, with additional subject-driven fine-tuning, DreamTuner achieves remarkable performance in subject-driven image generation, controlled by text or by other conditions such as pose. For further details, please visit the project page at https://dreamtuner-diffusion.github.io/.
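For intuition, here is a minimal PyTorch sketch of a self-subject-attention layer. This is an illustration only, not the authors' released code (none is available): the module and argument names (`SelfSubjectAttention`, `ref_feats`, `to_q`/`to_k`/`to_v`) and the plain concatenation of reference tokens into the key/value sequence are assumptions based on the abstract's statement that the generated image queries detailed features from both the reference image and itself.

```python
# Hedged sketch of self-subject-attention. Names and the concatenation
# scheme are assumptions inferred from the abstract, not the paper's
# official implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfSubjectAttention(nn.Module):
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        # In a real U-Net these projections would reuse the pretrained
        # self-attention weights; here they are randomly initialized.
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def _split_heads(self, t):
        b, n, d = t.shape
        return t.view(b, n, self.num_heads, d // self.num_heads).transpose(1, 2)

    def forward(self, x, ref_feats):
        # x:         (B, N, D) features of the image being generated
        # ref_feats: (B, M, D) features of the reference image at the
        #            corresponding layer (assumed to come from running
        #            the frozen model on the reference image)
        q = self._split_heads(self.to_q(x))
        # Queries attend over BOTH the generated image's own tokens and
        # the reference tokens, so fine subject details can be carried over.
        kv = torch.cat([x, ref_feats], dim=1)
        k = self._split_heads(self.to_k(kv))
        v = self._split_heads(self.to_v(kv))
        out = F.scaled_dot_product_attention(q, k, v)      # (B, H, N, D/H)
        out = out.transpose(1, 2).reshape(x.shape)         # (B, N, D)
        return self.to_out(out)

# Toy usage: 2 images, 64 tokens each, 320-dim features.
x = torch.randn(2, 64, 320)
ref = torch.randn(2, 64, 320)
attn = SelfSubjectAttention(dim=320)
print(attn(x, ref).shape)  # torch.Size([2, 64, 320])
```

Under this reading, the layer only widens what the existing self-attention keys and values cover, so the pretrained weights can be reused unchanged, which would explain why the abstract describes it as training-free and plug-and-play at inference.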
Community
Will you ever release the code for this or will it be like the other projects that promise similar things?
No
I don't see you in the author list, so I can't assume your answer is official. Where did you get this information?
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SSR-Encoder: Encoding Selective Subject Representation for Subject-Driven Generation (2023)
- Decoupled Textual Embeddings for Customized Image Generation (2023)
- HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models (2023)
- VideoBooth: Diffusion-based Video Generation with Image Prompts (2023)
- DreamVideo: Composing Your Dream Videos with Customized Subject and Motion (2023)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space