TextBoost: Towards One-Shot Personalization of Text-to-Image Models via Fine-tuning Text Encoder
Abstract
Recent breakthroughs in text-to-image models have opened up promising research avenues in personalized image generation, enabling users to create diverse images of a specific subject using natural language prompts. However, existing methods often suffer from performance degradation when given only a single reference image. They tend to overfit the input, producing highly similar outputs regardless of the text prompt. This paper addresses the challenge of one-shot personalization by mitigating overfitting, enabling the creation of controllable images through text prompts. Specifically, we propose a selective fine-tuning strategy that focuses on the text encoder. Furthermore, we introduce three key techniques to enhance personalization performance: (1) augmentation tokens to encourage feature disentanglement and alleviate overfitting, (2) a knowledge-preservation loss to reduce language drift and promote generalizability across diverse prompts, and (3) SNR-weighted sampling for efficient training. Extensive experiments demonstrate that our approach efficiently generates high-quality, diverse images using only a single reference image while significantly reducing memory and storage requirements.
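The SNR-weighted sampling mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the linear beta schedule and the `1/(1+SNR)` weighting function are assumptions chosen for clarity.

```python
import random

def make_snr(num_steps: int = 1000, beta_start: float = 1e-4, beta_end: float = 0.02):
    """Compute SNR(t) = alpha_bar_t / (1 - alpha_bar_t) for a standard
    linear DDPM beta schedule (assumed here for illustration)."""
    snrs = []
    alpha_bar = 1.0
    for t in range(num_steps):
        beta = beta_start + (beta_end - beta_start) * t / (num_steps - 1)
        alpha_bar *= 1.0 - beta
        snrs.append(alpha_bar / (1.0 - alpha_bar))
    return snrs

def snr_weighted_timestep(snrs, rng=random):
    """Sample a training timestep with probability proportional to a
    hypothetical weight 1/(1+SNR), upweighting noisier (low-SNR) steps."""
    weights = [1.0 / (1.0 + s) for s in snrs]
    return rng.choices(range(len(snrs)), weights=weights, k=1)[0]
```

In training, such a sampler would replace the usual uniform draw of `t`, concentrating updates on timesteps that matter most for the personalization signal.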
Community
Very interesting work. Does the training process involve different data augmentation methods? If so, does each one correspond to a different augmentation pseudo-word A*?
Thank you for your interest in our work :) We did apply various types of augmentation, including a range of geometric and color transformations (Figure 10), and, as you noted, each augmentation corresponds to its own A*. For details on the implementation, please feel free to visit our GitHub repository!
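As a rough illustration of that per-augmentation pseudo-word scheme (the token names and the prompt template below are hypothetical, not the repository's actual identifiers):

```python
# Hypothetical mapping: each data augmentation gets its own pseudo-word A*.
AUG_TOKENS = {
    "horizontal_flip": "<aug_flip>",
    "rotation": "<aug_rot>",
    "color_jitter": "<aug_color>",
}

def build_prompt(subject_token: str, augmentation: str) -> str:
    """Prepend the augmentation-specific pseudo-word so the text encoder can
    disentangle augmentation-induced appearance changes from the subject."""
    aug_token = AUG_TOKENS[augmentation]
    return f"a photo of {aug_token} {subject_token}"
```

During fine-tuning, the augmented image would be paired with the matching `A*` token, so that augmentation artifacts attach to `A*` rather than to the subject's identifier.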
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- CoRe: Context-Regularized Text Embedding Learning for Text-to-Image Personalization (2024)
- DiffLoRA: Generating Personalized Low-Rank Adaptation Weights with Diffusion (2024)
- CustomCrafter: Customized Video Generation with Preserving Motion and Concept Composition Abilities (2024)
- Subject-driven Text-to-Image Generation via Preference-based Reinforcement Learning (2024)
- PreciseControl: Enhancing Text-To-Image Diffusion Models with Fine-Grained Attribute Control (2024)