Update README.md
README.md CHANGED
@@ -10,7 +10,7 @@ language:
 library_name: diffusers
 ---

-#
+# IMAGDressing: Interactive Modular Apparel Generation for Dressing

 ## IMAGDressing-v1: Customizable Virtual Dressing

@@ -26,5 +26,6 @@ library_name: diffusers

 To address the need for flexible and controllable customizations in virtual try-on systems, we propose IMAGDressing-v1. Specifically, we introduce a garment UNet that captures semantic features from CLIP and texture features from VAE. Our hybrid attention module includes a frozen self-attention and a trainable cross-attention, integrating these features into a frozen denoising UNet to ensure user-controlled editing. We will release a comprehensive dataset, IGv1, with over 200,000 pairs of clothing and dressed images, and establish a standard data assembly pipeline. Furthermore, IMAGDressing-v1 can be combined with extensions like ControlNet, IP-Adapter, T2I-Adapter, and AnimateDiff to enhance diversity and controllability.

+![framework](assets/pipeline.png)

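The paragraph above describes the hybrid attention module only at a high level. Below is a minimal PyTorch sketch of that idea, assuming hypothetical names (`HybridAttention`, `garment_features`, `scale`) and standard `nn.MultiheadAttention` layers in place of whatever the released code actually uses; it illustrates the frozen self-attention plus trainable cross-attention residual, not the repository's implementation.

```python
# Minimal sketch of the hybrid attention idea, assuming torch is available.
# All class/argument names here are illustrative, not from the IMAGDressing repo.
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Frozen self-attention on the denoising-UNet hidden states, plus a
    trainable cross-attention that attends to garment-UNet features."""

    def __init__(self, dim: int, num_heads: int = 8, scale: float = 1.0):
        super().__init__()
        # Frozen branch: the pretrained UNet's self-attention stays fixed.
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        for p in self.self_attn.parameters():
            p.requires_grad_(False)
        # Trainable branch: cross-attention over garment features.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.scale = scale  # hypothetical knob weighting the garment branch

    def forward(self, hidden_states: torch.Tensor,
                garment_features: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, tokens, dim) from the frozen denoising UNet.
        # garment_features: (batch, garment_tokens, dim) from the garment UNet,
        # which fuses CLIP semantic features with VAE texture features.
        self_out, _ = self.self_attn(hidden_states, hidden_states, hidden_states)
        cross_out, _ = self.cross_attn(hidden_states, garment_features,
                                       garment_features)
        # Garment information enters as a residual term, so the base
        # denoising UNet itself never needs to be fine-tuned.
        return self_out + self.scale * cross_out
```

Because only the cross-attention branch is trainable, the base text-to-image UNet is untouched, which is what lets the method compose with ControlNet, IP-Adapter, T2I-Adapter, and AnimateDiff as the abstract notes.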