Reimu Hakurei committed
Commit: d54a055 · Parent(s): 9e45f8d
Update README.md
README.md CHANGED
# waifu-diffusion - Diffusion for Weebs

waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.

## Model Description

The model used for fine-tuning is [Stable Diffusion V1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), a latent text-to-image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en).

The current model has been fine-tuned on 56 thousand Danbooru images selected for an aesthetic score greater than `6.0`.

With [Textual Inversion](https://github.com/rinongal/textual_inversion), the embeddings for the text encoder have been trained to align more closely with anime-styled images, reducing the need for excessive prompting.
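
Since no embedding artifact appears in this diff, the following is only a minimal sketch of how a learned Textual Inversion embedding is typically consumed at inference time with a recent diffusers release; the file name `anime-style.bin`, the placeholder token `<anime-style>`, and the `hakurei/waifu-diffusion` repo id are illustrative assumptions, not things this card ships.

```python
from diffusers import StableDiffusionPipeline

# Repo id assumed for illustration; it does not appear in this diff.
pipe = StableDiffusionPipeline.from_pretrained("hakurei/waifu-diffusion")

# Hypothetical embedding file and placeholder token, for illustration only.
pipe.load_textual_inversion("anime-style.bin", token="<anime-style>")

# The placeholder token can now be used directly inside prompts.
image = pipe("1girl shrine maiden, <anime-style>").images[0]
image.save("textual_inversion_sample.png")
```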

## Training Data & Annotative Prompting

The data used for fine-tuning came from a random sample of 56k Danbooru images, filtered with [CLIP Aesthetic Scoring](https://github.com/christophschuhmann/improved-aesthetic-predictor) so that only images with an aesthetic score greater than `6.0` were used.
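
The filtering step can be sketched as follows; this is not the project's exact script. It assumes the improved-aesthetic-predictor approach of a small regression head on l2-normalized CLIP ViT-L/14 image embeddings, with a randomly initialized placeholder standing in for the predictor's trained weights, and hypothetical file paths.

```python
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

# Placeholder for the predictor's trained regression head; the real weights
# come from the improved-aesthetic-predictor checkpoint.
aesthetic_head = torch.nn.Linear(768, 1).to(device)

def aesthetic_score(path: str) -> float:
    """Score one image: CLIP-embed, l2-normalize, regress to a scalar."""
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        feats = model.encode_image(image).float()
        feats = feats / feats.norm(dim=-1, keepdim=True)
        return aesthetic_head(feats).item()

# Keep only images scoring above the 6.0 threshold described above.
candidates = ["danbooru_0001.png", "danbooru_0002.png"]  # hypothetical paths
kept = [p for p in candidates if aesthetic_score(p) > 6.0]
```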

## Downstream Uses

This model can be used for entertainment purposes and as a generative art assistant. The EMA weights can also serve as a starting point for additional fine-tuning.

## Example Code
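
The example block itself is unchanged by this commit and is elided from the diff. What follows is only a minimal sketch of such usage with a recent diffusers release, assuming the `hakurei/waifu-diffusion` repo id and an illustrative Danbooru-tag prompt, and reusing the `reimu_hakurei.png` output filename from the original example.

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id assumed for illustration; it does not appear in this diff.
pipe = StableDiffusionPipeline.from_pretrained(
    "hakurei/waifu-diffusion",
    torch_dtype=torch.float16,
).to("cuda")

# Illustrative prompt; Danbooru-style tags tend to suit this model.
prompt = "touhou hakurei_reimu 1girl solo portrait"
image = pipe(prompt, guidance_scale=6.0).images[0]

image.save("reimu_hakurei.png")  # filename from the original example
```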

## Team Members and Acknowledgements

This project would not have been possible without the incredible work by the [CompVis Researchers](https://ommer-lab.com/).

- [Anthony Mercurio](https://github.com/harubaru)
- [Salt](https://github.com/sALTaccount/)