dverdu-freepik committed
Commit e45409a
1 Parent(s): abf16b5

fix: Update README.md

Files changed (2)
  1. README.md +20 -17
  2. sample_images/models_comparison.png +3 -0
README.md CHANGED
@@ -19,25 +19,12 @@ We want to announce the alpha version of our new distilled Flux.1 Lite model, an
 
 Our goal is to further reduce FLUX.1-dev transformer parameters up to 24Gb to make it compatible with most of GPU cards.
 
-
- ## News🔥🔥🔥
- * Oct.18, 2024. Alpha 8B checkpoint and comparison demo 🤗 (i.e. [Flux.1 Lite](https://huggingface.co/spaces/Freepik/flux.1-lite)) is publicly available on [HuggingFace Repo](https://huggingface.co/Freepik/flux.1-lite-8B-alpha).
-
- ## Try our Hugging Face demos:
- Flux.1 Lite demo host on [🤗 flux.1-lite](https://huggingface.co/spaces/Freepik/flux.1-lite)
-
- ## Introduction
-
- Hyper-SD is one of the new State-of-the-Art diffusion model acceleration techniques.
- In this repository, we release the models distilled from [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev), [SD3-Medium](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers), [SDXL Base 1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and [Stable-Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)。
-
- ## Checkpoints
-
- * `flux.1-lite-8B-alpha.safetensors`: Transformer checkpoint, in Flux original format.
+ ![Flux.1 Lite vs FLUX.1-dev](./sample_images/models_comparison.png)
 
 ## Text-to-Image Usage
 
- ### FLUX.1-dev-related models
+ It is recommended to use a `guidance_scale` of 3.5 and a `n_steps` between 22 and 30 for best results.
+
 ```python
 import torch
 from diffusers import FluxPipeline
@@ -69,4 +56,20 @@ with torch.inference_mode():
 width=1024,
 ).images[0]
 image.save("output.png")
- ```
+ ```
+
+ ## Checkpoints
+
+ * `flux.1-lite-8B-alpha.safetensors`: Transformer checkpoint, in Flux original format.
+ * `transformers/`: Contains distilled 8B transformer model, in diffusers format.
+
+ ## Try our Hugging Face demos:
+ Flux.1 Lite demo host on [🤗 flux.1-lite](https://huggingface.co/spaces/Freepik/flux.1-lite)
+
+ ## News🔥🔥🔥
+ * Oct.18, 2024. Alpha 8B checkpoint and comparison demo 🤗 (i.e. [Flux.1 Lite](https://huggingface.co/spaces/Freepik/flux.1-lite)) is publicly available on [HuggingFace Repo](https://huggingface.co/Freepik/flux.1-lite-8B-alpha).
+
+
+
+
+
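For quick reference, below is a minimal sketch of the usage pattern the updated README describes: load the model with diffusers' `FluxPipeline` and sample with the recommended `guidance_scale` of 3.5 and 22-30 inference steps. The repo id, prompt, seed, and exact step count are assumptions for illustration; the README's full snippet is elided by the hunks above and remains authoritative.

```python
# Minimal sketch (not the verbatim README snippet): Flux.1 Lite text-to-image with
# diffusers' FluxPipeline, using the settings recommended in this commit.
import torch
from diffusers import FluxPipeline

model_id = "Freepik/flux.1-lite-8B-alpha"  # assumed: the HF repo this commit belongs to
pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")

prompt = "A close-up photo of a green alien in a dark purple forest"  # placeholder prompt
seed = 11  # placeholder seed

with torch.inference_mode():
    image = pipe(
        prompt=prompt,
        generator=torch.Generator(device="cpu").manual_seed(seed),
        num_inference_steps=28,  # README recommends 22-30 steps
        guidance_scale=3.5,      # README recommends 3.5
        height=1024,
        width=1024,
    ).images[0]
image.save("output.png")
```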
sample_images/models_comparison.png ADDED

Git LFS Details
  • SHA256: 7479da7876229004b4eef834734eb18f64aa3874eb607dbf0ca0f69c0a865436
  • Pointer size: 132 Bytes
  • Size of remote file: 5.9 MB